Ownership verified

Server Details

Screenshot any website with one API call: PNG, JPEG, WebP, or PDF. Custom viewports, device emulation, ad blocking, dark mode, and smart caching.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

8 tools
batch_screenshots

Create a batch screenshot job for multiple URLs (1-50). Returns immediately with a job ID. Use get_batch_status to poll for results. All URLs share the same screenshot options. Each URL consumes one credit; failed URLs get credits rolled back.

Parameters

urls (required): Array of URLs to capture (1-50)
delay (optional): Milliseconds to wait after load (default: 0)
width (optional): Viewport width in pixels (default: 1280)
device (optional): Device preset for emulation
format (optional): Output format (default: png)
height (optional): Viewport height in pixels (default: 800)
quality (optional): Image quality (default: 90)
block_ads (optional): Block ads (default: true)
dark_mode (optional): Enable dark mode (default: false)
full_page (optional): Capture entire scrollable page (default: false)
user_agent (optional): Custom user agent
click_selector (optional): CSS selector to click
hide_selectors (optional): CSS selectors to hide
block_cookie_banners (optional): Remove cookie banners (default: true)
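
To make the asynchronous workflow concrete, the following is a minimal sketch of submitting a batch and then polling it, assuming the standard MCP TypeScript SDK over Streamable HTTP. The server URL, client name, and the placeholder job ID are assumptions, and since this page does not document the result shape, the sketch simply logs the responses.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: substitute the server URL shown on this page.
const SERVER_URL = "https://example-screenshot-server.invalid/mcp";

async function main() {
  const client = new Client({ name: "screenshot-demo", version: "1.0.0" });
  await client.connect(new StreamableHTTPClientTransport(new URL(SERVER_URL)));

  // Submit a batch job; all URLs share the same options, and each URL consumes one credit.
  const submitted = await client.callTool({
    name: "batch_screenshots",
    arguments: {
      urls: ["https://example.com", "https://example.org"],
      format: "png",     // default: png
      width: 1280,       // default: 1280
      full_page: false,  // default: false
      block_ads: true,   // default: true
    },
  });
  // The response carries the job ID; its exact shape is not documented here,
  // so inspect the logged content and extract the ID accordingly.
  console.log(JSON.stringify(submitted, null, 2));

  // Poll get_batch_status with that job ID until it reports 'completed' or 'failed'.
  const status = await client.callTool({
    name: "get_batch_status",
    arguments: { job_id: "<job-id-from-batch_screenshots>" }, // placeholder value
  });
  console.log(JSON.stringify(status, null, 2));

  await client.close();
}

main().catch(console.error);
```
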
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it explains the asynchronous nature (returns immediately with job ID), polling requirement, credit consumption rules, and that all URLs share options. Annotations provide readOnlyHint=false and openWorldHint=true, but the description complements this with practical implementation details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: first states purpose and constraints, second explains the polling workflow, third covers credit rules. Each sentence earns its place by providing essential information not obvious from other fields.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tool with 14 parameters and no output schema, the description provides good context about the asynchronous workflow, credit system, and relationship to other tools. It doesn't explain return values (no output schema), but covers the most important behavioral aspects adequately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all 14 parameters thoroughly. The description adds minimal parameter semantics beyond the schema (e.g., 'All URLs share the same screenshot options'), but doesn't provide additional syntax or format details. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Create a batch screenshot job'), resource ('multiple URLs'), and scope ('1-50'). It distinguishes from sibling tools like 'take_screenshot' (single URL) and 'check_screenshot_cache' (cached results).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use this tool ('for multiple URLs') and when to use an alternative ('Use get_batch_status to poll for results'). It also provides context about credit consumption and rollback for failed URLs, which helps guide usage decisions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_screenshot_cache
Read-only

Check if a screenshot is already cached without capturing a new one. Does not count against your quota.

Parameters

url (required): URL to check cache status for (must start with http:// or https://)
format (optional): Output format to check cache for (default: png)
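
As a usage sketch, a quota-free cache lookup might be wrapped in a small helper that takes an already-connected MCP client (set up as in the batch_screenshots example above); the URL is a placeholder.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Quota-free cache lookup; if nothing is cached, fall back to take_screenshot,
// which does consume a credit.
export async function checkCache(client: Client) {
  return client.callTool({
    name: "check_screenshot_cache",
    arguments: {
      url: "https://example.com", // must start with http:// or https://
      format: "png",              // default: png
    },
  });
}
```
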
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true; description reinforces this with 'without capturing a new one.' Critically adds quota behavior ('Does not count against your quota') not present in annotations, which is essential cost/rate-limit context for agent decision-making.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two precisely targeted sentences with zero waste. Front-loaded with core purpose (cache check), second sentence delivers critical quota information. Every word earns its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a 2-parameter, non-destructive utility. Covers existence check behavior and cost implications. Minor gap: could hint at return value (boolean/cache metadata) given no output schema exists, but adequate for tool complexity.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed descriptions for both 'url' (format requirements) and 'format' (default noted). Description adds no parameter-specific semantics, but with complete schema documentation, baseline 3 is appropriate.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: 'Check if a screenshot is already cached' provides an exact verb and resource. Explicitly distinguishes from sibling tool 'take_screenshot' by stating 'without capturing a new one,' clarifying that this is a status check rather than a generation operation.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear cost context ('Does not count against your quota') implying this should be used when avoiding quota consumption is desired. However, lacks explicit workflow guidance such as 'Use this before take_screenshot to avoid redundant captures' or conditions when to prefer over the sibling.

extract_content
Read-only

Extract content from a web page. Returns structured data based on the extraction type. Supports: markdown (readable content), text (plain text), html (raw HTML), article (structured with title/author/excerpt), links (all page links), metadata (OG tags, title, description).

Parameters

url (required): URL to extract content from (must start with http:// or https://)
type (optional): Extraction type (default: markdown)
delay (optional): Milliseconds to wait after page load (default: 0)
selector (optional): CSS selector to scope extraction to a specific element
block_ads (optional): Block advertisements and trackers (default: true)
max_length (optional): Maximum content length in characters (default: 100000)
block_cookie_banners (optional): Remove cookie consent banners (default: true)
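
A sketch of a Markdown extraction scoped to an article element, under the same assumptions (already-connected MCP client passed in; the URL, selector, and max_length are placeholders).

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Extract readable Markdown from one element of the page.
export async function extractArticleMarkdown(client: Client) {
  return client.callTool({
    name: "extract_content",
    arguments: {
      url: "https://example.com/post", // placeholder URL
      type: "markdown",                // markdown | text | html | article | links | metadata
      selector: "article",             // scope extraction to a specific element
      max_length: 20000,               // default: 100000
    },
  });
}
```
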
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=true and openWorldHint=true, which the description does not contradict. The description adds value by specifying the return format ('structured data based on the extraction type') and listing extraction types, but it lacks details on behavioral traits like rate limits, authentication needs, or error handling. With annotations covering safety and scope, a 3 is appropriate as the description provides some additional context without rich behavioral disclosure.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by a concise list of extraction types. Every sentence earns its place by clarifying functionality without redundancy, making it efficient and well-structured for quick understanding.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity with 7 parameters and no output schema, the description is reasonably complete by stating the return format and listing extraction types. However, it could improve by briefly mentioning the default behavior or common use cases to aid the agent further. Annotations provide safety and scope hints, but the description lacks details on output structure or error scenarios.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 7 parameters. The description adds minimal semantic value by mentioning the 'type' parameter's options and implying the 'url' parameter's purpose, but it does not provide additional meaning beyond what the schema already specifies. Baseline 3 is correct when the schema handles parameter documentation effectively.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('extract content from a web page') and resource ('web page'), and distinguishes it from sibling tools by focusing on content extraction rather than screenshot-related operations or administrative functions. It explicitly lists the supported extraction types, making the scope unambiguous.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool by listing the supported extraction types (e.g., 'markdown', 'article', 'links'), which helps differentiate use cases. However, it does not explicitly state when not to use it or name alternatives among sibling tools, such as when visual capture is needed versus content extraction.

get_batch_status
Read-only

Get the status of a batch screenshot job. Poll this until status is 'completed' or 'failed'. Completed items include presigned download URLs valid for 24 hours.

Parameters

job_id (required): The batch job ID returned by batch_screenshots
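
A hedged polling helper built on the same connected-client assumption; because this page does not document the response shape, the terminal-status check below is only a placeholder for whatever parsing the real response needs.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Poll a batch job until it reports 'completed' or 'failed'.
export async function waitForBatch(client: Client, jobId: string) {
  for (;;) {
    const result = await client.callTool({
      name: "get_batch_status",
      arguments: { job_id: jobId },
    });
    // Placeholder check: substitute real parsing of the returned status field.
    const text = JSON.stringify(result);
    if (text.includes("completed") || text.includes("failed")) {
      return result; // completed items include presigned URLs valid for 24 hours
    }
    await new Promise((resolve) => setTimeout(resolve, 5000)); // wait 5 s between polls
  }
}
```
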
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and openWorldHint=false, indicating a safe, non-destructive read operation. The description adds valuable behavioral context beyond this: it specifies that polling is expected, describes terminal statuses ('completed' or 'failed'), and notes that completed items include 'presigned download URLs valid for 24 hours'. This enhances transparency about the tool's behavior and output characteristics.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and well-structured in two sentences. The first sentence states the core purpose, and the second provides critical usage and behavioral details. Every sentence earns its place with no wasted words, making it easy to parse and front-loaded with essential information.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a status-checking operation with polling behavior), the description is largely complete. It covers the purpose, usage, and key behavioral traits. However, without an output schema, it doesn't detail the full structure of the response (e.g., specific status fields or error formats), leaving a minor gap. The annotations and schema coverage help compensate, but some context about return values could enhance completeness.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'job_id' parameter fully documented. The description does not add any additional meaning or semantics beyond what the schema provides (e.g., it doesn't clarify format or constraints). Thus, it meets the baseline of 3, as the schema carries the full burden of parameter documentation.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get the status') and resource ('batch screenshot job'), distinguishing it from siblings like 'batch_screenshots' (which creates jobs) and 'check_screenshot_cache' (which checks cached results). It precisely defines the tool's function without being vague or tautological.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides usage guidance: "Poll this until status is 'completed' or 'failed'". This indicates when to use the tool (for monitoring job progress) and implies alternatives (e.g., not for initial job creation, which is handled by 'batch_screenshots'). It effectively guides the agent on the tool's role in a workflow.

get_usage
Read-only

Get current month's screenshot usage statistics including screenshots used, limit, and remaining quota.

Parameters

month (optional): Month to query in YYYY-MM format (default: current month)
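
A small helper for querying usage, assuming the same connected client; the month string is only a format example.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Fetch usage statistics; omit `month` to get the current month.
export async function getUsage(client: Client, month?: string) {
  return client.callTool({
    name: "get_usage",
    arguments: month ? { month } : {}, // month in YYYY-MM format, e.g. "2024-05"
  });
}
```
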
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare this is read-only (readOnlyHint=true). The description adds valuable context about the specific data returned (screenshots used, limit, remaining quota) which compensates for the absence of an output schema. No contradictions with annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single efficient sentence with no wasted words. Information is front-loaded (action + resource + specific metrics) and appropriately sized for a simple read operation.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple single-parameter tool. The description proactively lists the expected return values (used, limit, quota) which compensates for the lack of a formal output schema. Could explicitly mention the month parameter allows historical queries.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage with the 'month' parameter fully documented. Description reinforces the default behavior ('current month's') but adds minimal semantic detail beyond what the schema already provides, meeting the baseline for high coverage.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Get') and resource ('screenshot usage statistics') with specific output fields listed (used, limit, quota). However, it frames the tool as specifically for the 'current month' despite the parameter allowing any month, and does not explicitly differentiate from siblings like check_screenshot_cache or take_screenshot.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this versus sibling tools (e.g., whether to check usage before taking screenshots). No 'when not to use' or alternative suggestions provided.

manage_webhooks

Create, list, or delete webhooks for event notifications. Events: screenshot.completed (batch job done), quota.warning (80% used), quota.exceeded (100% used). Max 5 webhooks per account. Payloads are signed with HMAC-SHA256.
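
The description states only that payloads are signed with HMAC-SHA256; the signature header name and encoding are not documented here. Purely as an illustration of how such signatures are commonly verified in Node.js, the sketch below assumes a hypothetical signature header carrying a hex-encoded digest and a shared secret you hold.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Hypothetical verification: the header name (e.g. "X-Webhook-Signature") and
// hex encoding are assumptions, not documented on this page.
export function verifyWebhookSignature(
  rawBody: string,
  signatureHeader: string, // value of the assumed signature header
  secret: string,          // shared secret for this webhook
): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader);
  return a.length === b.length && timingSafeEqual(a, b);
}
```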

Parameters

url (optional): Webhook endpoint URL (required for create)
action (required): Action to perform
events (optional): Events to subscribe to (required for create)
webhook_id (optional): Webhook ID (required for delete and test)
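
A sketch of registering a webhook, assuming a connected MCP client as in the earlier examples; the endpoint URL is a placeholder and the action values mirror the description above.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Register a webhook for batch completion and quota warnings (max 5 per account).
export async function createWebhook(client: Client) {
  return client.callTool({
    name: "manage_webhooks",
    arguments: {
      action: "create",                                  // also: list, delete
      url: "https://example.com/hooks/screenshots",      // placeholder endpoint
      events: ["screenshot.completed", "quota.warning"], // see the event list above
    },
  });
}
```
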
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it specifies the three event types, the 5-webhook limit per account, and HMAC-SHA256 payload signing. While annotations indicate this is not read-only and not open-world, the description provides concrete operational details that help the agent understand what this tool actually does.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with three information-dense sentences. The first sentence states the core functionality, the second lists specific events, and the third provides important constraints. Every sentence earns its place with zero wasted words.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no output schema but good annotations and comprehensive input schema, the description provides solid context about events, limits, and security. It could be more complete by mentioning response formats or error conditions, but it covers the essential operational aspects well given the available structured data.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all parameters thoroughly. The description doesn't add parameter-specific information beyond what's in the schema. The baseline score of 3 is appropriate since the schema does the heavy lifting for parameter documentation.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific actions (create, list, delete) on the resource (webhooks) with explicit purpose (event notifications). It distinguishes from sibling tools by focusing on webhook management rather than screenshot operations, usage checks, or content extraction.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool (for event notifications like screenshot completion and quota warnings) and mentions the max 5 webhooks per account constraint. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools.

sign_screenshot_url
Read-only

Generate a signed URL for a screenshot that can be used without an API key. Useful for embedding screenshots in emails, documents, or sharing with third parties. Signing is free, rendering the URL consumes one credit. URLs expire after the specified duration.

Parameters

url (required): URL to capture (must start with http:// or https://)
delay (optional): Milliseconds to wait after load (default: 0)
width (optional): Viewport width in pixels (default: 1280)
device (optional): Device preset for emulation
format (optional): Output format (default: png)
height (optional): Viewport height in pixels (default: 800)
quality (optional): Image quality (default: 90)
block_ads (optional): Block ads (default: true)
dark_mode (optional): Enable dark mode (default: false)
full_page (optional): Capture entire scrollable page (default: false)
expires_in (optional): URL validity in seconds, 60-2592000 (default: 86400 = 1 day)
user_agent (optional): Custom user agent
click_selector (optional): CSS selector to click
hide_selectors (optional): CSS selectors to hide
block_cookie_banners (optional): Remove cookie banners (default: true)
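
A sketch of generating a short-lived signed URL, again assuming a connected MCP client; the one-hour expiry and the example URL are arbitrary choices within the documented 60-2592000 second range.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Produce a shareable, key-free screenshot URL. Signing is free; each render
// of the returned URL consumes one credit.
export async function signScreenshotUrl(client: Client) {
  return client.callTool({
    name: "sign_screenshot_url",
    arguments: {
      url: "https://example.com",
      format: "png",     // default: png
      expires_in: 3600,  // seconds, 60-2592000 (default: 86400 = 1 day)
      full_page: true,
    },
  });
}
```
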
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it discloses cost implications ('rendering the URL consumes one credit'), URL expiration behavior ('URLs expire after the specified duration'), and the free signing aspect. Annotations already indicate readOnlyHint=true (safe operation) and openWorldHint=false (deterministic), but the description enriches this with practical constraints. No contradiction with annotations exists.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise and front-loaded: three sentences with zero waste. The first sentence states the core purpose, the second provides usage context, and the third adds critical behavioral details (cost and expiration). Every sentence earns its place by adding distinct value.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with rich annotations (readOnlyHint, openWorldHint) and full schema coverage, the description provides excellent contextual completeness. It covers purpose, usage scenarios, cost model, and expiration behavior. The main gap is lack of output schema, but the description implies the tool returns a signed URL, which is reasonably inferred. A 5 would require explicit output details.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema fully documents all 15 parameters. The description doesn't add parameter-specific details beyond implying 'expires_in' controls URL validity. It mentions 'specified duration' generically but doesn't explain parameter interactions or defaults. Baseline 3 is appropriate since the schema carries the parameter documentation burden.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Generate') and resource ('signed URL for a screenshot'), distinguishing it from siblings like 'take_screenshot' (which likely requires an API key) and 'check_screenshot_cache' (which checks existing screenshots). It explicitly mentions the key differentiator: 'can be used without an API key.'

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Useful for embedding screenshots in emails, documents, or sharing with third parties.' It also distinguishes from alternatives by noting it generates signed URLs for external use, unlike 'take_screenshot' which might return raw image data. The cost implication ('Signing is free, rendering the URL consumes one credit') further clarifies usage context.

take_screenshot
Read-only

Capture a screenshot of any website. Returns the image as PNG, JPEG, WebP, or PDF. Supports device emulation (iPhone, Pixel, iPad), dark mode, ad blocking, cookie banner removal, full-page capture, and custom viewports.

Parameters

url (required): URL to capture (must start with http:// or https://)
delay (optional): Milliseconds to wait after page load (default: 0)
width (optional): Viewport width in pixels (default: 1280)
device (optional): Device preset for mobile/tablet emulation
format (optional): Output format (default: png)
height (optional): Viewport height in pixels (default: 800)
quality (optional): Image quality for JPEG/WebP, 1-100 (default: 90)
block_ads (optional): Block advertisements and trackers (default: true)
dark_mode (optional): Enable dark mode CSS emulation (default: false)
full_page (optional): Capture entire scrollable page (default: false)
click_selector (optional): CSS selector to click before capture
hide_selectors (optional): Comma-separated CSS selectors to hide before capture
block_cookie_banners (optional): Remove cookie consent banners (default: true)
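
A sketch of a single capture using several of the documented options, assuming a connected MCP client; the device preset string is an assumption, since the exact preset names are not listed on this page.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Single capture with device emulation, dark mode, and a short settle delay.
export async function takeScreenshot(client: Client) {
  return client.callTool({
    name: "take_screenshot",
    arguments: {
      url: "https://example.com",
      device: "iPhone",  // assumed preset name; presets cover iPhone, Pixel, iPad
      dark_mode: true,   // default: false
      full_page: true,   // default: false
      delay: 500,        // ms to wait after page load (default: 0)
      format: "jpeg",    // png | jpeg | webp | pdf
      quality: 85,       // JPEG/WebP quality 1-100 (default: 90)
    },
  });
}
```
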
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond the annotations (readOnly/openWorld), the description adds valuable behavioral context: output formats (PNG/JPEG/WebP/PDF), device emulation specifics, and capture modalities (full-page, dark mode). It does not mention rate limits or caching behavior, but it substantially augments the safety profile given in the annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-formed sentences with zero waste: purpose declaration, output specification, and capability summary. Front-loaded with clear intent; every clause earns its place by conveying distinct information not redundant with schema or annotations.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Compensates for the missing output schema by specifying return formats (PNG/JPEG/WebP/PDF). Given 13 well-documented parameters, the description covers the key capabilities adequately. Minor gap: it could acknowledge the external network dependency implied by openWorldHint.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, description adds semantic translation by grouping parameters into functional features: 'device emulation (iPhone, Pixel, iPad)' maps enum values to concepts, and 'ad blocking/cookie banner removal' clarifies intent of boolean flags beyond their schema descriptions.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb 'Capture' + resource 'screenshot of any website'. Implicitly distinguishes from sibling 'check_screenshot_cache' by focusing on generation/capture rather than retrieval.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

States what the tool does but provides no explicit guidance on when to use 'take_screenshot' versus 'check_screenshot_cache' for cached results, and does not mention cost or rate-limit implications. Usage is implied but not directive.
