Browsershot MCP
Server Quality Checklist
- Disambiguation: 5/5
Each tool has a clearly distinct purpose: page_info extracts metadata without rendering, screenshot captures a single viewport, and screenshot_compare generates multiple viewports for responsive testing. No functional overlap or ambiguity exists.
- Naming Consistency: 3/5
Mixed structural patterns: page_info follows noun_noun (describing output data), while screenshot and screenshot_compare use noun or noun_verb patterns (describing actions). All use snake_case, but the set lacks the semantic consistency of a uniform verb_noun convention.
- Tool Count: 4/5
Three tools is lean but appropriate for a focused screenshot utility. The set covers auditing (page_info), basic capture (screenshot), and responsive validation (screenshot_compare) without unnecessary proliferation.
- Completeness: 4/5
The core screenshot workflow is well covered, including metadata inspection and multi-device responsive testing. Minor gaps might include PDF generation or granular viewport control (e.g., a single mobile screenshot without the full comparison set), but the essential surface supports complete screenshot tasks.
Average 3.9/5 across 3 of 3 tools scored.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
Add a LICENSE file by following GitHub's guide.
MCP servers without a LICENSE cannot be installed.
Latest release: v0.1.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
- This server provides 3 tools.
No known security issues or vulnerabilities reported.
Add related servers to improve discoverability.
Tool Scores
screenshot
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must carry full behavioral disclosure. It successfully states the return value ('image file path'), but omits other behavioral traits like error handling (timeouts, invalid URLs), browser engine details (though 'Puppeteer' appears in schema), or idempotency. It compensates minimally by mentioning the Read tool integration.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences that are all essential: purpose/return, use cases, and output consumption. Front-loaded with the core action, zero redundancy, and appropriately dense for the tool complexity. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 9-parameter browser automation tool with no output schema and no annotations, the description covers the critical missing output information (file path return) and basic usage contexts. However, it lacks discussion of failure modes, network requirements, or JavaScript execution behavior that would be expected for a complex browser tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds no explicit parameter guidance, though the use cases ('responsive layouts') implicitly guide toward the device/width/height parameters. No additional syntax or format details are provided beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a specific action ('Take a browser screenshot') and target ('URL'), and distinguishes from sibling page_info by emphasizing visual verification ('visually verify frontend changes', 'debug CSS issues'). However, it does not explicitly contrast with screenshot_compare, which could lead to confusion about when to use simple capture versus visual diffing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear positive use cases ('visually verify frontend changes, check responsive layouts, or debug CSS issues') and mentions the Read tool for consuming output. Lacks explicit negative guidance ('do not use when...') or selection logic for when to prefer screenshot_compare over this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
page_info
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Compensates by listing specific metadata fields returned (title, viewport, scroll height, etc.) since no output schema exists. However, omits behavioral details like page loading strategy, JavaScript execution, or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first front-loaded with capability list, second provides usage context. Every word earns its place; appropriately sized for tool complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given single parameter and lack of output schema, description adequately compensates by enumerating return metadata fields. Addresses sibling tool relationship. Minor gap: no mention of error handling or authentication requirements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% ('URL to inspect'), establishing baseline 3. Description implies the URL should be a web page (given 'page metadata' context) but adds minimal syntax or format guidance beyond the schema description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Get') and resource ('page metadata') with explicit enumerated return values (title, viewport, fonts, etc.). Explicitly distinguishes from sibling 'screenshot' tool by contrasting with 'without a full screenshot'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
States use case ('auditing a page') and implicitly positions against screenshot alternative ('without a full screenshot'). Lacks explicit 'when not to use' guidance, but context is clear enough for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
screenshot_compare
- Behavior: 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of disclosure. It successfully explains the output format ('Returns file paths') and reveals the default behavioral configuration (specific pixel widths). It could improve by clarifying if this creates local files or temporary resources, but it covers the essential return contract missing from the output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of exactly two sentences with zero waste: the first establishes the action and purpose, while the second covers return values and defaults. Every word earns its place and critical information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of both annotations and an output schema, the description adequately compensates by explaining what the tool returns ('file paths'). With 100% schema coverage for inputs, the description provides sufficient context for invocation, though it could briefly mention error handling or file persistence behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage (baseline 3), the description adds valuable semantic meaning by categorizing the default widths as 'mobile (375px), tablet (768px), and desktop (1440px)', which helps the agent understand the intent behind the raw numeric defaults.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Take screenshots'), the unique approach ('at multiple viewport widths'), and the goal ('to compare responsive layouts'). This effectively distinguishes it from the sibling 'screenshot' tool which likely captures a single viewport.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the use case ('to compare responsive layouts') but does not explicitly state when to use this tool versus the sibling 'screenshot' tool, nor does it provide explicit when-not-to-use guidance or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
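The recurring deduction across all three tools is Usage Guidelines: none of the descriptions spells out when to prefer a sibling tool. As a minimal sketch of the "use X instead of Y when Z" pattern the rubric rewards, here is a hypothetical screenshot_compare definition written as a Python dict in the MCP tool-definition shape. The description text and input schema are illustrative assumptions, not the server's actual definition.

```python
# Hypothetical, illustrative rewrite -- not the server's actual definition.
screenshot_compare = {
    "name": "screenshot_compare",
    "description": (
        "Take screenshots of a URL at multiple viewport widths to compare "
        "responsive layouts. Returns file paths for each capture; open them "
        "with the Read tool. Default widths cover mobile (375px), tablet "
        "(768px), and desktop (1440px). Use this tool to check a layout "
        "across breakpoints; use screenshot instead when a single viewport "
        "is enough, and page_info when you only need metadata without "
        "rendering an image."
    ),
    # The real input schema is not shown on this page; this one is assumed.
    "inputSchema": {
        "type": "object",
        "properties": {
            "url": {"type": "string", "description": "URL to capture"},
            "widths": {
                "type": "array",
                "items": {"type": "integer"},
                "description": "Viewport widths in pixels",
                "default": [375, 768, 1440],
            },
        },
        "required": ["url"],
    },
}
```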
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
```json
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}
```
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
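A minimal sketch of how those published weights compose, assuming straightforward linear weighting (Glama's internal rounding or calibration may differ):

```python
# Per-dimension weights for the Tool Definition Quality Score (TDQS).
TDQS_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tdqs(scores: dict) -> float:
    """Weighted 1-5 definition-quality score for a single tool."""
    return sum(TDQS_WEIGHTS[dim] * scores[dim] for dim in TDQS_WEIGHTS)

def overall_quality(tool_scores: list, coherence_scores: list) -> float:
    per_tool = [tdqs(s) for s in tool_scores]
    # Server-level definition quality: 60% mean + 40% minimum, so one
    # poorly described tool drags the whole server down.
    definition = 0.6 * sum(per_tool) / len(per_tool) + 0.4 * min(per_tool)
    # Server coherence: four dimensions weighted equally.
    coherence = sum(coherence_scores) / len(coherence_scores)
    # Overall score: 70% definition quality, 30% coherence.
    return 0.7 * definition + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score to its letter tier."""
    for cutoff, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return grade
    return "F"
```

Applied to the per-dimension scores published above, this yields per-tool scores of roughly 3.45 (screenshot), 4.0 (page_info), and 4.15 (screenshot_compare); their mean of about 3.9 matches the checklist summary, and the four coherence scores (5, 3, 4, 4) average 4.0.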
MCP directory API
We provide all the information about MCP servers via our MCP API.
```
curl -X GET 'https://glama.ai/api/mcp/v1/servers/kjaiswal/browsershot-mcp'
```
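The same lookup from Python, using only the standard library; the response schema is not documented on this page, so the sketch simply pretty-prints whatever the endpoint returns.

```python
import json
from urllib.request import urlopen

# Same endpoint as the curl example above.
URL = "https://glama.ai/api/mcp/v1/servers/kjaiswal/browsershot-mcp"

with urlopen(URL) as resp:
    server = json.load(resp)

# Inspect the returned server metadata.
print(json.dumps(server, indent=2))
```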
If you have feedback or need assistance with the MCP directory API, please join our Discord server.