PageShot
Server Details
Free screenshot and webpage capture API. Capture full-page screenshots, specific elements, or PDF snapshots of any URL. No API keys required.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
4 tools

batch_screenshot (Quality Grade: A)
Capture screenshots of multiple URLs in a single request. Returns base64-encoded images. Maximum 5 URLs.
| Name | Required | Description | Default |
|---|---|---|---|
| urls | Yes | Array of URLs to screenshot (max 5) | |
| width | No | Viewport width (default: 1280) | |
| format | No | Image format (default: png) | |
| height | No | Viewport height (default: 720) | |
| locale | No | BCP 47 locale tag for the browser context (e.g., "en-US") | |
| latitude | No | Geolocation latitude (-90 to 90) | |
| timezone | No | IANA timezone for the browser context (e.g., "America/New_York") | |
| dark_mode | No | Enable dark mode emulation | |
| full_page | No | Capture full page scroll height | |
| longitude | No | Geolocation longitude (-180 to 180) | |
| wait_until | No | Page load strategy (default: domcontentloaded). Use "networkidle" for SPAs. | |
| disable_javascript | No | Disable JavaScript execution on the page | |
| ignore_https_errors | No | Ignore HTTPS/TLS certificate errors | |
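The 5-URL cap and the viewport defaults above can be enforced client-side before a call is ever sent. A minimal Python sketch; the helper name and the subset of parameters it handles are illustrative, not part of the PageShot API:

```python
MAX_BATCH_URLS = 5  # documented cap for batch_screenshot

def batch_screenshot_args(urls, width=1280, height=720, full_page=False):
    """Build an argument dict for batch_screenshot, rejecting oversized batches."""
    urls = list(urls)
    if not urls:
        raise ValueError("at least one URL is required")
    if len(urls) > MAX_BATCH_URLS:
        raise ValueError(
            f"batch_screenshot accepts at most {MAX_BATCH_URLS} URLs, got {len(urls)}"
        )
    return {
        "urls": urls,
        "width": width,
        "height": height,
        "full_page": full_page,
        # "domcontentloaded" is the documented default; switch to
        # "networkidle" for single-page apps.
        "wait_until": "domcontentloaded",
    }
```

Splitting a longer URL list into chunks of five and issuing several calls is the natural workaround for the cap.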
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It helpfully discloses that the tool 'Returns base64-encoded images' (critical for response handling) and specifies the 'Maximum 5 URLs' constraint. However, it omits error handling behavior (what happens if one URL fails?), timeout characteristics, and the resource intensity inherent in a batch operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Exactly three sentences with zero waste: action definition, output format, and constraint limit. Front-loaded with the core purpose, making it easy to scan and comprehend instantly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 13 parameters with complete schema coverage, the description appropriately focuses on tool purpose, output format (base64), and scope limits rather than repeating parameter details. Missing output schema is partially mitigated by the explicit return value description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description does not add parameter-specific semantics (e.g., explaining 'wait_until' strategies or geolocation behavior), but the high schema coverage means the description doesn't need to compensate for undocumented parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Capture' with resource 'screenshots' and clearly identifies the scope as 'multiple URLs'. The phrase 'batch' in the name combined with 'multiple URLs' in the description effectively distinguishes it from sibling tool 'take_screenshot' (implied single) and from 'generate_pdf'/'render_html' (different output types).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies batch usage through 'multiple URLs' and states the constraint 'Maximum 5 URLs', but does not explicitly compare against sibling alternatives or state when to prefer this over repeated calls to 'take_screenshot'. Usage is implied rather than explicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_pdf (Quality Grade: A)
Convert a webpage URL or raw HTML to a PDF document. Supports page size, orientation, margins, and background printing. Returns PDF as base64.
| Name | Required | Description | Default |
|---|---|---|---|
| css | No | Custom CSS to inject before PDF generation | |
| url | No | URL to convert to PDF (provide url OR html, not both) | |
| html | No | Raw HTML to convert to PDF (provide html OR url, not both) | |
| locale | No | BCP 47 locale tag for the browser context (e.g., "en-US") | |
| cookies | No | Cookies to set before capture (e.g., [{"name": "session", "value": "abc123"}]) | |
| headers | No | Custom HTTP headers to send with the request (e.g., {"Authorization": "Bearer token"}) | |
| evaluate | No | JavaScript code to execute on the page before PDF generation (max 10KB). Runs in the browser context. | |
| latitude | No | Geolocation latitude (-90 to 90) | |
| timezone | No | IANA timezone for the browser context (e.g., "America/New_York") | |
| block_ads | No | Block ads and trackers | |
| dark_mode | No | Enable dark mode emulation | |
| landscape | No | Landscape orientation (default: false) | |
| longitude | No | Geolocation longitude (-180 to 180) | |
| pdf_scale | No | Content scale 0.1-2 (default: 1) | |
| pdf_format | No | Page size (default: A4) | |
| wait_until | No | Page load strategy (default: domcontentloaded). Use "networkidle" for SPAs. | |
| print_background | No | Include background colors/images (default: true) | |
| disable_javascript | No | Disable JavaScript execution on the page | |
| ignore_https_errors | No | Ignore HTTPS/TLS certificate errors | |
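The url/html mutual exclusivity and the 0.1-2 pdf_scale range documented above are easy to check before calling the tool. A hedged Python sketch; the helper name and defaults are illustrative, and only a subset of the 19 parameters is shown:

```python
def generate_pdf_args(url=None, html=None, landscape=False,
                      pdf_format="A4", pdf_scale=1.0, print_background=True):
    """Build a generate_pdf argument dict; url and html are mutually exclusive."""
    if (url is None) == (html is None):
        raise ValueError("provide exactly one of url or html, not both")
    if not 0.1 <= pdf_scale <= 2:
        raise ValueError("pdf_scale must be between 0.1 and 2")
    args = {
        "landscape": landscape,        # default False per the schema
        "pdf_format": pdf_format,      # page size, default A4
        "pdf_scale": pdf_scale,        # content scale 0.1-2, default 1
        "print_background": print_background,
    }
    if url is not None:
        args["url"] = url
    else:
        args["html"] = html
    return args
```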
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses the critical return format ('Returns PDF as base64'). However, it omits behavioral details like execution time expectations, browser context implications, or error handling for invalid URLs despite having 19 complex parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with no filler, front-loaded with the primary action. One point deducted for the inaccurate mention of 'margins', which creates slight confusion since no such parameter exists in the schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a 19-parameter tool with no output schema, as it compensates by specifying the return format. However, given the complexity (cookies, auth headers, JavaScript evaluation), it should mention security implications or resource intensity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds value by grouping related capabilities ('page size, orientation...'), though it inaccurately mentions 'margins' which does not appear to be a parameter in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Convert') and resources ('webpage URL or raw HTML to a PDF document'), distinguishing it from screenshot siblings. It identifies the core transformation accurately.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the output format (PDF) implicitly distinguishes it from screenshot siblings, the description provides no explicit guidance on when to choose PDF generation over screenshots or how to handle the url/html mutual exclusivity (though this is covered in the schema).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
render_html (Quality Grade: A)
Render raw HTML/CSS to an image. Perfect for generating OG images, social cards, email previews, and dynamic content from templates. No URL needed.
| Name | Required | Description | Default |
|---|---|---|---|
| css | No | Additional CSS to inject | |
| html | Yes | Raw HTML content to render (max 2MB) | |
| width | No | Viewport width (default: 1280) | |
| format | No | Image format (default: png) | |
| height | No | Viewport height (default: 720) | |
| locale | No | BCP 47 locale tag for the browser context (e.g., "en-US") | |
| quality | No | Image quality 1-100 (default: 80) | |
| evaluate | No | JavaScript code to execute on the page before capture (max 10KB). Runs in the browser context. | |
| selector | No | CSS selector to capture specific element | |
| timezone | No | IANA timezone for the browser context (e.g., "America/New_York") | |
| dark_mode | No | Enable dark mode emulation | |
| full_page | No | Capture full page scroll height | |
| wait_until | No | Content load strategy (default: domcontentloaded). Use "networkidle" for content with external resources. | |
| device_scale | No | Device scale factor (default: 1) | |
| disable_javascript | No | Disable JavaScript execution on the page | |
| ignore_https_errors | No | Ignore HTTPS/TLS certificate errors | |
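The 2 MB HTML cap and the 1-100 quality range above can also be validated locally. A minimal sketch; the helper name is illustrative, and the 1200x630 defaults are an assumption chosen to match the OG-image use case the description mentions, not PageShot defaults:

```python
MAX_HTML_BYTES = 2 * 1024 * 1024  # documented 2 MB cap on raw HTML

def render_html_args(html, css=None, width=1200, height=630,
                     image_format="png", quality=80):
    """Build a render_html argument dict; defaults here target 1200x630 OG images."""
    if len(html.encode("utf-8")) > MAX_HTML_BYTES:
        raise ValueError("html exceeds the 2 MB limit")
    if not 1 <= quality <= 100:
        raise ValueError("quality must be between 1 and 100")
    args = {
        "html": html,
        "width": width,
        "height": height,
        "format": image_format,
        "quality": quality,
    }
    if css:
        args["css"] = css  # additional CSS injected before rendering
    return args
```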
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It establishes that raw HTML is processed locally without a URL, but fails to mention critical behavioral traits evident from the schema: JavaScript execution capabilities (the 'evaluate' parameter), headless browser context, external resource loading behavior, or the format of the returned image data (binary vs base64).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficiently structured sentences. The first states core functionality immediately; the second provides use cases and the key differentiator ('No URL needed'). There is no redundant or filler text—every word serves to clarify scope or distinguish from alternatives.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high complexity (16 parameters including JavaScript execution, timezone localization, and viewport controls) and lack of output schema, the description is minimally adequate. It covers the primary use case but omits important safety context regarding arbitrary code execution via the 'evaluate' parameter and does not indicate the return value format (image bytes).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds conceptual context by mentioning 'templates' (relating to the html parameter) and 'CSS' (relating to the css parameter), but does not provide additional semantic guidance on complex parameter interactions like how 'evaluate' interacts with 'disable_javascript', or viewport sizing recommendations beyond the schema's defaults.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb ('Render'), clear input ('raw HTML/CSS'), and output format ('to an image'). The 'No URL needed' phrase effectively distinguishes it from sibling screenshot tools that likely require URLs, while listing specific use cases (OG images, social cards, email previews) clarifies the target resource types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear contextual signals for when to use the tool (generating OG images, social cards, dynamic templates) and implies the key constraint (no URL needed) which differentiates it from take_screenshot. However, it lacks explicit 'when not to use' guidance or named alternatives among the siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
take_screenshot (Quality Grade: B)
Take a screenshot of any webpage. Returns a base64-encoded image.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The URL to screenshot | |
| delay | No | Delay in ms before capture | |
| width | No | Viewport width (default: 1280) | |
| clip_x | No | Clip region X offset in pixels | |
| clip_y | No | Clip region Y offset in pixels | |
| format | No | Image format (default: png) | |
| height | No | Viewport height (default: 720) | |
| locale | No | BCP 47 locale tag for the browser context (e.g., "en-US", "de-DE", "ja-JP") | |
| cookies | No | Cookies to set before capture (e.g., [{"name": "session", "value": "abc123"}]) | |
| headers | No | Custom HTTP headers to send with the request (e.g., {"Authorization": "Bearer token"}) | |
| quality | No | Image quality 1-100 (default: 80) | |
| evaluate | No | JavaScript code to execute on the page before capture (max 10KB). Runs in the browser context via page.evaluate(). Use to dismiss modals, click buttons, or set UI state. | |
| latitude | No | Geolocation latitude (-90 to 90). Must be used with longitude. | |
| selector | No | CSS selector to capture specific element | |
| timezone | No | IANA timezone for the browser context (e.g., "America/New_York", "Europe/Berlin", "Asia/Tokyo") | |
| dark_mode | No | Enable dark mode emulation | |
| full_page | No | Capture full page scroll height | |
| longitude | No | Geolocation longitude (-180 to 180). Must be used with latitude. | |
| clip_width | No | Clip region width in pixels | |
| wait_until | No | Page load strategy: "commit" (fastest, response received), "domcontentloaded" (default, DOM ready), "load" (all resources loaded), "networkidle" (no network activity for 500ms — best for SPAs and lazy-loaded content) | |
| clip_height | No | Clip region height in pixels | |
| hide_banners | No | Dismiss cookie banners, consent popups, and overlays before capture | |
| resize_width | No | Resize output image to this width (maintains aspect ratio) | |
| resize_height | No | Resize output image to this height (maintains aspect ratio) | |
| disable_javascript | No | Disable JavaScript execution on the page. Useful for security research, capturing static HTML, or avoiding JS-driven redirects. | |
| ignore_https_errors | No | Ignore HTTPS/TLS certificate errors (useful for dev/staging environments with self-signed certs) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully discloses the return format (base64-encoded image), which is critical given the lack of an output schema. However, it omits operational details like browser engine used, timeout behaviors, rate limiting, or whether the operation has side effects (network egress).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste. It is appropriately front-loaded with the core action and captures the essential return value information without verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 26 parameters with complex capabilities (JavaScript evaluation, geolocation, authentication cookies, clipping), the description is minimally viable. It discloses the return format (mitigating the lack of output schema), but fails to mention browser context, concurrent limitations, or image size constraints that would help an agent invoke this rich tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds minimal parameter context beyond the schema, only implying the URL parameter via 'any webpage.' It does not clarify relationships between parameters (e.g., latitude/longitude dependency) or provide usage examples for complex parameters like 'evaluate' or 'cookies.'
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the specific action (take a screenshot) and target (any webpage), and mentions the return format. While it implies single-page capture (distinguishing from 'batch_screenshot' by name), it does not explicitly differentiate when to use this vs. the batch alternative or vs. generate_pdf for documentation purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus sibling alternatives like batch_screenshot (for multiple URLs) or generate_pdf (for document output). It also omits prerequisites like the URL being publicly accessible or handling authentication.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
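The manifest can also be generated programmatically. A minimal sketch that writes it under a web root; only the JSON shape comes from the instructions above, while the function name and path handling are illustrative:

```python
import json
from pathlib import Path

def write_glama_manifest(webroot, email):
    """Write <webroot>/.well-known/glama.json for Glama ownership verification.

    The email must match the one on your Glama account.
    """
    manifest = {
        "$schema": "https://glama.ai/mcp/schemas/connector.json",
        "maintainers": [{"email": email}],
    }
    path = Path(webroot) / ".well-known" / "glama.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(manifest, indent=2) + "\n")
    return path
```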
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.