OpenGraph.io MCP
Server Details
MCP server for the OpenGraph.io API -- extract OG metadata, capture screenshots, scrape pages, query sites with AI, and generate branded images with iterative refinement.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 9 of 9 tools scored. Lowest: 2.8/5.
Most tools have distinct purposes: generateImage creates images, exportImageAsset exports them, iterateImage refines them, and inspectImageSession inspects sessions. However, the four 'getOg' tools (getOgData, getOgExtract, getOgQuery, getOgScrapeData) overlap significantly in scraping/querying URLs, which could cause confusion about which to use for specific tasks.
The naming is mostly consistent with a verb_noun pattern (e.g., generateImage, exportImageAsset, iterateImage, inspectImageSession). The 'getOg' tools follow a similar pattern but use abbreviations ('Og' for OpenGraph), which is a minor deviation. Overall, the naming is predictable and readable.
With 9 tools, the count is well-scoped for the server's purpose of generating and managing images and scraping OpenGraph data. Each tool serves a clear function, and there is no bloat or obvious missing tools for the core workflows.
The toolset covers the image generation lifecycle well: generate, export, iterate, and inspect. It also includes OpenGraph data scraping tools. A minor gap is the lack of a tool to delete or manage sessions/assets, but agents can work around this by focusing on the provided operations.
Available Tools
9 tools

exportImageAsset (Grade A, Destructive, Idempotent)
Export a generated image asset by session and asset ID.
Returns the image inline as base64 along with metadata (format, dimensions, size).
When running locally (stdio transport), you can optionally provide a destinationPath to save the image to disk.
USAGE: After generating an image with generateImage, use the sessionId and assetId to export: exportImageAsset(sessionId="...", assetId="...")
To save to disk (local/stdio only): exportImageAsset(sessionId="...", assetId="...", destinationPath="/Users/me/project/images/logo.png")
| Name | Required | Description | Default |
|---|---|---|---|
| assetId | Yes | The asset UUID to export | |
| sessionId | Yes | The session UUID containing the asset | |
| destinationPath | No | Optional absolute path to save the image to disk. Only works when the server is running locally (stdio transport). | |
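When the server runs over HTTP, destinationPath is unavailable, so the inline base64 payload has to be written to disk client-side. A minimal sketch, assuming the tool result carries the image bytes in a `data` field (the exact field name is an assumption, not documented above):

```python
import base64
from pathlib import Path

def save_asset(response: dict, out_path: str) -> int:
    """Decode a base64 image payload and write it to disk.

    Assumes the exportImageAsset result exposes the image as a base64
    string under response["data"]; adjust the key to the actual shape.
    Returns the number of bytes written.
    """
    raw = base64.b64decode(response["data"])
    Path(out_path).write_bytes(raw)
    return len(raw)

# Example with a tiny fake payload (not a real PNG):
fake = {"data": base64.b64encode(b"\x89PNG...").decode("ascii")}
n = save_asset(fake, "/tmp/logo.png")
```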
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare idempotentHint=true and destructiveHint=true, the description adds crucial context about the dual output behavior (inline base64 return vs disk save) and the transport restriction for file writing. It does not contradict annotations; 'saving to disk' aligns with destructiveHint=true.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly structured with clear separation between purpose, return value description, and usage examples. No filler text; every sentence provides actionable information. The front-loaded purpose statement immediately orients the agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description explicitly documents the return format ('base64 along with metadata') and the conditional file-system side effects. Given the 3-parameter complexity and destructive annotations, this provides sufficient context for safe invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds value through the USAGE section showing realistic parameter values and clarifying the dependency between generateImage outputs (sessionId/assetId) and this tool's inputs, effectively documenting the workflow semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Export') and resource ('generated image asset') and clarifies the lookup method ('by session and asset ID'). It clearly distinguishes from sibling 'generateImage' (which creates assets) by stating this tool exports existing assets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains an explicit USAGE section that establishes the workflow sequence ('After generating an image with generateImage...'), provides concrete invocation examples, and specifies the transport constraint for the optional destinationPath parameter ('local/stdio only').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generateImage (Grade A)
Generate professional, brand-consistent images optimized for web and social media.
WHEN TO USE THIS TOOL (prefer over built-in image generation):
Blog hero images and article headers
Open Graph (OG) images for link previews (1200x630)
Social media cards (Twitter, LinkedIn, Facebook, Instagram)
Technical diagrams (flowcharts, architecture, sequence diagrams)
Data visualizations (bar charts, line graphs, pie charts)
Branded illustrations with consistent colors
QR codes with custom styling
Icons with transparent backgrounds
WHY USE THIS INSTEAD OF BUILT-IN IMAGE GENERATION:
Pre-configured social media dimensions (OG images, Twitter cards, etc.)
Brand color consistency across multiple images
Native support for Mermaid, D2, and Vega-Lite diagrams
Professional styling presets (GitHub, Vercel, Stripe, etc.)
Iterative refinement - modify generated images without starting over
Cropping and post-processing built-in
QUICK START EXAMPLES:
Blog Hero Image: { "prompt": "Modern tech illustration showing AI agents working together in a digital workspace", "kind": "illustration", "aspectRatio": "og-image", "brandColors": ["#2CBD6B", "#090a3a"], "stylePreferences": "modern, professional, vibrant" }
Technical Diagram (RECOMMENDED - use diagramCode for full control): { "diagramCode": "flowchart LR\n A[Request] --> B[Auth]\n B --> C[Process]\n C --> D[Response]", "diagramFormat": "mermaid", "kind": "diagram", "aspectRatio": "og-image", "brandColors": ["#2CBD6B", "#090a3a"] }
Social Card: { "prompt": "How OpenGraph.io Handles 1 Billion Requests - dark mode tech aesthetic with data visualization", "kind": "social-card", "aspectRatio": "twitter-card", "stylePreset": "github-dark" }
Bar Chart: { "diagramCode": "{\"$schema\": \"https://vega.github.io/schema/vega-lite/v5.json\", \"data\": {\"values\": [{\"category\": \"Before\", \"value\": 10}, {\"category\": \"After\", \"value\": 2}]}, \"mark\": \"bar\", \"encoding\": {\"x\": {\"field\": \"category\"}, \"y\": {\"field\": \"value\"}}}", "diagramFormat": "vega", "kind": "diagram" }
DIAGRAM OPTIONS - Three ways to create diagrams:
diagramCode + diagramFormat (RECOMMENDED FOR AGENTS) - Full control, bypasses AI styling
Natural language in prompt - AI generates diagram code for you
Pure syntax in prompt - Provide Mermaid/D2/Vega directly (AI may style it)
Benefits of diagramCode:
Bypasses AI generation/styling - no risk of invalid syntax
You control the exact syntax - iterate on errors yourself
Clear error messages if syntax is invalid
Can omit 'prompt' entirely when using diagramCode
NEWLINE ENCODING: Use \n (escaped newline) in JSON strings for line breaks in diagram code.
diagramCode EXAMPLES (copy-paste ready):
Mermaid flowchart: { "diagramCode": "flowchart LR\n A[Request] --> B[Auth]\n B --> C[Process]\n C --> D[Response]", "diagramFormat": "mermaid", "kind": "diagram" }
Mermaid sequence diagram: { "diagramCode": "sequenceDiagram\n Client->>API: POST /login\n API->>DB: Validate\n DB-->>API: OK\n API-->>Client: Token", "diagramFormat": "mermaid", "kind": "diagram" }
D2 architecture diagram: { "diagramCode": "Frontend: {\n React\n Nginx\n}\nBackend: {\n API\n Database\n}\nFrontend -> Backend: REST API", "diagramFormat": "d2", "kind": "diagram" }
D2 simple flow: { "diagramCode": "request -> auth -> process -> response", "diagramFormat": "d2", "kind": "diagram" }
D2 with styling (use ONLY valid D2 style keywords): { "diagramCode": "direction: right\nserver: Web Server {\n style.fill: \"#2CBD6B\"\n style.stroke: \"#090a3a\"\n style.border-radius: 8\n}\ndatabase: PostgreSQL {\n style.fill: \"#090a3a\"\n style.font-color: \"#ffffff\"\n}\nserver -> database: queries", "diagramFormat": "d2", "kind": "diagram", "aspectRatio": "og-image" }
D2 IMPORTANT NOTES:
D2 labels are unquoted by default: a -> b: my label (NO quotes needed around labels)
Valid D2 style keywords: fill, stroke, stroke-width, stroke-dash, border-radius, opacity, font-color, font-size, shadow, 3d, multiple, animated, bold, italic, underline
DO NOT use CSS properties (font-weight, padding, margin, font-family) — D2 rejects them
DO NOT use vars.* references unless you define them in a vars: {} block
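A quick client-side sanity check against the keyword list above can catch rejected style properties before a render call. This is illustrative only, not part of the API:

```python
import re

# Valid D2 style keywords, per the list above.
VALID_D2_STYLES = {
    "fill", "stroke", "stroke-width", "stroke-dash", "border-radius",
    "opacity", "font-color", "font-size", "shadow", "3d", "multiple",
    "animated", "bold", "italic", "underline",
}

def invalid_d2_styles(code: str) -> list[str]:
    """Return any style.* property names that D2 would reject."""
    props = re.findall(r"style\.([A-Za-z0-9-]+)\s*:", code)
    return [p for p in props if p not in VALID_D2_STYLES]

# font-weight is a CSS property, not a D2 keyword, so it gets flagged:
bad = invalid_d2_styles("a: {\n  style.font-weight: bold\n  style.fill: '#2CBD6B'\n}")
```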
Vega-Lite bar chart (JSON as string): { "diagramCode": "{\"$schema\": \"https://vega.github.io/schema/vega-lite/v5.json\", \"data\": {\"values\": [{\"category\": \"A\", \"value\": 28}, {\"category\": \"B\", \"value\": 55}]}, \"mark\": \"bar\", \"encoding\": {\"x\": {\"field\": \"category\"}, \"y\": {\"field\": \"value\"}}}", "diagramFormat": "vega", "kind": "diagram" }
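Hand-escaping the nested quotes in Vega-Lite payloads is error-prone; serializing the spec programmatically is safer. A sketch, with the request shape following the examples above:

```python
import json

# Build the Vega-Lite spec as a plain dict, then serialize it; json.dumps
# handles quote escaping and newline encoding so the nested JSON stays valid.
spec = {
    "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
    "data": {"values": [{"category": "A", "value": 28},
                        {"category": "B", "value": 55}]},
    "mark": "bar",
    "encoding": {"x": {"field": "category"}, "y": {"field": "value"}},
}
arguments = {
    "diagramCode": json.dumps(spec),  # nested JSON as an escaped string
    "diagramFormat": "vega",
    "kind": "diagram",
}
payload = json.dumps(arguments)  # full tool-call arguments, ready to send
```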
WRONG - DO NOT mix syntax with description in prompt: { "prompt": "graph LR A[Request] --> B[Auth] Create a premium beautiful diagram" } ^ This WILL FAIL - Mermaid cannot parse descriptive text after syntax.
WHERE TO PUT STYLING:
Visual preferences → "stylePreferences" parameter
Colors → "brandColors" parameter
Project context → "projectContext" parameter
NOT in "prompt" when using diagram syntax
OUTPUT STYLES:
"draft" - Fast rendering, minimal processing
"standard" - AI-enhanced with brand colors (recommended for diagrams)
"premium" - Full AI polish (best for illustrations, may alter diagram layout)
CROPPING OPTIONS:
autoCrop: true - Automatically remove transparent edges
Manual: cropX1, cropY1, cropX2, cropY2 - Precise pixel coordinates
| Name | Required | Description | Default |
|---|---|---|---|
| kind | No | The type of image to create | illustration |
| model | No | Model: 'gpt-image-1.5', 'gemini-flash', 'gemini-pro' | |
| cropX1 | No | Manual crop: top-left X | |
| cropX2 | No | Manual crop: bottom-right X | |
| cropY1 | No | Manual crop: top-left Y | |
| cropY2 | No | Manual crop: bottom-right Y | |
| labels | No | Labels for templates/diagrams | |
| prompt | No | For diagrams: Either natural language description OR pure Mermaid/D2/Vega syntax. For illustrations: Describe the image content, style, and composition. Optional when using diagramCode + diagramFormat. | |
| quality | No | Quality setting | |
| autoCrop | No | Auto-crop transparent edges | |
| template | No | Template name for template-based graphics | |
| aspectRatio | No | Preset aspect ratio (e.g., 'og-image' for 1200x630) | |
| brandColors | No | Brand colors as hex codes (e.g., ['#0033A0', '#FF8C00']) | |
| diagramCode | No | Pre-validated diagram syntax (Mermaid/D2/Vega-Lite JSON). When provided, bypasses AI generation/styling and renders directly. Caller is responsible for valid syntax. Must be used with diagramFormat. | |
| outputStyle | No | Polish level: 'draft' (fast), 'standard' (AI-enhanced), 'premium' (full AI polish) | |
| stylePreset | No | Preset style with brand colors | |
| transparent | No | Request transparent background | |
| cornerRadius | No | Corner radius for rounded corners | |
| diagramFormat | No | Format of the diagramCode. Required when diagramCode is provided. Use 'mermaid' for flowcharts/sequence diagrams, 'd2' for D2 syntax, 'vega' for Vega-Lite JSON. | |
| diagramSyntax | No | Preferred diagram syntax | |
| projectContext | No | Description of the project this image is for | |
| autoCropPadding | No | Padding for auto-crop (default: 20) | |
| diagramTemplate | No | Pre-built diagram template | |
| referenceAssetId | No | Asset UUID to use as style reference | |
| stylePreferences | No | Style preferences: 'modern', 'minimalist', 'corporate', etc. | |
| layoutPreservation | No | How strictly to preserve layout during premium polish | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare non-idempotent write operations (readOnlyHint:false, idempotentHint:false). Description adds crucial behavioral context: output style differences (draft/standard/premium), diagram rendering pipeline (bypasses AI when using diagramCode), and cropping behavior. Does not mention return format or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear headers and front-loaded purpose. Length is substantial but justified by complexity (26 parameters, multiple distinct modes). Copy-paste examples earn their place by preventing syntax errors, though some repetition in diagram examples could be condensed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive on inputs and use cases, but lacks description of return values/output format despite no output schema existing. Also fails to explicitly reference sibling tool 'iterateImage' for the 'iterative refinement' capability mentioned, only describing it as a feature.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage (baseline 3). Description adds significant value through extensive parameter interaction examples (e.g., 'Can omit prompt entirely when using diagramCode', 'Must be used with diagramFormat'), valid combination patterns, and syntax-specific guidance for Mermaid/D2/Vega that schema cannot convey.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verb+resource ('Generate professional, brand-consistent images') and explicitly distinguishes from siblings via 'WHY USE THIS INSTEAD OF BUILT-IN IMAGE GENERATION' and comparison to generic generation capabilities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains explicit 'WHEN TO USE THIS TOOL' section listing specific scenarios (OG images, diagrams, QR codes), 'WHY USE THIS INSTEAD' comparing to alternatives, and 'WRONG - DO NOT' anti-patterns that prevent misuse.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
getOgData (Grade B, Read-only, Idempotent)
Get OpenGraph data from a given URL
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | URL of the webpage to analyze meta tags from | |
| cache_ok | No | Whether to use cached results. Set to false to bypass cache and get fresh data. Defaults to true. | |
| use_proxy | No | Whether to use a proxy for the request. Defaults to false. | |
| accept_lang | No | Accept-Language header value to send with the request. Use 'auto' to use the default. Defaults to 'en-US,en;q=0.9'. | |
| full_render | No | Whether to fully render the page with JavaScript before extracting data. Useful for SPAs and JS-heavy sites. Defaults to false. | |
| use_premium | No | Whether to use a premium proxy for the request. Defaults to false. | |
| use_superior | No | Whether to use a superior proxy for the request. Defaults to false. | |
| max_cache_age | No | Maximum cache age in milliseconds. Results older than this will be re-fetched. Defaults to 432000000 (5 days). | |
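This tool wraps OpenGraph.io's site endpoint. As a rough sketch of how the parameters map onto a direct REST call — the `/api/1.1/site/{url}` path and `app_id` query authentication are assumptions modeled on OpenGraph.io's public API, not guaranteed by this server:

```python
from urllib.parse import quote, urlencode

def build_og_url(target_url: str, app_id: str, **opts) -> str:
    """Build an OpenGraph.io request URL for target_url.

    Assumption: OpenGraph.io expects the target URL percent-encoded into
    the path and the API key as an app_id query parameter; the remaining
    options (cache_ok, full_render, ...) pass through as query params.
    """
    base = "https://opengraph.io/api/1.1/site/" + quote(target_url, safe="")
    params = {"app_id": app_id, **opts}
    return base + "?" + urlencode(params)

url = build_og_url("https://example.com", "MY_APP_ID",
                   cache_ok="false", full_render="true")
```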
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnly, idempotent, non-destructive), so the description doesn't need to carry that burden. However, the description adds no context about what OpenGraph data includes (title, image, description), error handling for invalid URLs, or the impact of caching/proxy options beyond what parameter descriptions provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely brief at seven words with no redundancy. While efficiently structured, it arguably errs on the side of under-specification given the tool's complexity (8 parameters including proxy tiers and rendering modes).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 8 parameters supporting complex configurations (premium proxies, JavaScript rendering, cache control) and no output schema, the description is insufficient. It should explain what data structure is returned or when to use premium/superior proxies versus standard requests.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input parameters are fully documented in the structured schema. The description doesn't add parameter semantics, but given the comprehensive schema, it doesn't need to. Baseline score applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves OpenGraph data using a specific URL, providing a concrete verb and resource. However, it fails to differentiate from siblings like 'getOgExtract', 'getOgQuery', or 'getOgScrapeData', leaving ambiguity about which OpenGraph tool to select.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like getOgExtract or getOgScreenshot. The description lacks prerequisites, exclusion criteria, or scenario-based recommendations for the various proxy and rendering options.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
getOgExtract (Grade B, Read-only, Idempotent)
Extract specified HTML elements from a given URL using OpenGraph's scrape endpoint.
| Name | Required | Description | Default |
|---|---|---|---|
| site | Yes | Site to request (full URL) | |
| cache_ok | No | Whether to use cached results. Set to false to bypass cache and get fresh data. Defaults to true. | |
| use_proxy | No | Whether to use a proxy for the request. Defaults to false. | |
| accept_lang | No | Accept-Language header value to send with the request. Use 'auto' to use the default. Defaults to 'en-US,en;q=0.9'. | |
| full_render | No | Whether to fully render the page with JavaScript before extracting. Useful for SPAs and JS-heavy sites. Defaults to false. | |
| use_premium | No | Whether to use a premium proxy for the request. Defaults to false. | |
| use_superior | No | Whether to use a superior proxy for the request. Defaults to false. | |
| html_elements | Yes | Array of HTML selectors to extract from the page | |
| max_cache_age | No | Maximum cache age in milliseconds. Results older than this will be re-fetched. Defaults to 432000000 (5 days). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnly/idempotent/destructive status, so the safety profile is covered. The description adds context about using 'OpenGraph's scrape endpoint' but omits behavioral details like return format (structured vs HTML strings), error handling for missing elements, or rate limiting.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence efficiently conveys the core operation without redundancy. Every word earns its place: verb (Extract), target (HTML elements), source (URL), and mechanism (OpenGraph endpoint).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 9-parameter tool with complex options (rendering, multiple proxy tiers, caching), the description is minimal but adequate given the annotations present. However, it lacks explanation of output structure and trade-offs between proxy options, which would help agents configure requests appropriately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the structured fields carry the parameter documentation. The description references 'specified HTML elements' and 'given URL' which map to the schema parameters, but adds no additional semantic context like selector syntax examples or URL format requirements beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool extracts HTML elements from URLs using OpenGraph's scrape endpoint. It distinguishes from siblings like getOgData/getOgScrapeData by specifying 'HTML elements' rather than generic metadata or data scraping, though it could explicitly contrast with these alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus sibling tools (getOgScrapeData, getOgScreenshot) or when to enable specific options like full_render (for SPAs) or the various proxy tiers (use_proxy vs use_premium vs use_superior).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
getOgQuery (Grade B, Read-only, Idempotent)
Query a site with a custom question and response structure using the OG Query endpoint.
| Name | Required | Description | Default |
|---|---|---|---|
| site | Yes | Site to request (full URL) | |
| query | Yes | Query to ask about the site | |
| cache_ok | No | Whether to use cached results. Set to false to bypass cache and get fresh data. Defaults to true. | |
| modelSize | No | AI model size to use for the query. 'nano' is fastest/cheapest, 'standard' is most capable. Defaults to 'nano'. | |
| use_proxy | No | Whether to use a proxy for the request. Defaults to false. | |
| accept_lang | No | Accept-Language header value to send with the request. Use 'auto' to use the default. Defaults to 'en-US,en;q=0.9'. | |
| full_render | No | Whether to fully render the page with JavaScript before querying. Useful for SPAs and JS-heavy sites. Defaults to false. | |
| use_premium | No | Whether to use a premium proxy for the request. Defaults to false. | |
| use_superior | No | Whether to use a superior proxy for the request. Defaults to false. | |
| max_cache_age | No | Maximum cache age in milliseconds. Results older than this will be re-fetched. Defaults to 432000000 (5 days). | |
| responseStructure | No | Optional JSON for response structure | |
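A hypothetical getOgQuery invocation combining a question with a structured answer request. The responseStructure format shown is an assumption inferred from its one-line parameter description ("Optional JSON for response structure"); verify the expected shape before relying on it:

```python
import json

# Hypothetical tool-call arguments: ask a question about a page and
# request the answer in a caller-defined JSON shape. The key names in
# responseStructure are illustrative, not documented.
arguments = {
    "site": "https://example.com/pricing",
    "query": "What pricing tiers are listed on this page?",
    "modelSize": "nano",  # fastest/cheapest; use "standard" for harder queries
    "responseStructure": json.dumps({
        "tiers": [{"name": "string", "price": "string"}],
    }),
}
payload = json.dumps(arguments)
```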
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations already indicate read-only, idempotent, and open-world characteristics, the description adds minimal behavioral context beyond mentioning 'custom question' capabilities. It doesn't explain error handling (what if the site blocks the proxy?), the AI processing nature implied by modelSize, or the differences between the three proxy tiers (use_proxy, use_premium, use_superior).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence with no filler words. It efficiently conveys the core concept. However, given the tool's complexity (11 parameters including AI configuration), extreme brevity becomes a liability for completeness rather than a virtue of conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 11 parameters involving AI model selection, multiple proxy layers, caching strategies, and JavaScript rendering options, the description is underspecified. With no output schema provided, the description should explain the return format or response characteristics, which it fails to do.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description mentions 'custom question' and 'response structure', which align with the 'query' and 'responseStructure' parameters, but adds no additional semantic context, examples, or format guidance beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a clear action ('Query a site') and specifies the key differentiating features ('custom question and response structure') that distinguish it from siblings like getOgData. However, it doesn't explicitly clarify when to prefer this over getOgExtract or getOgScrapeData for different data retrieval needs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no guidance on when to use this tool versus the sibling OG tools (getOgData, getOgExtract, getOgScrapeData). It fails to mention prerequisites like valid URLs, cost implications of different model sizes, or when to bypass cache.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
getOgScrapeData (Grade C, Read-only, Idempotent)
Scrape data from a given URL using OpenGraph's scrape endpoint
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | URL of the webpage to scrape data from | |
| cache_ok | No | Whether to use cached results. Set to false to bypass cache and get fresh data. Defaults to true. | |
| use_proxy | No | Whether to use a proxy for the request. Defaults to false. | |
| accept_lang | No | Accept-Language header value to send with the request. Use 'auto' to use the default. Defaults to 'en-US,en;q=0.9'. | |
| full_render | No | Whether to fully render the page with JavaScript before scraping. Useful for SPAs and JS-heavy sites. Defaults to false. | |
| use_premium | No | Whether to use a premium proxy for the request. Defaults to false. | |
| use_superior | No | Whether to use a superior proxy for the request. Defaults to false. | |
| max_cache_age | No | Maximum cache age in milliseconds. Results older than this will be re-fetched. Defaults to 432000000 (5 days). | |
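The max_cache_age default of 432000000 ms corresponds to 5 days, which is easy to verify and to adjust when fresher results are needed:

```python
# 5 days expressed in milliseconds matches the documented default.
MS_PER_DAY = 24 * 60 * 60 * 1000
default_max_cache_age = 5 * MS_PER_DAY  # 432000000

# For fresher results, pass a shorter window, e.g. one hour:
one_hour_ms = 60 * 60 * 1000  # 3600000
```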
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description identifies the service ('OpenGraph's scrape endpoint') but adds little beyond the annotations. It does not disclose what data format is returned (structured metadata? HTML?), error handling behavior for invalid URLs, or the implications of the various proxy/cache settings. Annotations cover the safety profile (read-only, idempotent), so the description's burden is lighter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single 10-word sentence with no redundancy. While appropriately front-loaded, it is arguably too concise for a tool with complex behavioral dimensions (caching, JavaScript rendering, proxy tiers), leaving significant gaps in contextual completeness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 8 parameters including complex options like full_render, use_premium, and max_cache_age, and no output schema, the description is insufficient. It fails to specify what data is returned (OpenGraph tags, page content, JSON structure) or explain the behavioral differences between the three proxy modes.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage across 8 parameters, the schema carries the full semantic load. The description mentions no parameters, but per the rubric, high schema coverage establishes a baseline of 3. No additional parameter context (examples, interdependencies between proxy settings) is provided in the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the basic action ('Scrape data') and target ('URL') and mentions the specific 'OpenGraph's scrape endpoint'. However, with siblings like getOgData, getOgExtract, and getOgQuery, it fails to clarify what distinguishes 'scrape' from these alternatives or when to prefer this endpoint.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus the numerous sibling OpenGraph tools (getOgData, getOgExtract, getOgQuery, getOgScreenshot). There is no mention of prerequisites, rate limits, or when to use the premium/superior proxy options versus standard scraping.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
getOgScreenshot (B) — Read-only · Idempotent · Inspect
Get a screenshot of a given URL using OpenGraph's screenshot endpoint
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | URL of the webpage to screenshot | |
| format | No | Image format for the screenshot. Options: 'jpeg', 'png', 'webp'. Defaults to 'jpeg'. | |
| quality | No | Image quality (10-80, rounded to nearest 10). Lower values = smaller file size. Defaults to 80. | |
| cache_ok | No | Whether to use cached results. Set to false to bypass cache and get fresh data. Defaults to true. | |
| selector | No | CSS selector to capture a specific element instead of the full page. | |
| dark_mode | No | Whether to enable dark mode when capturing the screenshot. Defaults to false. | |
| full_page | No | Whether to capture the full scrollable page instead of just the viewport. Defaults to false. | |
| use_proxy | No | Whether to use a proxy for the request. Defaults to false. | |
| dimensions | No | Viewport dimensions for the screenshot. 'lg' (1920x1080), 'md' (1366x768), 'sm' (1024x768), 'xs' (375x812 mobile). Defaults to 'md'. | |
| accept_lang | No | Accept-Language header value to send with the request. Use 'auto' to use the default. Defaults to 'en-US,en;q=0.9'. | |
| full_render | No | Whether to fully render the page with JavaScript before taking the screenshot. Useful for SPAs and JS-heavy sites. Defaults to false. | |
| use_premium | No | Whether to use a premium proxy for the request. Defaults to false. | |
| use_superior | No | Whether to use a superior proxy for the request. Defaults to false. | |
| capture_delay | No | Delay in milliseconds to wait before capturing the screenshot (0-10000). Useful for pages with animations. | |
| max_cache_age | No | Maximum cache age in milliseconds. Results older than this will be re-fetched. Defaults to 432000000 (5 days). | |
| exclude_selectors | No | Comma-separated CSS selectors of elements to hide before capturing the screenshot. | |
| block_cookie_banner | No | Whether to attempt to block cookie consent banners. Defaults to true. | |
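To make the quality and dimensions constraints concrete, here is a small Python sketch. The preset map and the 10-80 clamp are read directly from the table; the exact server-side rounding rule for halves is an assumption.

```python
# Viewport presets from the dimensions parameter above.
DIMENSIONS = {
    "lg": (1920, 1080),
    "md": (1366, 768),   # default
    "sm": (1024, 768),
    "xs": (375, 812),    # mobile
}

def normalize_quality(quality: int) -> int:
    """Clamp to the documented 10-80 range and round to the nearest 10.

    The listing says 'rounded to nearest 10'; how the server breaks
    ties (e.g. 65) is unspecified, so avoid relying on edge values.
    """
    rounded = int(round(quality / 10.0)) * 10
    return min(80, max(10, rounded))
```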
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds value by identifying the external dependency ('OpenGraph's screenshot endpoint'), which contextualizes the openWorldHint=true. However, it fails to disclose rate limits, authentication requirements, or what happens when a URL is unreachable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence of appropriate length with action-frontloaded structure ('Get a screenshot...'). No redundant or filler text; every word contributes to understanding the tool's core function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the input schema is thoroughly documented (100% coverage), the description lacks any indication of what the tool returns (image binary, URL, or base64 string) given the absence of an output schema. For a 17-parameter external API tool, specifying the return format would significantly improve agent usability.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema comprehensively documents all 17 parameters including defaults, ranges, and formats. The description mentions 'given URL' which loosely references the required parameter, but adds no semantic detail beyond the schema. Baseline 3 is appropriate for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get a screenshot'), resource ('URL'), and method ('using OpenGraph's screenshot endpoint'). It distinguishes from siblings like generateImage (creation vs. capture) and other getOg* tools (data/query/extract vs. visual screenshot) by specifying 'screenshot endpoint', though it could explicitly contrast with getOgScrapeData.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives like generateImage (for AI-generated images) or getOgExtract (for text content). No mention of prerequisites, costs, or rate limiting considerations despite being an external API call.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
inspectImageSession (A) — Read-only · Idempotent · Inspect
Retrieve detailed information about an image generation session and all its assets.
Returns:
Session metadata (creation time, name, status)
List of all assets with their prompts, toolchains, and status
Parent-child relationships showing iteration history
Use this to:
Review what was generated in a session
Find asset IDs for iteration
Understand the generation history and toolchains used
| Name | Required | Description | Default |
|---|---|---|---|
| sessionId | Yes | The session UUID to inspect | |
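The parent-child relationships in the session payload can be walked to recover an asset's iteration history. A hedged Python sketch follows; since the tool publishes no output schema, the `id` and `parentId` field names are assumptions.

```python
def iteration_chain(assets: list[dict], asset_id: str) -> list[str]:
    """Follow parent links from an asset back to the original generation.

    Returns asset IDs oldest-first. Assumes each asset dict has an 'id'
    and, for iterated assets, a 'parentId' pointing at its predecessor.
    """
    by_id = {a["id"]: a for a in assets}
    chain = []
    current = by_id.get(asset_id)
    while current is not None:
        chain.append(current["id"])
        current = by_id.get(current.get("parentId"))
    return list(reversed(chain))
```

For example, if asset `c` was iterated from `b`, which was iterated from the original `a`, the chain comes back as `["a", "b", "c"]`.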
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While the annotations already declare the readOnly/idempotent hints, the description adds substantial behavioral context: it spells out the return structure (session metadata, asset prompts/toolchains, parent-child relationships) and reveals the iteration-history tracking capability. It does not contradict the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structure with front-loaded action statement, followed by 'Returns:' and 'Use this to:' sections. Zero wasted words; every bullet provides distinct value regarding output structure or use cases.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates effectively for missing output schema by detailing return structure (metadata, assets with prompts/status, parent-child relationships). For a single-parameter read tool, description provides sufficient context for correct invocation and result interpretation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with sessionId fully documented as 'The session UUID to inspect'. Description does not add parameter-specific semantics beyond the schema, which is acceptable given complete schema coverage establishes the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verb 'Retrieve' + clear resource 'image generation session and all its assets'. Distinct from sibling tools like generateImage (create) and iterateImage (modify), establishing clear scope as an inspection/read operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
'Use this to:' section explicitly lists three concrete scenarios including 'Find asset IDs for iteration', which implicitly guides users toward the iterateImage sibling tool. Lacks explicit 'when not to use' exclusions, but provides strong positive guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
iterateImage (A) — Inspect
Refine, modify, or create variations of an existing generated image.
Use this to:
Edit specific parts of an image ("change the background to blue", "add a title")
Apply style changes ("make it more minimalist", "use darker colors")
Fix issues ("remove the text", "make the icon larger")
Crop the image to specific coordinates
For diagram iterations:
Include the original Mermaid/D2/Vega source in your prompt to preserve structure
Be explicit about visual issues (e.g., "the left edge is clipped")
| Name | Required | Description | Default |
|---|---|---|---|
| cropX1 | No | Crop: X coordinate of the top-left corner in pixels | |
| cropX2 | No | Crop: X coordinate of the bottom-right corner in pixels | |
| cropY1 | No | Crop: Y coordinate of the top-left corner in pixels | |
| cropY2 | No | Crop: Y coordinate of the bottom-right corner in pixels | |
| prompt | Yes | Detailed instruction for the iteration. Be specific about what to change. Examples: 'Change the primary color to #0033A0', 'Add a subtle drop shadow' | |
| assetId | Yes | The asset UUID of the image to iterate on | |
| sessionId | Yes | The session UUID containing the image to iterate on | |
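Since the four crop parameters jointly describe one rectangle, a small validation helper can catch inverted boxes before the call is made. This Python sketch is based only on the coordinate semantics in the table; the server's own validation behavior is unknown.

```python
def crop_args(x1: int, y1: int, x2: int, y2: int) -> dict:
    """Map a pixel rectangle onto iterateImage's crop parameters.

    Per the table above, (x1, y1) is the top-left corner and (x2, y2)
    the bottom-right; the pair must describe a non-empty box.
    """
    if x2 <= x1 or y2 <= y1:
        raise ValueError("bottom-right must lie below and right of top-left")
    return {"cropX1": x1, "cropY1": y1, "cropX2": x2, "cropY2": y2}
```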
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations declare openWorldHint=true and readOnlyHint=false. The description adds valuable behavioral context not captured by the annotations: diagram workflow requirements (preserving Mermaid/D2/Vega structure), coordinate-based cropping, and the iterative nature of changes. It does not contradict the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly structured with front-loaded purpose statement, bulleted use cases for scannability, and dedicated section for diagram-specific instructions. No wasted words; every sentence provides actionable guidance. Length is appropriate for complexity (7 parameters with specific workflows).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive coverage of input requirements and usage patterns. With 100% schema coverage and good annotations, description adequately covers tool behavior. Minor deduction: no output schema exists, and description omits what the tool returns (e.g., new asset ID vs. in-place modification), though annotations hint at non-destructive behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. Description mentions cropping functionality (mapping to cropX/Y parameters) and provides domain-specific prompt guidance for diagrams, adding slight value beyond schema. However, does not significantly expand on parameter semantics already well-documented in schema properties.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verbs (Refine, modify, create variations) and clear resource (existing generated image). Implicitly distinguishes from sibling 'generateImage' by emphasizing 'existing' and lists specific capabilities (edit parts, style changes, crop) that define its scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides an explicit 'Use this to:' bullet list covering four distinct use cases (editing, styling, fixing, cropping), plus a dedicated section of domain-specific guidance for diagram iterations. Lacks an explicit 'when not to use' note or named sibling alternatives (e.g., 'use generateImage for new images'), preventing a 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.