Glama

Server Details

Clean product data from any URL. Schema.org + AI extraction. 200 free calls/month.

Status: Healthy
Transport: Streamable HTTP
Repository: laundromatic/shopgraph
GitHub Stars: 2
Server Listing: ShopGraph

Tool Descriptions: A

Average 4.1/5 across 4 of 4 tools scored.

Server Coherence: A
Disambiguation: 3/5

The tools have overlapping purposes that could cause confusion. enrich_basic and enrich_product both extract product data from URLs, with enrich_product being more comprehensive, which might lead an agent to select the wrong tool when a simpler extraction is needed. However, enrich_html and score_product are distinct in their inputs and outputs, providing some clarity.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern (e.g., enrich_basic, enrich_html, enrich_product, score_product). The naming is predictable and readable, with no deviations in style or convention, making it easy for agents to understand the tool set.

Tool Count: 5/5

With 4 tools, the count is well-scoped for the server's purpose of product data extraction and analysis. Each tool serves a distinct function, and there are no unnecessary additions, making the set manageable and focused.

Completeness: 4/5

The tool set covers key aspects of product data extraction, including basic and comprehensive URL-based extraction, HTML input handling, and scoring for agent readiness. A minor gap is the lack of update or delete operations, but this is reasonable given the domain focus on extraction and analysis rather than management.

Available Tools (4 tools)
enrich_basic: A
Read-only

Extract basic product attributes from a URL (name, price, brand, availability). Faster and cheaper than enrich_product. 50 free calls/month — no payment needed. Paid: $0.01 per call after free tier.

Parameters (JSON Schema):
- url (required): Product page URL to extract data from
- format: Output format. "ucp" returns UCP line_item format. Default: "default".
- force_refresh: Bypass cache entirely. Always triggers live extraction. Costs 3x credits.
- include_score: Include agent-readiness score in response.
- payment_method_id: Stripe payment method ID for MPP payment
- minimum_confidence: Auto-refresh if any cached field's DECAYED confidence falls below this threshold. Costs 2x credits when refresh triggers, 0.25x on cache hit.
- strict_confidence_threshold: Fields below this confidence will be nulled with explanation. Default: off.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds cost structure (50 free calls/month, $0.01/call after) and performance characteristics ('faster') not in annotations. Lists specific extracted fields. No contradiction with readOnlyHint=true (extraction is read operation).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: purpose front-loaded, comparison follows, pricing concludes. Every sentence earns its place with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lists return fields (compensating for no output schema), covers pricing model, distinguishes from siblings, and acknowledges both parameters. Minor gap: doesn't explicitly link payment_method_id parameter to the paid tier described.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage for both url and payment_method_id. Description implies URL usage in first sentence and payment context in pricing section, but does not add semantic detail beyond schema definitions. Baseline 3 appropriate for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action (Extract) + resource (basic product attributes) + scope (name, price, brand, availability from URL). Distinguishes from sibling enrich_product by implying limited attribute set vs full enrichment.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly compares to sibling enrich_product ('Faster and cheaper'), implying when to choose this tool (cost/speed sensitive basic extraction). Missing explicit 'when not to use' but comparison provides clear guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

enrich_html: A
Read-only

Extract product data from raw HTML you already have (no HTTP fetch needed). Ideal when using Bright Data, Firecrawl, or any scraping API — pipe the HTML through ShopGraph for structured product data. Uses schema.org + LLM fallback. Costs $0.02 per call (cached results are free).

Parameters (JSON Schema):
- url (required): Original URL of the page (used for context and caching)
- html (required): Raw HTML content of the product page
- format: Output format. "ucp" returns UCP line_item format. Default: "default".
- force_refresh: Bypass cache entirely. Always triggers live extraction. Costs 3x credits.
- include_score: Include agent-readiness score in response.
- payment_method_id: Stripe payment method ID for MPP payment
- minimum_confidence: Auto-refresh if any cached field's DECAYED confidence falls below this threshold.
- strict_confidence_threshold: Fields below this confidence will be nulled with explanation. Default: off.
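The pairing with an external scraper can be sketched as follows. The HTML string here is a stub standing in for real scraper output, and the URL is a placeholder; the point is that enrich_html performs no fetch of its own and still requires the url argument for context and caching.

```python
import json

# Stand-in for HTML returned by an external scraping API (Bright Data,
# Firecrawl, etc.); enrich_html extracts from this string directly and
# performs no HTTP fetch of its own.
scraped_html = "<html><body><h1 itemprop='name'>Widget</h1></body></html>"

payload = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "enrich_html",
        "arguments": {
            # url is required even though nothing is fetched: the server
            # uses it for context and as the cache key.
            "url": "https://example.com/product/widget",
            "html": scraped_html,
            "format": "ucp",  # ask for UCP line_item output
        },
    },
}
print(json.dumps(payload)[:60])
```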
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true (safe to call). The description adds critical behavioral context not in structured data: cost model ('$0.02 per call'), caching policy ('cached results are free'), and implementation methodology ('schema.org + LLM fallback'). Does not mention rate limits or error modes, preventing a perfect score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. The first sentence front-loads the core function and constraint; the second covers integration context, implementation details, and pricing. Every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately signals the return value ('structured product data') and covers operational aspects like cost and caching. Could be improved by hinting at the specific structure of the returned product data or error handling patterns, but sufficiently complete for an extraction tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already fully documents all three parameters (html, url, payment_method_id). The description references 'raw HTML' and implies URL usage for caching, but does not add substantial semantic depth beyond what the schema provides. Baseline 3 is appropriate given the schema's completeness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Extract') and resource ('product data') and immediately clarifies the input modality ('raw HTML you already have'). The parenthetical '(no HTTP fetch needed)' effectively distinguishes this from sibling tools like enrich_product that likely perform fetching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit positive guidance ('Ideal when using Bright Data, Firecrawl, or any scraping API') and implicitly defines the negative case via '(no HTTP fetch needed)'. While it doesn't explicitly name siblings as alternatives, the context clearly signals when to use this specific enrichment path versus fetch-based alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

enrich_product: A
Read-only

Extract comprehensive product data from a URL including name, price, brand, images, availability, and more. Uses schema.org structured data when available, with LLM fallback. Costs $0.02 per call (cached results are free).

Parameters (JSON Schema):
- url (required): Product page URL to extract data from
- format: Output format. "ucp" returns UCP line_item format. Default: "default".
- force_refresh: Bypass cache entirely. Always triggers live extraction. Costs 3x credits.
- include_score: Include agent-readiness score in response.
- payment_method_id: Stripe payment method ID for MPP payment
- minimum_confidence: Auto-refresh if any cached field's DECAYED confidence falls below this threshold. Costs 2x credits when refresh triggers, 0.25x on cache hit.
- strict_confidence_threshold: Fields below this confidence will be nulled with explanation. Default: off.
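The cache-related parameters imply a rough credit model. The helper below is one interpretation of the multipliers quoted in the parameter notes (3x for force_refresh, 2x when minimum_confidence triggers a refresh, 0.25x on a cache hit), not an official pricing API.

```python
def credit_multiplier(force_refresh: bool = False,
                      refresh_triggered: bool = False,
                      cache_hit: bool = False) -> float:
    """Credit cost multiplier implied by the parameter notes above."""
    if force_refresh:
        return 3.0    # bypass cache entirely, always live extraction
    if refresh_triggered:
        return 2.0    # minimum_confidence forced a live refresh
    if cache_hit:
        return 0.25   # served from cache under minimum_confidence
    return 1.0        # ordinary live extraction

# Under this reading, a forced refresh costs 12x a cache hit:
print(credit_multiplier(force_refresh=True) / credit_multiplier(cache_hit=True))  # 12.0
```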
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds critical cost information ($0.02 per call, cached free) and implementation mechanism (schema.org structured data with LLM fallback) beyond the annotations. Aligns with readOnlyHint=true (extraction implies reading) and openWorldHint=true (URL fetching). Does not disclose rate limits or error behaviors.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three tightly constructed sentences: core functionality first, technical implementation second, pricing third. Zero redundancy—every sentence conveys unique information not found in structured fields.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description enumerates expected return fields (name, price, brand, etc.) and discloses cost mechanism. Adequate for a 2-parameter extraction tool, though error conditions or return structure details would elevate this to 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description mentions URL extraction generally but does not add semantic nuance to specific parameters like explaining when payment_method_id is required or how it relates to the stated cost. Relies on schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb 'Extract' with resource 'product data' and comprehensive scope (name, price, brand, images, availability). The 'comprehensive' descriptor and detailed field list distinguish it from sibling 'enrich_basic', clearly positioning this as the full-featured option.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage through the 'comprehensive' label and rich field enumeration, suggesting when to use this over alternatives. However, it lacks explicit guidance like 'use enrich_basic for simple extraction' or 'use this when you need pricing data'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

score_product: A
Read-only

Extract product data and return agent-readiness score (0-100). Scores structured data completeness, semantic richness, UCP compatibility, pricing clarity, and inventory signals. Full scoring breakdown included.

Parameters (JSON Schema):
- url (required): Product page URL to extract and score
- format: Output format. "ucp" returns UCP line_item format. Default: "default".
- payment_method_id: Stripe payment method ID for MPP payment
- strict_confidence_threshold: Fields below this confidence will be nulled with explanation. Default: off.
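A score_product call follows the same shape as the enrichment tools; below is a hedged sketch with a placeholder URL, showing strict_confidence_threshold used to null low-confidence fields as described in the table above.

```python
import json

payload = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {
        "name": "score_product",
        "arguments": {
            "url": "https://example.com/product/123",  # placeholder URL
            # Null out any extracted field whose confidence falls below
            # 0.6, with an explanation attached (per the table above).
            "strict_confidence_threshold": 0.6,
        },
    },
}
print(json.dumps(payload, indent=2))
```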
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare read-only safety and external URL access; description adds valuable behavioral context by detailing the five scoring dimensions and confirming 'Full scoring breakdown included' in output. Does not explain payment_method_id integration behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three tightly constructed sentences with zero redundancy. Front-loaded with the primary action (extract and score), followed by the evaluation criteria and output characteristics.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Compensates well for missing output schema by specifying return type (score 0-100) and detailed breakdown inclusion. Addresses core complexity (4 params, open-world access) but could clarify payment integration point.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. Description mentions 'UCP compatibility' which semantically anchors the 'ucp' format option, but does not elucidate the unusual payment_method_id parameter or strict_confidence_threshold behavior.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb-resource combination ('Extract product data and return agent-readiness score') combined with distinct scope (scoring 5 specific dimensions). The scoring focus clearly differentiates it from enrichment siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage context through specific scoring criteria (agent-readiness, UCP compatibility), but lacks explicit guidance on when to prefer this over enrich_product or enrich_basic for data extraction tasks.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
