EcommerceOracle
Server Details
EcommerceOracle - 8 e-commerce tools: Shopify, WooCommerce, GMV, conversions, returns.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: ToolOracle/ecommerceoracle
- GitHub Stars: 0
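The listing does not display the endpoint URL, but since the transport is Streamable HTTP, connecting from an MCP client means pointing it at the server's HTTP endpoint. Below is a minimal sketch of a client configuration entry, assuming a client that accepts `mcpServers` entries with an HTTP transport (the exact schema varies by client); the URL is a placeholder, not the real endpoint.

```json
{
  "mcpServers": {
    "ecommerceoracle": {
      "type": "http",
      "url": "https://<server-endpoint>/mcp"
    }
  }
}
```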
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.2/5 across 8 of 8 tools scored.
- Each tool targets a distinct function: barcode lookup, category trends, health check, market monitoring, price comparison, product detail extraction, product search, and Shopify fetching. No overlapping purposes; descriptions clearly differentiate inputs and outputs.
- All tool names follow a consistent snake_case pattern with exactly two words (e.g., barcode_lookup, product_search). No mixing of styles or irregularities.
- 8 tools fall well within the ideal 3-15 range for a focused e-commerce data server. Each tool serves a clear purpose without redundancy.
- Covers major product retrieval needs: search, detail, barcode lookup, price comparison, trends, and marketplace monitoring. Lacks features like review extraction or bulk operations, but core workflows are supported.
Available Tools
8 tools

barcode_lookup (Quality: B)
Look up product by EAN/UPC barcode. Returns full product details, prices from multiple merchants.
| Name | Required | Description | Default |
|---|---|---|---|
| ean | No | EAN code (alternative parameter name) | |
| barcode | No | EAN or UPC barcode number e.g. '4056565032219' | |
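As an illustration (not part of the listing), an MCP `tools/call` request for this tool could look like the following, using the barcode example from the parameter table; `ean` could be passed instead of `barcode` as the alternative parameter name.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "barcode_lookup",
    "arguments": { "barcode": "4056565032219" }
  }
}
```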
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must cover behavioral traits. It states what the tool returns but omits side effects, authentication needs, rate limits, error handling, or read-only nature. For a simple read tool, more disclosure is expected.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the core purpose, and every word adds value. No redundancy or unnecessary detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description mentions 'full product details, prices from multiple merchants', which gives a reasonable sense of output. However, it lacks details on barcode format limitations, error behavior, or response structure. Adequate for a simple tool but not exhaustive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already provides 100% coverage with clear descriptions for both parameters (ean and barcode). The tool-level description adds little beyond confirming the barcode purpose, so the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool does barcode lookup by EAN/UPC and returns product details with prices from multiple merchants. This distinguishes it from sibling tools like product_search (text-based) and product_detail (likely by ID).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide any guidance on when to use this tool versus alternatives like product_search or product_detail. There is no mention of prerequisites, exclusions, or best practices.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
category_trends (Quality: C)
Trending products and news in a specific category.
| Name | Required | Description | Default |
|---|---|---|---|
| lang | No | Language: 'de' or 'en' (default: de) | de |
| category | No | Product category e.g. 'Matratzen', 'Laptops', 'Smartphones' | |
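A hypothetical `tools/call` request, using a category example from the table and the 'en' language option:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "category_trends",
    "arguments": { "category": "Laptops", "lang": "en" }
  }
}
```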
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description bears full responsibility. It only states the output type (trending products and news) but omits details like read-only nature, pagination, error handling, or response structure, which are critical for agent behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that communicates the core purpose. It could be slightly more structured (e.g., listing examples), but it is not verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description should explain what the tool returns (e.g., list of items with details like name, price, source). It only mentions 'trending products and news', which is vague. The tool is underdescribed for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Both parameters have schema descriptions, so schema coverage is 100%. The tool description adds no additional meaning beyond what the schema provides, so the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves trending products and news in a specific category. The word 'Trending' signals the action, and the resource is defined. However, it does not explicitly differentiate itself from sibling tools like product_search or price_comparison.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. No context about use cases, prerequisites, or limitations is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
health_check (Quality: B)
EcommerceOracle server status.
No parameters.
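Since the tool takes no parameters, a hypothetical `tools/call` request passes an empty arguments object:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "health_check",
    "arguments": {}
  }
}
```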
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries the full burden. It only says 'server status' without specifying if it's read-only, what it returns, or any side effects. Minimal behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise phrase with no wasted words. It efficiently conveys the tool's purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero annotations and no output schema, the description is too sparse. It fails to detail what 'status' means (e.g., response format, error behavior), leaving the agent underinformed for a health-check tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so schema coverage is 100%. The description does not need to explain parameters. Baseline 4 applies, and the description adds some context about the tool's focus.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'EcommerceOracle server status' clearly indicates the tool checks server health, aligning with the name 'health_check'. It distinguishes from sibling tools (data retrieval) by being a status/ping operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No usage guidance is provided. The description does not mention when to use this tool (e.g., before other operations, to verify connectivity) or contrast it with alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
marketplace_monitor (Quality: C)
Monitor marketplace presence and pricing news for your brand vs competitor.
| Name | Required | Description | Default |
|---|---|---|---|
| lang | No | Language: 'de' or 'en' (default: de) | de |
| brand | No | Your brand name | |
| competitor | No | Competitor brand (optional) | |
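A hypothetical `tools/call` request is sketched below; the brand names are invented placeholders, since the listing gives no examples, and `lang` is omitted to fall back to its 'de' default.

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "marketplace_monitor",
    "arguments": { "brand": "ExampleBrand", "competitor": "RivalBrand" }
  }
}
```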
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits. It only states the high-level function without details on how monitoring works, data freshness, or side effects. This is insufficient for a tool with no additional annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence that front-loads the purpose. It is concise but lacks structure; still, it earns a 4 for being succinct.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description should provide details about return format or behavior, which it does not. The tool is under-specified for a monitoring function.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema adequately documents all three parameters. The description does not add value beyond the schema, meeting the baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it monitors marketplace presence and pricing news for brand vs competitor, using a specific verb and resource. However, it does not differentiate from sibling tools like price_comparison, leading to a score of 4.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. The description lacks any indication of context, prerequisites, or exclusions needed for proper selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
price_comparison (Quality: A)
Compare product prices across multiple sources. Provides merchant links for Idealo, Amazon, Google Shopping.
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | Product to compare e.g. 'Emma Matratze 90x200' | |
| country | No | Country: 'DE', 'US', 'GB' (default: DE) | DE |
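Using the query example from the table, a hypothetical `tools/call` request:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "price_comparison",
    "arguments": { "query": "Emma Matratze 90x200", "country": "DE" }
  }
}
```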
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behaviors. It mentions providing merchant links but does not specify if raw prices are returned, if results are real-time or cached, or how unsupported countries are handled.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences, front-loaded with purpose, and contains no unnecessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, so the description should explain the return format. It mentions 'merchant links' but does not detail the structure. Given the sibling tools, it could also clarify boundaries, e.g., price comparison versus full product detail.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers both parameters with descriptions (100% coverage). The description adds value by naming specific sources (Idealo, Amazon, Google Shopping), giving context beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Compare' and resource 'product prices across multiple sources', listing specific merchants (Idealo, Amazon, Google Shopping), distinguishing it from siblings like product_search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for price comparison but does not explicitly state when to use this tool versus alternatives like product_search or marketplace_monitor, nor does it mention when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
product_detail (Quality: A)
Extract structured product data (price, rating, images) from any product page URL using JSON-LD.
| Name | Required | Description | Default |
|---|---|---|---|
| url | No | Product page URL e.g. 'https://shop.com/product/123' | |
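Using the URL example from the table, a hypothetical `tools/call` request:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "product_detail",
    "arguments": { "url": "https://shop.com/product/123" }
  }
}
```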
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description only mentions that JSON-LD is used; it does not disclose what happens if JSON-LD is missing, authentication needs, rate limits, or the return format. The fallback behavior around JSON-LD is a critical behavioral detail left out.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One concise sentence without waste, clearly stating the action and scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without output schema or annotations, the description lacks information on return format, error handling, and limitations (e.g., only works with JSON-LD). This is insufficient for an extraction tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter 'url'. The description adds a concrete example URL, providing context beyond the schema's description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool extracts structured product data (price, rating, images) from a product page URL using JSON-LD, distinguishing it from siblings like product_search or barcode_lookup.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for extracting data from a specific product page URL but lacks explicit guidance on when to use this versus alternatives (e.g., when you need data from a single page vs. searching for products).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
product_search (Quality: A)
Search products by name across Algolia, UPCItemDB, and Open Food Facts. Returns prices, brands, images.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results 1-20 (default: 10) | 10 |
| query | No | Product search query e.g. 'matratze 140x200', 'iPhone 15 case' | |
| source | No | Data source: 'algolia', 'upc', 'openfood', 'all' (default: algolia) | algolia |
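A hypothetical `tools/call` request combining the documented options; the query is taken from the table, 'all' queries every source, and the limit is chosen within the 1-20 range:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "product_search",
    "arguments": { "query": "iPhone 15 case", "source": "all", "limit": 5 }
  }
}
```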
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears full responsibility for behavioral disclosure. While it states the tool searches and returns data, it does not mention whether the operations are read-only, any rate limits, error handling (e.g., if a source is unavailable), or the structure of the returned data. This leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, front-loaded with the action and resources, and contains no extraneous information. Every word serves a purpose, making it efficient for an AI agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description mentions return types (prices, brands, images) but does not detail the structure or fields of the response. With no output schema and three parameters, more context would be beneficial, such as pagination behavior or how sources are selected. It is adequate but not thorough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with each parameter described adequately within the schema. The description does not add extra context beyond what the schema provides (e.g., no additional explanation of source differences or query format). Baseline is 3 due to high schema coverage, and no value is added.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches products by name across three specific sources and returns prices, brands, and images. It is distinct from siblings like barcode_lookup (which requires a barcode) and product_detail (which provides details for a specific product). The verb 'search' and the resource 'products by name' are specific and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (when searching by product name) but does not provide explicit guidance on alternatives or when not to use. For example, it does not mention that for barcode-based lookups one should use barcode_lookup. The usage context is clear but exclusions are missing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shopify_products (Quality: C)
Fetch products from any Shopify-powered store via their public /products.json endpoint.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max products 1-50 (default: 20) | 20 |
| domain | No | Shop domain e.g. 'gymshark.com', 'allbirds.com' | |
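Using a domain example from the table, a hypothetical `tools/call` request:

```json
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "shopify_products",
    "arguments": { "domain": "gymshark.com", "limit": 5 }
  }
}
```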
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It only states the basic action without disclosing behavior such as rate limits, error handling, or whether the operation is read-only. The description adds little value beyond the name.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, direct sentence with no wasted words; efficient, though it lacks richness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple parameters and good schema descriptions, the description is minimally adequate, but it omits the return format (e.g., JSON output) and pagination details. It provides enough for a simple fetch but is not comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage, with descriptions for both parameters. The description adds no extra meaning beyond the schema, so the baseline score applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool fetches products from Shopify stores via a public endpoint. It is implicitly distinct from siblings like product_detail and product_search in that it lists products from an entire store, but the description never draws that distinction explicitly.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like barcode_lookup or product_search. No mention of prerequisites, limitations, or use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming the connector lets you:

- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.