Shopify
Server Details
Shopify MCP Pack — wraps the Shopify Admin REST API (2024-01)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-shopify
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 10 of 10 tools scored. Lowest: 2.6/5.
The Shopify tools (shopify_get_order, shopify_list_orders, etc.) are clearly distinct, but the Pipeworx tools (ask_pipeworx, discover_tools, remember, recall, forget) overlap in purpose. ask_pipeworx claims to pick the right tool and fill arguments, which subsumes discover_tools and makes their boundaries ambiguous. Additionally, ask_pipeworx's description suggests it can answer questions from 'best available data source,' but it's unclear if it uses the Shopify tools or not, creating potential confusion.
The server mixes two naming styles: 'shopify_'-prefixed verb_noun tools (shopify_get_order, shopify_list_products) and a separate set of Pipeworx tools that use bare imperative verbs (ask_pipeworx, discover_tools, remember, recall, forget) with no prefix or consistent pattern. This inconsistency makes it hard for an agent to predict tool names.
With 10 tools, the count is reasonable. The server covers both general memory/question-answering capabilities and specific Shopify operations. The presence of both ask_pipeworx and discover_tools suggests some redundancy, but overall the count is appropriate for a server that integrates a broader AI assistant with a specific e-commerce platform.
The Shopify operations are limited to read-only: get product, list products, get order, list orders, list customers. There are no create, update, or delete operations for Shopify resources, which is a notable gap for e-commerce management. The Pipeworx tools provide general memory and tool discovery, but their coverage is vague. Completeness is mediocre.
Available Tools
10 tools

ask_pipeworx (Grade A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
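Per the MCP specification, a client invokes a tool with the JSON-RPC `tools/call` method. A minimal sketch of an ask_pipeworx invocation payload (the question text is illustrative):

```python
import json

# JSON-RPC 2.0 envelope for MCP's tools/call method; the single required
# argument matches the schema above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}

print(json.dumps(request, indent=2))
```

The server picks the backing tool and fills its arguments internally, so this one envelope shape covers any question.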
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description fully discloses that Pipeworx selects the best tool and fills arguments, returning the result. Since no annotations are provided, the description carries the full burden and does so excellently by explaining the internal delegation behavior and expected outcome.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (three sentences) and front-loaded with the core purpose. The examples are helpful but could be trimmed to one or two. Overall, it is well-structured and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with one parameter and no output schema, the description adequately explains input and expected behavior. It does not detail what happens if the question is ambiguous or unsupported, but given the simplicity, it is sufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds context that the 'question' parameter should be in plain English and gives examples, but the schema already describes it as 'Your question or request in natural language'. With 100% schema coverage, the description adds minimal additional value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it takes a natural language question and returns an answer by selecting the appropriate tool and filling arguments. It provides concrete examples like 'What is the US trade deficit with China?', making the purpose unambiguous and distinct from siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use it: when you want to ask a question in plain English without browsing tools or learning schemas. It implies you should use this instead of other tools when you don't know which specific tool to use. However, it does not explicitly state when not to use it (e.g., when you need direct access to a specific tool).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
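A hypothetical argument builder (not part of the server) that enforces the documented default (20) and cap (50) on `limit`:

```python
# build_discover_call is an illustrative helper: it clamps limit to the
# documented maximum of 50 and applies the default of 20 when omitted.
def build_discover_call(query, limit=20):
    return {
        "name": "discover_tools",
        "arguments": {"query": query, "limit": min(limit, 50)},
    }

call = build_discover_call("find trade data between countries", limit=100)
```

Clamping client-side keeps an agent from tripping a server-side validation error when it over-asks.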
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry behavioral transparency. It describes the tool as searching and returning tools, but does not disclose details like whether it's read-only, whether it triggers side effects, or any rate limits. However, the description is clear enough for a search tool, earning a 3.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise: two sentences that efficiently convey purpose, usage context, and return value. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool is a search/discovery tool, the description is complete enough. It explains when to use it and what it returns. However, it lacks details on how results are ordered (e.g., relevance) and whether it supports pagination beyond the limit parameter. The absence of an output schema means the description could have mentioned the format of returned tools, but it's still adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with descriptions for both parameters. The description adds context by explaining the 'query' parameter should be a natural language description, providing examples, which goes beyond the schema. The 'limit' parameter is adequately described in the schema. Thus, a 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it searches the tool catalog by describing needs and returns relevant tools with names and descriptions. It specifies a concrete verb ('search') and resource ('tool catalog'), and distinguishes itself from siblings by being a meta-tool for tool discovery.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly instructs to call this tool FIRST when there are 500+ tools available, providing a clear usage context. It also indicates the return type (names and descriptions) which helps in deciding when to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade A)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It states the tool deletes a memory, which implies mutability. However, it doesn't disclose whether the operation is irreversible, what happens if the key doesn't exist, or any side effects. This is adequate but not rich.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with no unnecessary words. It is front-loaded with the action and object.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 required param, no output schema, no annotations), the description is minimally complete. It states what it does and how. However, it lacks error handling behavior and success confirmation details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, and the description adds no extra meaning beyond the schema. The description's 'by key' matches the schema's 'key' parameter. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the verb 'Delete' and the resource 'stored memory', and specifies the action is 'by key'. It clearly distinguishes itself from siblings like 'remember' (create) and 'recall' (retrieve).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage (when you want to delete a specific memory by its key) but does not provide explicit guidance on when not to use it or alternatives (e.g., if you want to delete all memories, or if the key doesn't exist).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It discloses that omitting key lists all memories, and implies retrieval is read-only. However, it does not mention side effects (e.g., whether retrieval marks memory as accessed) or persistence details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two clear, front-loaded sentences. First sentence states purpose and usage pattern; second sentence adds context. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 optional param, no output schema), the description is sufficient. It explains both modes of use. A small gap: does not specify the format of the returned memory (e.g., plain text, JSON), but with no output schema, some ambiguity is acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% and the parameter 'key' is described. The description adds value by explaining the behavior when key is omitted (list all), which is not in the schema. No extra details on format are needed as key is a simple string.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a memory by key or lists all memories when key is omitted. It specifies the resource ('stored memory') and action ('retrieve'), and distinguishes from sibling tools like 'remember' and 'forget'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use (to retrieve context saved earlier) and how to use (omit key to list all). It does not mention alternatives, but the sibling list shows 'remember' and 'forget' cover other operations, making this tool's role clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
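The three memory tools above (remember, recall, forget) share one key-value contract. A toy in-memory model of that contract; the real tools persist server-side (24 hours for anonymous sessions), and the unknown-key and missing-key behaviors shown here are assumptions, since the descriptions do not specify them:

```python
# Toy model of the remember / recall / forget call semantics.
_memory = {}

def remember(key, value):
    _memory[key] = value

def recall(key=None):
    if key is None:
        return sorted(_memory)   # omitting key lists all stored keys
    return _memory.get(key)      # unknown key -> None (assumed behavior)

def forget(key):
    _memory.pop(key, None)       # missing key treated as a no-op (assumed)

remember("target_ticker", "AAPL")
```

The key names mirror the schema examples ("target_ticker" etc.); nothing here reflects the server's actual storage format.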
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses persistence behavior: 'Authenticated users get persistent memory; anonymous sessions last 24 hours', which is critical for an agent to understand memory lifetime.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with no wasted words. Each sentence adds value: what it does, when to use, and behavioral details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple tool (2 params, no output schema), the description is nearly complete. It could optionally mention return value (e.g., success message) but not required.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds no extra meaning beyond what the schema already provides (key and value fields are well-documented in schema).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Store a key-value pair in your session memory', which is a specific verb+resource pair. It differentiates from siblings like 'recall' and 'forget' by its store operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage context: 'Use this to save intermediate findings, user preferences, or context across tool calls'. However, it does not explicitly say when not to use it or compare to alternatives like 'recall'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shopify_get_order (Grade C)
Get a single order by ID from a Shopify store.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Order ID | |
| _shop | Yes | Shop domain (e.g., mystore.myshopify.com) | |
| _apiKey | Yes | Shopify Admin API access token | |
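A sketch of the Admin REST request this tool presumably issues under the hood, assuming the standard Shopify order endpoint at API version 2024-01 (per the server description). Shop domain, order ID, and token are placeholders; `X-Shopify-Access-Token` is Shopify's documented auth header:

```python
# Builds the URL and auth header for GET /admin/api/2024-01/orders/{id}.json.
# This mirrors what the tool's _shop, id, and _apiKey parameters map onto.
def order_request(shop, order_id, api_key):
    url = f"https://{shop}/admin/api/2024-01/orders/{order_id}.json"
    headers = {"X-Shopify-Access-Token": api_key}
    return url, headers

url, headers = order_request("mystore.myshopify.com", 450789469, "shpat_xxx")
```

Passing the token as a tool argument (`_apiKey`) rather than via managed credentials is worth noting when auditing call logs.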
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It does not disclose any side effects, rate limits, permissions, or error conditions. Simply stating 'Get a single order' provides minimal transparency beyond the name.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no fluff. It is appropriately concise for a simple retrieval operation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description is insufficient. It does not explain what data the order contains, any field options, or behavior when order is not found.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds no extra meaning beyond what the schema already provides (id, _shop, _apiKey).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get'), resource ('order'), and qualifier ('single by ID') with the source ('from a Shopify store'). It is distinct from siblings like 'shopify_list_orders' which retrieves multiple orders.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives (e.g., for batch retrieval use shopify_list_orders). No prerequisites or context for using the tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shopify_get_product (Grade B)
Get a single product by ID from a Shopify store.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Product ID | |
| _shop | Yes | Shop domain (e.g., mystore.myshopify.com) | |
| _apiKey | Yes | Shopify Admin API access token | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It describes the basic action (get) and mentions the source (Shopify store), but doesn't disclose side effects (none expected), rate limits, authentication details beyond parameters, or error cases. Acceptable for a simple read operation, but could be improved.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with the action and resource. No wasted words. Efficient and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (3 parameters, no output schema, no nested objects), the description is fairly complete. However, it doesn't mention the return format (e.g., product object with fields) or potential errors (e.g., not found). For a simple read tool, this is acceptable but could be slightly more informative.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (all three parameters have descriptions in the schema). The description does not add any additional meaning beyond the schema. Baseline 3 is appropriate since the schema already documents parameters adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Get a single product by ID from a Shopify store,' which clearly identifies the verb (Get), resource (product), and scope (by ID from a Shopify store). It distinguishes the tool from siblings like shopify_list_products (list vs. single) and shopify_get_order (product vs. order), and 'single' makes the one-product scope explicit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. For example, it doesn't mention that shopify_list_products should be used to get multiple products or for searching. It also doesn't mention prerequisites like having a valid API key or shop domain, though those are in the input schema.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shopify_list_customers (Grade C)
List customers from a Shopify store.
| Name | Required | Description | Default |
|---|---|---|---|
| _shop | Yes | Shop domain (e.g., mystore.myshopify.com) | |
| limit | No | Number of customers to return (max 250, default 50) | |
| _apiKey | Yes | Shopify Admin API access token | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavior. It does not mention pagination, rate limits, authentication requirements beyond schema fields, or any side effects. The tool lists customers but does not clarify if it returns all or filtered results.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, concise and to the point. It is front-loaded with the action and resource, but lacks any additional details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 3 parameters, no output schema, and no annotations, the description is too minimal. It does not explain return values, filtering capabilities, or pagination, leaving agents without critical usage information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description adds no additional meaning beyond what the schema provides. Parameters are documented in the schema but the description does not explain them.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool lists customers from a Shopify store. However, it does not distinguish this tool from siblings like shopify_list_orders or shopify_list_products, and the verb 'list' is generic.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like shopify_get_order or shopify_list_orders. No mention of prerequisites or context for use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shopify_list_orders (Grade B)
List orders from a Shopify store, optionally filtered by status.
| Name | Required | Description | Default |
|---|---|---|---|
| _shop | Yes | Shop domain (e.g., mystore.myshopify.com) | |
| limit | No | Number of orders to return (max 250, default 50) | |
| status | No | Filter by status: open, closed, cancelled, any (default: open) | |
| _apiKey | Yes | Shopify Admin API access token | |
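A hypothetical client-side builder (not part of the server) that enforces the documented status values and the 250-row cap on `limit` before the call is made:

```python
# Valid values come straight from the status parameter's schema description.
VALID_STATUSES = {"open", "closed", "cancelled", "any"}

def list_orders_args(shop, api_key, status="open", limit=50):
    if status not in VALID_STATUSES:
        raise ValueError("status must be one of: " + ", ".join(sorted(VALID_STATUSES)))
    return {
        "_shop": shop,
        "_apiKey": api_key,
        "status": status,
        "limit": min(limit, 250),  # documented max is 250
    }

args = list_orders_args("mystore.myshopify.com", "shpat_xxx",
                        status="closed", limit=300)
```

Validating the enum locally turns a server round-trip failure into an immediate, explainable error for the agent.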
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description must disclose behavior. It indicates a read operation (list) and mentions optional filtering, but lacks details on pagination, ordering, or response format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One short sentence, no fluff. Could be slightly more informative without adding length, but it's concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has 4 params, no output schema, no annotations. The description covers the basic purpose but lacks details on pagination, rate limits, or return structure. Adequate for a simple list operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so each parameter is documented in the schema. The description adds the default status and limit, but the schema already covers that. No additional semantics beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists orders from a Shopify store and can optionally filter by status. However, it does not differentiate from sibling tools like shopify_get_order, which is a different operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like shopify_get_order for single orders, or when the status filter is appropriate. No mention of prerequisites or context for use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shopify_list_products (Grade B)
List products from a Shopify store. Returns up to 50 products by default.
| Name | Required | Description | Default |
|---|---|---|---|
| _shop | Yes | Shop domain (e.g., mystore.myshopify.com) | |
| limit | No | Number of products to return (max 250, default 50) | |
| _apiKey | Yes | Shopify Admin API access token | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions default return limit (50) and max (250), which adds behavioral context beyond the schema. However, it does not disclose pagination behavior, potential rate limits, or whether results are ordered.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences with no waste. Front-loaded with purpose, then the key constraint (up to 50 by default). Could be slightly more structured but very concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 3 required parameters, no output schema, and no annotations, the description provides minimal completeness. It covers purpose and limit default but lacks behavioral details like pagination, ordering, or what data is returned per product. A 3 is adequate for a simple list tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds no additional meaning beyond the schema; it mentions the default limit but does not explain the _shop or _apiKey parameters beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'List products from a Shopify store', pairing a specific verb with a resource. It is distinct from siblings like shopify_get_product (single product) and shopify_list_orders (orders), but it does not explicitly differentiate itself from other list tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies usage for listing products but provides no guidance on when to use this vs alternatives like shopify_get_product for single product details. No exclusion criteria or prerequisites beyond the required parameters.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
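As a concrete illustration of the "use X instead of Y when Z" guidance the review asks for, one possible revision of the tool's description — the wording is ours, not the server's:

```python
# Hypothetical revised tool definition -- illustrative only.
revised_tool = {
    "name": "shopify_list_products",
    "description": (
        "List products from a Shopify store. Returns up to 50 products "
        "by default (max 250). Use this to browse or search the catalog; "
        "use shopify_get_product instead when you already know the "
        "product ID and need full details for a single product."
    ),
}
```

The second sentence is the part the current description lacks: an explicit pointer to the sibling tool and the condition under which to prefer it.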
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
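Before publishing, it may help to sanity-check the file's shape. A minimal validation sketch, with rules inferred only from the example above rather than from a published schema:

```python
def looks_like_valid_glama_json(doc: dict) -> bool:
    """Check the minimal structure shown above: a $schema string and at
    least one maintainer entry carrying an email address."""
    maintainers = doc.get("maintainers")
    return (
        isinstance(doc.get("$schema"), str)
        and isinstance(maintainers, list)
        and len(maintainers) > 0
        and all(isinstance(m, dict) and "email" in m for m in maintainers)
    )
```

Remember that passing this check is not sufficient on its own: the maintainer email must also match your Glama account.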
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.