mcp-server
Server Details
Monetize and manage your Tip4Serv store directly from your LLM.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
19 tools

create_discount_coupon (Grade: C)
Create a store coupon
| Name | Required | Description | Default |
|---|---|---|---|
| code | Yes | Coupon code used by customers. | |
| type | Yes | Type of discount. | |
| limit | No | Maximum number of uses allowed for this coupon. | |
| value | Yes | Discount amount (percentage or fixed value depending on `type`). | |
| maximum | No | Maximum purchase amount in store currency allowed to use the coupon. | |
| minimum | No | Minimum purchase amount in store currency required to use the coupon. | |
| expiration | No | Expiration timestamp (store timezone). | |
| accepted_products | No | List of product IDs eligible for the coupon. Empty array means all products. | |
| accepted_categories | No | List of category IDs eligible for the coupon. Empty array means all categories. | |
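Based on the parameter table above, a minimal sketch of assembling the arguments for this tool. The field names come from the table; the discount-type strings and the plain-Unix-timestamp format for `expiration` are assumptions, since the listing does not spell them out.

```python
import time

# Assumed discount type values; the listing only says "Type of discount".
KNOWN_TYPES = {"percentage", "fixed"}

def build_coupon_args(code, discount_type, value, *, limit=None,
                      minimum=None, maximum=None, expires_in_days=None,
                      accepted_products=(), accepted_categories=()):
    """Assemble a create_discount_coupon arguments dict.

    Required fields per the table: code, type, value. Optional fields
    are omitted when unset so the server can apply its own defaults.
    """
    if discount_type not in KNOWN_TYPES:
        raise ValueError(f"unknown discount type: {discount_type!r}")
    if discount_type == "percentage" and not 0 < value <= 100:
        raise ValueError("percentage discounts must be in (0, 100]")
    args = {
        "code": code,
        "type": discount_type,
        "value": value,
        # Empty arrays mean "all products/categories" per the table.
        "accepted_products": list(accepted_products),
        "accepted_categories": list(accepted_categories),
    }
    if limit is not None:
        args["limit"] = limit
    if minimum is not None:
        args["minimum"] = minimum
    if maximum is not None:
        args["maximum"] = maximum
    if expires_in_days is not None:
        # Expiration is a timestamp in the store's timezone; a plain
        # Unix timestamp is assumed here.
        args["expiration"] = int(time.time()) + expires_in_days * 86400
    return args

args = build_coupon_args("SUMMER25", "percentage", 25, limit=100)
```

Since the assessment below notes that duplicate-code behavior is undocumented, a client should treat a failed call as possibly meaning "code already exists" rather than retrying blindly.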
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden for behavioral disclosure. 'Create a store coupon' implies a write operation, but it doesn't specify permissions required, whether creation is idempotent, what happens on duplicate codes, or any side effects. For a mutation tool with zero annotation coverage, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. It's appropriately sized and front-loaded, immediately conveying the core purpose without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a mutation tool with 9 parameters, no annotations, and no output schema, the description is incomplete. It doesn't address behavioral aspects (permissions, idempotency), usage context, or what the tool returns. The agent lacks critical information to use this tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 9 parameters thoroughly. The description adds no additional parameter semantics beyond what's in the schema (e.g., it doesn't explain relationships between parameters like 'type' and 'value'). Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Create a store coupon' clearly states the action (create) and resource (store coupon), which is specific and unambiguous. However, it doesn't differentiate from sibling tools like 'create_discount_giftcard' or 'update_discount_coupon', which would require mentioning this creates new coupons rather than modifying existing ones or creating gift cards.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing admin permissions), when to choose this over 'update_discount_coupon', or any constraints like rate limits. This leaves the agent without context for proper tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_discount_giftcard (Grade: C)
Create a store gift card
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Name of the gift card. | |
| currency | Yes | Currency of the gift card. | |
| initial_credit | Yes | Initial credit value of the gift card in its currency. | |
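A similar sketch for the gift card payload. The three-letter currency check is an assumption (the listing only says "Currency of the gift card"), as is the positive-credit check, since minimum and maximum credit values are undocumented.

```python
# Hypothetical client-side validation for create_discount_giftcard
# arguments; all three fields are required per the table above.
def build_giftcard_args(name, currency, initial_credit):
    if not name.strip():
        raise ValueError("gift card name must be non-empty")
    if len(currency) != 3 or not currency.isalpha():
        raise ValueError("currency is expected as a 3-letter code, e.g. 'EUR'")
    if initial_credit <= 0:
        raise ValueError("initial_credit must be positive")
    return {
        "name": name,
        "currency": currency.upper(),
        "initial_credit": initial_credit,
    }

g = build_giftcard_args("Birthday", "eur", 50)
```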
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It states 'Create' which implies a write operation, but doesn't mention permissions required, whether the creation is irreversible, rate limits, or what happens on success/failure. For a tool that creates financial instruments, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that gets straight to the point with zero wasted words. It's appropriately sized for a tool with clear purpose and good schema documentation, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool that creates financial gift cards with no annotations and no output schema, the description is insufficient. It doesn't explain what happens after creation (e.g., does it return a gift card ID? activation code?), error conditions, or business logic constraints (e.g., minimum/maximum credit values). The combination of mutation operation and financial context demands more completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds no parameter information beyond what's already in the schema, which has 100% coverage with clear descriptions for all three required parameters (name, initial_credit, currency). Since the schema does the heavy lifting, the baseline score of 3 is appropriate, though the description doesn't enhance understanding of parameter relationships or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Create a store gift card' clearly states the action (create) and resource (store gift card), making the purpose immediately understandable. However, it doesn't differentiate from sibling tools like 'create_discount_coupon' or 'update_discount_giftcard', which would require specifying this creates new gift cards rather than modifying existing ones or creating coupons.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, such as needing store access or authentication, or differentiate from similar tools like 'create_discount_coupon' or 'update_discount_giftcard'. This leaves the agent without context for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_categories (Grade: B, Read-only)
Get store categories (paginated)
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | The page number to retrieve. | |
| parent | No | Optional parent category ID to filter child categories. | |
| max_page | No | The maximum number of elements per page (5–50). | |
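Since the tool is paginated with a page size bounded to 5–50, fetching every category means looping until the last page. `call_tool` below is a hypothetical stand-in for whatever MCP client call your framework exposes, and the assumption that the final page is the first short (or empty) one is not confirmed by the listing, which documents no total count.

```python
# Sketch of walking every page of get_categories.
def fetch_all_categories(call_tool, parent=None, max_page=50):
    max_page = max(5, min(50, max_page))  # page size is bounded to 5-50
    page, categories = 1, []
    while True:
        args = {"page": page, "max_page": max_page}
        if parent is not None:
            args["parent"] = parent  # filter to children of this category
        batch = call_tool("get_categories", args)
        if not batch:
            break
        categories.extend(batch)
        if len(batch) < max_page:  # short page: assume it was the last
            break
        page += 1
    return categories
```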
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare this as read-only, non-destructive, and open-world, covering the safety profile. The description adds 'paginated' which is valuable behavioral context not in annotations, indicating the response will be paginated rather than returning all categories at once. However, it doesn't describe response format, error conditions, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at just four words. Every element earns its place: 'Get' (action), 'store categories' (resource), and '(paginated)' (key behavioral trait). No wasted words or redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only list tool with good annotations and full schema coverage, the description is minimally adequate. However, without an output schema, the description doesn't explain what the response contains (category objects with what fields) or pagination details (how to interpret page numbers, total counts). The 'paginated' mention helps but doesn't provide complete context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all three parameters well-documented in the schema. The description doesn't add any parameter-specific information beyond what's in the schema. The baseline of 3 is appropriate when the schema provides complete parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('store categories'), making the purpose understandable. It distinguishes from siblings like 'get_products' by specifying categories rather than products, but doesn't explicitly differentiate from other category-related tools (none exist in siblings). The addition of '(paginated)' is helpful context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided about when to use this tool versus alternatives. While siblings include various 'get_' tools for different resources (products, customers, payments), the description doesn't mention when to retrieve categories specifically or whether this is the primary way to access category data. The pagination mention implies large datasets but doesn't provide usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_customers (Grade: B, Read-only)
Get customers (paginated, sortable, filterable by date range)
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | The page number to retrieve. | |
| sort | No | Sorting criteria for customers. Currently supported: `revenue`. | |
| max_page | No | The maximum number of elements per page (5–50). | |
| date_filter | No | Time filtering for customers, using timestamps in store's timezone. Format: "[start_timestamp,end_timestamp]". Example: "[1759333917,1761234717]". | |
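Note that `date_filter` is a literal string, not a JSON array, which is easy to get wrong. A small helper for building it, assuming the store's timezone is UTC (substitute the store's actual timezone):

```python
from datetime import datetime, timezone

# Format two datetimes as the "[start_timestamp,end_timestamp]" string
# the date_filter parameter expects.
def make_date_filter(start: datetime, end: datetime) -> str:
    if end < start:
        raise ValueError("end must not precede start")
    return f"[{int(start.timestamp())},{int(end.timestamp())}]"

start = datetime(2025, 10, 1, tzinfo=timezone.utc)
end = datetime(2025, 10, 22, tzinfo=timezone.utc)
args = {"page": 1, "sort": "revenue", "date_filter": make_date_filter(start, end)}
```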
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=true, and destructiveHint=false, so the agent knows this is a safe, non-destructive read operation with open-world semantics. The description adds context about pagination, sorting, and filtering, which are useful behavioral traits not covered by annotations. However, it doesn't disclose rate limits, authentication needs, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and key capabilities without any wasted words. It's appropriately sized for the tool's complexity and structured to convey essential information quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (4 parameters, no output schema), the description covers the main behavioral aspects (pagination, sorting, filtering). With annotations providing safety and openness context, and schema covering parameters fully, the description is reasonably complete. However, it lacks details on return format or error handling, which could be helpful for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all parameters well-documented in the schema. The description mentions pagination, sorting, and date range filtering, which aligns with the parameters but doesn't add significant meaning beyond what the schema provides. The baseline score of 3 is appropriate given the high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('customers'), making the purpose evident. It also mentions key capabilities (paginated, sortable, filterable by date range), which helps distinguish it from other list operations. However, it doesn't explicitly differentiate from sibling tools like 'get_products' or 'get_servers' beyond the resource type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, when not to use it, or compare it to sibling tools like 'get_payments' or 'get_subscriptions' for related data needs. Usage is implied by the resource name only.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_discount_coupons (Grade: B, Read-only)
Get store coupons (paginated)
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | The page number to retrieve. | |
| max_page | No | The maximum number of elements per page (5–50). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=true, so the agent knows this is a safe, read-only operation with potentially large result sets. The description adds 'paginated' which provides important behavioral context about how results are returned, but doesn't mention rate limits, authentication requirements, or what specific coupon data is returned. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise at just four words plus parentheses, with zero wasted language. It's front-loaded with the core purpose ('Get store coupons') and includes the critical behavioral detail ('paginated') efficiently. Every element earns its place without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only tool with good annotations (readOnlyHint, openWorldHint) and full schema coverage, the description provides the minimum viable information. However, without an output schema, the description doesn't explain what coupon data is returned (e.g., codes, values, expiration dates) or the pagination structure. The combination of annotations and description gives adequate but incomplete context for the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, both parameters (page and max_page) are fully documented in the schema with descriptions, types, defaults, and constraints. The description adds no additional parameter semantics beyond implying pagination exists, which is already evident from the parameter names. This meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Get store coupons (paginated)' clearly states the action (get) and resource (store coupons), with the parenthetical '(paginated)' providing important scope information. It distinguishes from siblings like create_discount_coupon and update_discount_coupon by focusing on retrieval rather than modification. However, it doesn't explicitly differentiate from get_discount_giftcards which retrieves a different resource type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. There's no mention of prerequisites, when this tool is appropriate versus other retrieval tools like get_products or get_customers, or any context about what 'store coupons' encompasses. The agent must infer usage from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_discount_giftcards (Grade: B, Read-only)
Get store gift cards (paginated)
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | The page number to retrieve. | |
| max_page | No | The maximum number of elements per page (5–50). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover read-only, non-destructive, and open-world hints, but the description adds valuable context: it explicitly states the tool is paginated, which isn't indicated in annotations. This clarifies behavioral traits like result chunking and the need for iterative calls, enhancing transparency beyond the structured data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—a single, front-loaded sentence with zero wasted words. It efficiently conveys the core purpose and a key behavioral trait (paginated), making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (read-only list operation), rich annotations, and full schema coverage, the description is minimally adequate. However, it lacks output details (no schema provided) and doesn't clarify scope or filtering, leaving gaps in completeness for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema fully documents both parameters (page and max_page with defaults and constraints). The description adds no additional meaning about parameters, so it meets the baseline of 3 without compensating for any gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the action ('Get') and resource ('store gift cards'), but it's vague about scope and lacks sibling differentiation. It doesn't specify whether this retrieves all gift cards, active ones, or a filtered subset, and doesn't distinguish it from tools like 'get_discount_coupons' or 'update_discount_giftcard'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context (e.g., for viewing gift cards vs. coupons), or exclusions, leaving the agent to infer usage from the name alone among many sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_payment (Grade: C, Read-only)
Get payment
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | The payment ID to retrieve. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=true, and destructiveHint=false, covering safety and data scope. The description does not contradict these, and while it adds no behavioral details beyond annotations, the annotations themselves are comprehensive. No additional context like rate limits or auth needs is provided, but the annotations sufficiently inform the agent's expectations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with just two words, 'Get payment', which directly states the tool's purpose without any fluff. It is front-loaded and wastes no space, though this brevity contributes to gaps in other dimensions like purpose clarity and usage guidelines.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single parameter, high schema coverage, no output schema) and rich annotations, the description is minimally adequate. However, it lacks details on return values or error handling, which are not covered by annotations or schema, leaving some contextual gaps for the agent despite the straightforward nature of the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'id' fully documented in the schema as 'The payment ID to retrieve.' The description adds no parameter information beyond what the schema provides, so it meets the baseline score of 3 where the schema handles parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Get payment' is a tautology that restates the tool name without adding meaningful context. It specifies the verb 'get' and resource 'payment', but lacks any distinguishing details about scope, format, or differentiation from sibling tools like 'get_payments' (plural). This minimal statement provides only basic intent without specificity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description does not mention prerequisites, context, or exclusions, nor does it differentiate from the sibling 'get_payments' tool. Without any usage instructions, the agent must infer appropriate scenarios solely from the tool name and schema.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_payments (Grade: A, Read-only)
Get payments (paginated, optionally filtered by user identifier)
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | The page number to retrieve. | |
| max_page | No | The maximum number of elements per page (5–50). | |
| identifier | No | User identifier to filter payments. Accepted identifiers: email, username, eosid, ue4_id, steam_username, steam_id, minecraft_uuid, discord_id, discord_username, fivem_xPlayer_id, fivem_citizen_id, minecraft_username. | |
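The `identifier` parameter takes the identifier's value (an email address, a Steam ID, and so on) as a single opaque string; the server resolves which of the accepted kinds it is. A hedged sketch of assembling the arguments, where the `max_page` default of 25 is an arbitrary choice here, not a documented default:

```python
# Hypothetical argument builder for get_payments.
def build_payments_args(page=1, max_page=25, identifier=None):
    if not 5 <= max_page <= 50:
        raise ValueError("max_page must be within 5-50")
    args = {"page": page, "max_page": max_page}
    if identifier is not None:
        # Passed as a plain string; the server matches it against any
        # accepted kind (email, steam_id, discord_id, ...).
        args["identifier"] = str(identifier)
    return args

a = build_payments_args(identifier="player@example.com")
```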
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=true, and destructiveHint=false, covering safety and scope. The description adds pagination behavior and filtering capability, which provides useful context beyond annotations. However, it doesn't describe response format, error conditions, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core functionality ('Get payments') and immediately adds key behavioral details (paginated, optionally filtered). Every word earns its place with zero redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only list tool with good annotations and full schema coverage, the description adequately covers core behavior. However, without an output schema, it doesn't describe what the payments data looks like (structure, fields), leaving a gap for the agent to understand return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all parameters well-documented in the schema. The description mentions pagination and optional user identifier filtering, which aligns with the schema but doesn't add significant semantic value beyond what's already in parameter descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('payments'), and specifies pagination and optional filtering by user identifier. It distinguishes from sibling 'get_payment' (singular) by implying multiple payments, but doesn't explicitly contrast with other list tools like 'get_products' or 'get_customers'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving payments with optional user filtering, but provides no explicit guidance on when to use this versus alternatives like 'get_payment' (singular) or other list tools. It mentions filtering capability but doesn't specify scenarios or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_product_by_id (Grade: C, Read-only)
Get product by ID
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | The ID of the product to retrieve (path parameter). | |
| details | No | If true, returns detailed product information. | |
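A tiny sketch of the arguments. Omitting `details` when it is false is a stylistic choice here, assuming the server treats an absent flag as a request for the summary form:

```python
# Hypothetical argument builder for get_product_by_id; `id` is the
# required path parameter, `details` an optional verbosity flag.
def build_product_args(product_id, details=False):
    args = {"id": product_id}
    if details:
        args["details"] = True
    return args
```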
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, and open-world behavior, so the description adds no behavioral context beyond that. It doesn't disclose any additional traits like rate limits, authentication needs, or what 'Get' entails operationally. However, it doesn't contradict the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with a single sentence, 'Get product by ID', which is front-loaded and wastes no words. It efficiently conveys the core purpose without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and minimal description, this is incomplete for a retrieval tool. It doesn't explain what information is returned, how errors are handled, or any dependencies. With annotations covering safety but no output details, the description should provide more context for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema fully documents both parameters (id and details). The description adds no meaning beyond what the schema provides, such as explaining the 'details' parameter's impact or the ID format. Baseline score of 3 is appropriate as the schema carries the burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Get product by ID' clearly states the verb ('Get') and resource ('product'), but it's overly generic and doesn't differentiate the tool from siblings like 'get_products' or 'get_server_by_id'. It lacks specificity about what kind of product or what retrieval entails.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'get_products' or other 'get_*' tools. The description doesn't mention prerequisites, context, or exclusions, leaving the agent to infer usage from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_products (A, Read-only)
Get store products (paginated, with optional filters)
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | The page number to retrieve. | |
| details | No | If true, returns detailed product information. | |
| max_page | No | The maximum number of elements per page (5–50). | |
| only_enabled | No | If true, returns only enabled products. |
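The pagination the description advertises can be exercised with a simple page walk. Below is a minimal Python sketch; `call_tool` is a hypothetical stand-in for a real MCP client call, stubbed here with an in-memory catalog because the actual transport and response shape are not documented on this page.

```python
# Stub catalog: 12 products, even IDs enabled (assumed shape).
CATALOG = [{"id": i, "enabled": i % 2 == 0} for i in range(1, 13)]

def call_tool(name, arguments):
    # Stub emulating the server: clamps max_page to the documented 5-50
    # range, applies the only_enabled filter, then slices one page.
    assert name == "get_products"
    page = arguments.get("page", 1)
    per_page = min(max(arguments.get("max_page", 5), 5), 50)
    items = CATALOG
    if arguments.get("only_enabled"):
        items = [p for p in items if p["enabled"]]
    start = (page - 1) * per_page
    return items[start:start + per_page]

def fetch_all_products(only_enabled=False, per_page=5):
    """Walk pages until an empty page signals the end of the list."""
    products, page = [], 1
    while True:
        batch = call_tool("get_products", {
            "page": page,
            "max_page": per_page,
            "only_enabled": only_enabled,
        })
        if not batch:
            return products
        products.extend(batch)
        page += 1

all_products = fetch_all_products()
enabled_only = fetch_all_products(only_enabled=True)
```

Stopping on an empty page is itself an assumption; the description does not say whether the response carries a total count or a last-page marker.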
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, and open-world behavior. The description adds value by specifying pagination and optional filters, which are behavioral traits not covered by annotations. However, it doesn't detail rate limits, auth needs, or response format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded in a single sentence, with no wasted words. Every part ('Get store products', 'paginated', 'with optional filters') contributes essential information efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (4 parameters, no output schema), the description covers basic purpose and behavior but lacks details on output format, error handling, or integration with siblings. Annotations help, but more context would improve completeness for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema fully documents all parameters. The description mentions 'optional filters' but doesn't add specific meaning beyond what the schema provides, aligning with the baseline score for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and resource ('store products'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'get_product_by_id' or 'get_categories', which would require more specificity for a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance with 'paginated, with optional filters', but offers no explicit advice on when to use this tool versus alternatives like 'get_product_by_id' or 'get_categories'. No context on prerequisites or exclusions is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_server_by_id (C, Read-only)
Get server by ID
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | The ID of the server to retrieve. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=true, and destructiveHint=false, indicating this is a safe, read-only operation that may not return all possible data. The description adds no behavioral context beyond this—it doesn't mention authentication needs, rate limits, error conditions, or what happens if the ID doesn't exist. With annotations covering the core safety profile, the description adds minimal value but doesn't contradict them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise at four words, with zero wasted language. It's front-loaded with the core action ('Get'), though this brevity comes at the cost of completeness. Every word earns its place by directly stating the tool's function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of an output schema and the tool's role in a system with multiple server-related tools (e.g., 'get_servers', 'get_server_commands'), the description is incomplete. It doesn't explain what a 'server' represents in this context, what data is returned, or how this tool fits into the broader workflow. The annotations help but don't compensate for these gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the single parameter 'id' documented as 'The ID of the server to retrieve.' The description adds no additional meaning about the parameter, such as ID format, valid ranges, or examples. Baseline 3 is appropriate since the schema fully covers parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Get server by ID' is a tautology that restates the tool name without adding meaningful context. While it clearly indicates a retrieval action ('Get'), it doesn't specify what 'server' means in this domain or distinguish this tool from sibling tools like 'get_servers' (plural). The purpose is vague beyond the basic verb-noun structure.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'get_servers' (which likely lists multiple servers) or 'get_server_commands', nor does it specify prerequisites or appropriate contexts for retrieving a single server by ID. Usage is implied but not explicitly stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_server_commands (B, Read-only)
Get remaining actions/commands to execute for a server
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | The ID of the server. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, and open-world behavior, so the description does not need to repeat these. It adds value by specifying the resource ('remaining actions/commands'), which implies a focus on pending or queued items, but does not detail aspects like response format, pagination, or error handling. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, direct sentence that efficiently conveys the tool's purpose without unnecessary words. It is front-loaded with the key action and resource, making it easy to understand at a glance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input schema (one parameter with full coverage) and annotations covering safety aspects, the description is minimally adequate. However, without an output schema, it does not explain what is returned (e.g., list of commands, status details), which could be helpful for an agent. The description meets basic needs but lacks depth for full contextual understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with a clear description for the 'id' parameter. The description does not add any additional meaning beyond the schema, such as explaining what the 'id' refers to (e.g., server identifier) or constraints. With high schema coverage, a baseline score of 3 is appropriate as the schema handles the parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and the resource ('remaining actions/commands to execute for a server'), making the purpose evident. However, it does not explicitly differentiate from sibling tools like 'post_server_commands' or 'get_server_by_id', which slightly limits its specificity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. For instance, it does not clarify if this is for checking pending commands after using 'post_server_commands' or how it relates to other 'get_' tools like 'get_servers'. The description lacks context on prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_servers (B, Read-only)
Get servers (paginated)
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | The page number to retrieve. | |
| max_page | No | The maximum number of elements per page (5–50). |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, and open-world hints, covering safety and scope. The description adds pagination as a behavioral trait, which is useful context not in annotations, but doesn't detail return format, error handling, or rate limits, so it adds some value but not rich behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—just three words—and front-loaded with the core purpose. Every word earns its place, with no wasted text, making it efficient for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (simple paginated list), annotations cover safety and scope, and schema fully documents inputs, the description is minimally adequate. However, without an output schema, it doesn't explain return values or error cases, leaving some gaps in completeness for a list operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for 'page' and 'max_page' parameters. The description mentions pagination, aligning with the schema, but doesn't add extra meaning like default behavior or usage tips beyond what the schema provides, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and resource ('servers'), and specifies pagination, which is a key behavioral trait. However, it doesn't differentiate from sibling tools like 'get_server_by_id' or 'get_server_commands', which target specific servers or server commands respectively, so it misses full sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. For example, it doesn't mention using 'get_server_by_id' for a specific server or 'get_products' for different resources, leaving the agent to infer usage from context alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_store_info (B, Read-only)
Get general information about the store
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only, non-destructive, and open-world behavior, which the description doesn't contradict. The description adds value by specifying 'general information,' implying a broad overview rather than detailed data, which helps contextualize the tool's scope beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no wasted words. It's front-loaded and efficiently conveys the core purpose, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema) and rich annotations, the description is adequate but minimal. It lacks details on what 'general information' includes or the response format, which could be helpful for an agent to understand the output better.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema coverage, the schema fully documents the lack of inputs. The description adds meaning by implying the tool retrieves a default set of information without needing parameters, which is appropriate for a tool of this simplicity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Get general information about the store' clearly states the verb ('Get') and resource ('store'), making the purpose understandable. However, it's vague about what 'general information' entails and doesn't differentiate from sibling tools like 'get_store_theme' or 'get_categories', which also retrieve store-related data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context for usage, or exclusions, leaving the agent to infer based on tool names alone without explicit direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_store_theme (B, Read-only)
Get general information about the store theme
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already cover key behavioral traits (read-only, non-destructive, open-world), so the bar is lower. The description adds minimal context by specifying 'general information,' but it doesn't elaborate on what that includes, rate limits, or authentication needs. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without any fluff or unnecessary details. It's front-loaded and wastes no words, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema) and rich annotations, the description is adequate but minimal. It lacks details on what 'general information' entails or how it differs from similar tools, which could help the agent use it more effectively in context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the schema fully documents the lack of inputs. The description doesn't need to add parameter details, and it appropriately avoids redundancy, earning a baseline score for tools with no parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get') and resource ('general information about the store theme'), making it understandable. However, it doesn't explicitly distinguish this from sibling tools like 'get_store_info', which might also provide theme-related information, so it doesn't reach the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With siblings like 'get_store_info' that might overlap in functionality, there's no indication of context, prerequisites, or exclusions, leaving the agent to guess based on tool names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_subscriptions (A, Read-only)
Get subscriptions (paginated, filterable, recurring only, by user identifier)
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | The page number to retrieve. | |
| max_page | No | The maximum number of elements per page (5–50). | |
| identifier | No | User identifier to filter subscriptions. Same accepted identifiers as for payments. | |
| only_recurring_subscription | No | Include only recurring subscriptions. |
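The schema documents each parameter but not how an agent should combine them, so here is a small hedged sketch of assembling the call arguments. The email-style identifier is an assumption: the accepted formats are defined only by reference to the identifiers get_payments accepts.

```python
def subscription_args(identifier=None, recurring_only=False,
                      page=1, per_page=25):
    """Assemble get_subscriptions arguments, omitting unset optionals."""
    args = {"page": page, "max_page": per_page}  # max_page must be 5-50
    if identifier is not None:
        args["identifier"] = identifier
    if recurring_only:
        args["only_recurring_subscription"] = True
    return args

# One customer's recurring subscriptions (identifier format assumed).
args = subscription_args(identifier="user@example.com", recurring_only=True)
```

Omitting unset optional fields entirely, rather than sending nulls, is a deliberate choice here, since the schema does not state how null values are treated.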
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, open-world, and non-destructive behavior. The description adds valuable context beyond this: it specifies pagination, filtering capabilities, and the 'recurring only' constraint, which are not covered by annotations. No contradictions with annotations exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads key information: the action, resource, and main features (paginated, filterable, recurring only, by user identifier). There is no wasted verbiage, and every word contributes to understanding the tool's purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (filtering, pagination) and lack of an output schema, the description provides good context but could benefit from mentioning return format or pagination details. Annotations cover safety aspects, and the description adds operational context, making it largely complete for a read-only tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters. The description mentions filtering by user identifier and recurring subscriptions, which aligns with parameters but doesn't add significant semantic detail beyond what the schema provides. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('subscriptions'), and specifies key characteristics: paginated, filterable, recurring only, and by user identifier. This distinguishes it from sibling tools like get_payments or get_products, which handle different resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: to retrieve subscriptions with specific filtering options (recurring, by user). However, it doesn't explicitly state when not to use it or name alternatives among siblings, such as get_payments for payment-related data instead of subscriptions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_server_commands (C)
Update server commands (mark delivered actions for a server)
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | The ID of the server. | |
| delivered_commands_by_id | Yes | Map of command-group IDs to delivery results. Keys MUST be like `pay_<id>` or `sub_<id>`. DO NOT send an array. Examples: Single command: { "pay_603191": { "action": "payment", "cmds": { "0": "Executed" } } } Multiple commands: { "pay_71524": { "action": "payment", "cmds": { "123": "Executed", "124": "Not Executed" } } } |
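The key pattern and status strings above are strict enough to validate client-side before calling the tool. A hedged Python sketch that assembles the payload: the `report_delivery` helper is hypothetical, and the 'subscription' action label for `sub_` keys is a guess, since the parameter examples only show 'payment'.

```python
import re

# Keys MUST match pay_<id> or sub_<id>, per the parameter description.
KEY_PATTERN = re.compile(r"^(pay|sub)_\d+$")
STATUSES = {"Executed", "Not Executed"}

def report_delivery(results):
    """Build delivered_commands_by_id from {group_key: {cmd_id: status}}.

    Returns the object-shaped payload (never an array, per the schema).
    """
    payload = {}
    for key, cmds in results.items():
        if not KEY_PATTERN.match(key):
            raise ValueError(f"bad command-group key: {key!r}")
        if not all(s in STATUSES for s in cmds.values()):
            raise ValueError(f"unknown status in group {key!r}")
        payload[key] = {
            # 'subscription' for sub_ keys is an assumption.
            "action": "payment" if key.startswith("pay_") else "subscription",
            "cmds": {str(cid): status for cid, status in cmds.items()},
        }
    return payload

payload = report_delivery({
    "pay_71524": {123: "Executed", 124: "Not Executed"},
})
```

Coercing command IDs to strings mirrors the documented examples, where `cmds` keys like "123" are strings rather than numbers.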
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. 'Update' implies a mutation operation, but the description doesn't disclose important behavioral aspects: whether this requires specific permissions, if it's idempotent, what happens on partial failures, or what the response looks like. It mentions 'mark delivered' but doesn't explain the business context or consequences.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that gets straight to the point. It's appropriately sized for what it covers, though it could be more comprehensive given the lack of annotations. No wasted words or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description is insufficient. It doesn't explain what happens after marking commands as delivered, what errors might occur, or the business workflow context. The agent would need to guess about the operation's effects and response format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters. The description adds no additional parameter semantics beyond what's in the schema: it doesn't explain the business meaning of 'delivered commands' or provide context about the command-group ID patterns. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update server commands') and the specific operation ('mark delivered actions for a server'), which is more specific than just the tool name. However, it doesn't explicitly differentiate this from sibling tools like 'get_server_commands' or explain how this update differs from other update tools in the list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (like needing to first retrieve commands via 'get_server_commands'), nor does it explain the relationship with sibling tools. There's no 'when-not' or alternative tool guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_discount_coupon (C)
Update a store coupon
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | The ID of the coupon to update. | |
| payload | Yes | Fields to update on the coupon. |
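Because the payload's field set and omitted-field semantics are undocumented, an agent can at least validate field names client-side before calling. A hedged sketch assuming PATCH-like partial updates and the same field names as create_discount_coupon (whether update accepts that exact set is an assumption):

```python
# Field names mirrored from create_discount_coupon; this is a guess.
ALLOWED_FIELDS = {"code", "type", "value", "limit", "minimum", "maximum",
                  "expiration", "accepted_products", "accepted_categories"}

def coupon_update(coupon_id, **changes):
    """Build update_discount_coupon arguments with only changed fields."""
    unknown = set(changes) - ALLOWED_FIELDS
    if unknown:
        raise ValueError(f"unsupported fields: {sorted(unknown)}")
    # Send only what changed, assuming omitted fields are left untouched.
    return {"id": coupon_id, "payload": changes}

args = coupon_update(42, value=15, limit=100)
```

If the endpoint actually replaces the whole coupon rather than patching it, sending a sparse payload could clear fields; that ambiguity is exactly the gap the completeness critique below points out.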
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden for behavioral disclosure. 'Update a store coupon' implies a mutation operation but reveals nothing about permissions required, whether updates are reversible, rate limits, error conditions, or what happens to unspecified fields. For a mutation tool with zero annotation coverage, this leaves critical behavioral traits undocumented.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero wasted words. It's front-loaded with the core action ('Update') and resource ('store coupon'), making it immediately scannable. Every word earns its place, achieving maximum clarity in minimal space.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description is incomplete. It doesn't address behavioral aspects like side effects, error handling, or response format, nor does it provide usage context. The agent lacks critical information needed to invoke this tool safely and effectively, given its complexity (a nested payload object with multiple fields).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters ('id' and 'payload') well-documented in the schema. The description adds no parameter semantics beyond what the schema already provides; it doesn't clarify parameter relationships, constraints, or examples. With high schema coverage, the baseline score of 3 is appropriate, as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Update a store coupon' clearly states the verb ('update') and resource ('store coupon'), making the tool's purpose immediately understandable. It distinguishes itself from sibling tools like 'create_discount_coupon' and 'update_discount_giftcard' by specifying it updates existing coupons rather than creating new ones or modifying gift cards.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing coupon ID), differentiate from 'create_discount_coupon' beyond the obvious create/update distinction, or specify when to choose this over other discount-related tools. The agent must infer usage from the tool name and schema alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_discount_giftcard
Update a store gift card
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | The ID of the gift card. | |
| payload | Yes | Fields to update on the gift card. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states 'Update', which implies mutation, but doesn't disclose behavioral traits such as permission requirements, whether updates are reversible, rate limits, or what happens to unspecified fields. This is inadequate for a mutation tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. It's front-loaded and appropriately sized for the tool's complexity, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a mutation tool with no annotations and no output schema, the description is insufficient. It doesn't explain what the update operation returns, error conditions, or side effects. For a tool that modifies data, more context is needed to ensure safe and correct usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are fully documented in the schema. The description adds no meaning beyond 'Update a store gift card'; it aligns with the schema but doesn't provide extra context such as examples or edge cases. A baseline score of 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update') and resource ('a store gift card'), making the tool's purpose immediately understandable. However, it doesn't differentiate from sibling tools like 'update_discount_coupon' or specify what aspects can be updated beyond the generic term.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'create_discount_giftcard' or 'update_discount_coupon'. The description lacks context about prerequisites, such as needing an existing gift card ID, and does not call out scenarios where the tool should not be used.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.