Server Details

AI commerce for Shopify: product search, comparison, recommendations, and checkout via MCP.

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: Grade A

Average 4.4/5 across 7 of 7 tools scored.

Server Coherence: Grade A

Disambiguation: 4/5

Most tools have distinct purposes with clear boundaries, such as search_products for browsing and skincare_recommend for personalized advice. However, create_checkout and skincare_cart both handle checkout functionality, which could cause confusion about when to use each, as they overlap in creating purchase links.

Naming Consistency: 3/5

The naming is mixed with some tools using verb_noun patterns like compare_products and get_product, while others like skincare_cart and skincare_recommend use a noun-based prefix. This inconsistency is noticeable but still readable, as most names are descriptive of their functions.

Tool Count: 5/5

With 7 tools, the count is well-scoped for an e-commerce and skincare recommendation server. Each tool serves a specific role, such as product lookup, comparison, checkout, and issue reporting, making the set comprehensive without being overwhelming.

Completeness: 4/5

The tool set covers key e-commerce workflows like searching, comparing, recommending, and purchasing products, with a useful issue reporting tool. A minor gap is the lack of tools for updating or managing user profiles or orders, but core shopping and recommendation tasks are well-supported.

Available Tools

8 tools
compare_products (Grade A, Read-only)

Compare two or more products side by side. Use when the user asks to compare, says 'X vs Y', or wants to decide between options. Do not use for single product lookup — use get_product instead. Returns structured comparison with shared attributes, differences, tradeoffs, and a decision hint.

Parameters (JSON Schema)

- products (required): Product titles or SKUs to compare (e.g. ['MIRA', 'CryoSculpt'])
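
As a concrete sketch, an MCP tools/call request for this tool could look like the following. The JSON-RPC envelope is the standard MCP shape; the product names are just the schema's own examples, not real catalog data.

```python
import json

# Hypothetical tools/call request for compare_products.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "compare_products",
        "arguments": {
            # Two or more titles or SKUs; a single-product lookup
            # belongs to get_product instead.
            "products": ["MIRA", "CryoSculpt"],
        },
    },
}
print(json.dumps(request, indent=2))
```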
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=false, and destructiveHint=false, covering safety and scope. The description adds valuable context about the return format ('structured comparison with shared attributes, differences, tradeoffs, and a decision hint'), which goes beyond annotations. No contradictions exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by usage guidelines and return format. Every sentence adds value without redundancy, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, rich annotations covering safety and scope, and the description's clear purpose, usage guidelines, and return format details, it provides complete context for effective tool selection and invocation without an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the 'products' parameter fully documented in the schema. The description does not add any additional parameter details beyond what the schema provides, so it meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'compare' with the resource 'products', specifies 'two or more products side by side', and explicitly distinguishes from the sibling tool 'get_product' for single product lookups. This provides specific differentiation from alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use ('when the user asks to compare, says X vs Y, or wants to decide between options') and when not to use ('Do not use for single product lookup — use get_product instead'), providing clear alternatives and exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_checkout (Grade A)

Create a checkout URL for one or more products. Pass variant IDs (items) and/or product URLs (product_urls). When a product URL is provided (e.g. https://laluer.com/products/mira), the tool resolves it to a variant ID automatically — no catalog import needed. Supports discount codes, cart notes, and selling plans. Do not use unless the user wants to buy — use search_products or skincare_recommend first. Returns a direct Shopify checkout link the user can click to buy.

Parameters (JSON Schema)

- note (optional): Cart note visible to the merchant
- items (optional): Products to add to cart by variant ID
- product_urls (optional): Products to add to cart by URL — resolved to variant IDs automatically
- discount_code (optional): Discount code to apply (e.g. 'WELCOME10')
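
A hedged sketch of an arguments payload mixing both input styles: one entry by variant ID and one by product URL, plus a discount code. Note the exact shape of an 'items' entry is not shown in the schema excerpt, so the variant_id/quantity fields below are an assumption.

```python
# Hypothetical create_checkout arguments. The items entry shape
# (variant_id/quantity) is assumed, not confirmed by the schema excerpt.
arguments = {
    "items": [{"variant_id": "45678901234", "quantity": 1}],
    # Product URLs are resolved to variant IDs by the tool itself.
    "product_urls": ["https://laluer.com/products/mira"],
    "discount_code": "WELCOME10",
    "note": "First-time customer",
}

# Per the description, call this only once the user has expressed purchase
# intent; browsing and advice belong to search_products / skincare_recommend.
assert arguments["items"] or arguments["product_urls"]
```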
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it specifies the tool returns a 'direct Shopify checkout link the user can click to buy' (output format), mentions support for discount codes, cart notes, and selling plans (capabilities), and emphasizes the purchase intent requirement. Annotations cover safety (readOnlyHint=false, destructiveHint=false, openWorldHint=true) but the description provides practical usage context.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is tightly constructed with zero waste: it states the purpose and key parameters up front, provides crucial usage guidance, and specifies the return value. Every sentence earns its place and information is front-loaded appropriately.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a purchase tool with no output schema, the description does well by specifying the return format ('direct Shopify checkout link'). It covers the purchase intent requirement and provides clear usage guidance. The main gap is lack of information about error cases or what happens with invalid inputs, but given the good annotations and schema coverage, it's mostly complete.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all four parameters thoroughly. The description mentions variant IDs and product URLs and lists 'discount codes, cart notes, and selling plans', which aligns with schema parameters but doesn't add significant semantic value beyond what's already in the structured schema descriptions.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Create a checkout URL'), the resource ('for one or more products'), and distinguishes from siblings by specifying this is for purchase intent rather than browsing/searching. It explicitly mentions variant IDs and product URLs as key inputs.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit guidance on when NOT to use this tool ('Do not use unless the user wants to buy') and names two specific alternatives to use first (search_products, skincare_recommend). This gives clear context for tool selection among siblings.

get_product (Grade A, Read-only)

Get full details for a specific product by SKU or title. Use when the user asks about a specific product by name (e.g. 'tell me about MIRA', 'show me the serum'). Do not use for browsing or recommendations — use search_products or skincare_recommend. Returns a widget card with the product details, image, price, and checkout button.

Parameters (JSON Schema)

- sku (optional): Exact product SKU (e.g. 'LL-4632379916336')
- title (optional): Product title to search for (fuzzy match)
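
Since either parameter identifies the product, a small helper can make the preference order explicit (the helper name is illustrative, not part of the server):

```python
def get_product_arguments(sku=None, title=None):
    """Build get_product arguments: prefer the exact SKU when known,
    fall back to the fuzzy-matched title."""
    if sku:
        return {"sku": sku}
    if title:
        return {"title": title}
    raise ValueError("get_product needs a sku or a title")

# 'tell me about MIRA' -> fuzzy title lookup; a known SKU -> exact lookup.
by_title = get_product_arguments(title="MIRA")
by_sku = get_product_arguments(sku="LL-4632379916336")
```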
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it specifies the return format ('widget card with product details, image, price, and checkout button'), which is not covered by the annotations (readOnlyHint, openWorldHint, destructiveHint). While annotations cover safety aspects, the description provides practical output information that helps the agent understand what to expect. No contradiction with annotations exists.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: purpose, usage guidelines, and return format. Each sentence adds essential information without redundancy. It's front-loaded with the core functionality and appropriately sized for the tool's complexity.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, rich annotations (readOnlyHint, openWorldHint, destructiveHint), and 100% schema coverage, the description is complete. It covers purpose, usage guidelines, and output behavior. While there's no output schema, the description adequately explains the return format, making it sufficient for an AI agent to use the tool effectively.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents both parameters (sku and title) thoroughly. The description adds minimal value by mentioning 'SKU or title' and implying fuzzy matching for title, but doesn't provide additional syntax or format details beyond what the schema specifies. This meets the baseline for high schema coverage.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get full details') and resource ('product'), and distinguishes it from siblings by specifying it's for a 'specific product by SKU or title' rather than browsing or recommendations. It explicitly names alternative tools (search_products, skincare_recommend) for different use cases.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('when the user asks about a specific product by name') and when not to use it ('Do not use for browsing or recommendations'), with named alternatives (search_products, skincare_recommend). It includes concrete examples ('tell me about MIRA', 'show me the serum') to illustrate appropriate contexts.

recommend (Grade A, Read-only)

Get a personalized product recommendation with domain-expert scoring, safety notes, and transaction authority. Use when the user wants advice, has a concern, or asks what to buy. Returns scored products with checkout URLs, safety assessment, and authority state (SHOULD/CAN/SHOULDNT/ESCALATE/CANT).

Parameters (JSON Schema)

- query (required): Natural language query about product needs
- brand (optional): Filter to a specific brand
- domain (optional): Product domain (e.g. 'skincare', 'beauty_devices'). Auto-detected from merchant if omitted.
- strategy (optional): Optional offer strategy override
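
The authority states named in the description (SHOULD/CAN/SHOULDNT/ESCALATE/CANT) suggest how assertive an agent may be about a purchase. A sketch of dispatching on them follows; the mapping is an assumption based on the state names alone, not documented server behavior:

```python
# Hypothetical handling of the authority state returned by recommend.
def next_step(authority_state):
    if authority_state in ("SHOULD", "CAN"):
        return "offer the checkout URL"
    if authority_state == "SHOULDNT":
        return "show the product but advise against buying"
    if authority_state == "ESCALATE":
        return "refer the user to a professional"
    return "do not offer a purchase"  # CANT
```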
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds value beyond annotations by detailing output components: scored products, checkout URLs, safety assessment, authority state (SHOULD/CAN/SHOULDNT/ESCALATE/CANT). Annotations already declare readOnlyHint, and description aligns, providing richer behavioral contract.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, each earning its place: first defines core function, second adds usage context and output summary. No redundancy, front-loaded with key purpose.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Describes output components (scored products, checkout URLs, safety, authority state) despite no output schema. Could mention volume or pagination, but current detail is sufficient for agent to understand expected return format.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All parameters are fully described in the input schema (100% coverage). The tool description does not add extra meaning to parameters beyond the schema descriptions, so baseline score of 3 applies.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides personalized product recommendations with domain-expert scoring, safety notes, and transaction authority. It specifies when to use it (when user wants advice, concern, or asks what to buy), distinguishing it from sibling tools like search_products that lack scoring and authority output.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use when the user wants advice, has a concern, or asks what to buy.' This provides clear usage context. However, it does not explicitly mention when not to use or name alternative tools, though the context is strong enough for inference.

search_products (Grade A, Read-only)

Browse and search the product catalog. Use when the user wants to see what's available, look up specific products, browse by category, compare options, or asks 'show me' / 'what do you have.' Do not use when the user needs personalized recommendations based on skin concerns — use skincare_recommend instead. Returns all matching products with prices, images, and checkout. Unlike skincare_recommend, this does not score or filter — it shows everything that matches so the user can decide.

Parameters (JSON Schema)

- query (required): Search query (e.g. 'vitamin c serum', 'anti-aging', 'moisturizer under $50')
- category (optional): Filter by exact product category from the catalog (e.g. 'serum', 'treatment', 'cleanser', 'moisturizer'). Do not guess categories — only use this if the user explicitly mentions a catalog category. For general queries like 'devices' or 'bundles', use the query parameter instead.
- max_price (optional): Filter to products at or below this price
- max_results (optional): Maximum products to return (default 10). Only set this if the user specifies a count — e.g. 'show me 2 devices' → 2. Otherwise leave it unset and the default will return all relevant matches up to 10.
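
The schema's guidance (send category only when the user explicitly named one, set max_results only when the user asked for a count) can be encoded in a small argument builder. The helper name is illustrative:

```python
def search_arguments(query, category=None, max_price=None, user_count=None):
    """Build search_products arguments, passing optional filters only
    when the user's request actually supplied them."""
    args = {"query": query}
    if category is not None:
        args["category"] = category
    if max_price is not None:
        args["max_price"] = max_price
    if user_count is not None:
        args["max_results"] = user_count
    return args

# 'show me 2 devices' -> the count is explicit, so max_results is set.
two_devices = search_arguments("devices", user_count=2)
# 'moisturizer under $50' -> no count given, so the default of 10 applies.
browse = search_arguments("moisturizer", max_price=50)
```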
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=false, and destructiveHint=false, covering basic safety and scope. The description adds valuable behavioral context beyond annotations by specifying that it 'returns all matching products with prices, images, and checkout' and 'does not score or filter,' which helps the agent understand the tool's behavior and limitations. No contradictions with annotations exist.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with three sentences that each serve a distinct purpose: stating the tool's purpose, providing usage guidelines, and clarifying behavioral differences from siblings. There's no wasted language, and key information is front-loaded in the first sentence.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, no output schema), the description provides good contextual completeness. It covers purpose, usage guidelines, and behavioral traits, though it doesn't detail return format specifics (only mentions 'prices, images, and checkout'). With annotations covering safety aspects and schema covering parameters, the description adds meaningful value where structured fields don't.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all four parameters thoroughly. The description doesn't add significant parameter semantics beyond what's in the schema, though it implies the tool handles general browsing and searching. The baseline score of 3 is appropriate when the schema does the heavy lifting for parameter documentation.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('browse and search the product catalog') and resources ('product catalog'). It explicitly distinguishes from sibling skincare_recommend by stating this tool 'does not score or filter — it shows everything that matches so the user can decide,' providing clear differentiation.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use ('when the user wants to see what's available, look up specific products, browse by category, compare options, or asks 'show me' / 'what do you have'') and when not to use ('Do not use when the user needs personalized recommendations based on skin concerns — use skincare_recommend instead'). It names the specific alternative tool, making usage boundaries clear.

skincare_cart (Grade A)

Create a buyable shopping cart with a real checkout URL. Two modes: (1) Pass 'products' array with specific product names. (2) Pass 'query' string to auto-recommend and cart. Do not use for browsing or recommendations — use search_products or skincare_recommend first. Returns a widget with the cart items and a working checkout link.

Parameters (JSON Schema)

- products (optional): Specific product titles to add to the cart
- query (optional): Natural language query to auto-recommend products. Only used if products array is not provided.
- strategy (optional): Optional offer strategy override when using query mode
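
Because the schema says 'query' is only used when 'products' is absent, the two modes can be kept exclusive on the caller's side. A sketch (helper name is illustrative):

```python
def cart_arguments(products=None, query=None, strategy=None):
    """Build skincare_cart arguments in one of its two modes:
    explicit product titles, or a query for auto-recommendation."""
    if products:
        # Explicit mode: query and strategy would be ignored anyway.
        return {"products": products}
    args = {"query": query}
    if strategy is not None:
        args["strategy"] = strategy
    return args

explicit = cart_arguments(products=["MIRA"])
auto = cart_arguments(query="gentle routine for dry skin", strategy="budget")
```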
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=false, openWorldHint=true, and destructiveHint=false, which the description aligns with by implying a creation action without contradiction. The description adds valuable context beyond annotations, such as the two operational modes, the return of a widget with checkout link, and the exclusion of browsing use, enhancing behavioral understanding without repeating annotation info.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by mode details and usage guidelines, all in three concise sentences. Every sentence adds value, such as distinguishing from siblings and specifying return format, with no wasted words, making it highly efficient and well-structured.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, no output schema) and rich annotations, the description is largely complete, covering purpose, usage, modes, and return format. However, it could slightly enhance completeness by mentioning potential errors or constraints like the max items in products, though annotations and schema help mitigate this gap.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all parameters (query, products, strategy) thoroughly. The description adds some semantic context by explaining the two modes and their interplay, but it doesn't provide significant additional meaning beyond what the schema offers, meeting the baseline for high coverage.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Create a buyable shopping cart') and resource ('shopping cart'), distinguishing it from siblings like search_products or skincare_recommend by emphasizing its checkout functionality. It explicitly mentions two operational modes, making the purpose distinct and comprehensive.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool (for creating a cart with checkout) versus alternatives, naming search_products and skincare_recommend for browsing/recommendations. It also outlines two specific modes and advises not to use it for browsing, offering clear context and exclusions.

skincare_recommend (Grade A, Read-only)

(Deprecated: use 'recommend' instead. Works identically.) Get a personalized skincare recommendation with ingredient-aware scoring, safety notes, and routine building. Use when the user wants advice, has a skin concern, or asks what to buy. Do not use for browsing or listing products — use search_products instead. Returns scored products with checkout URLs.

Parameters (JSON Schema)

- query (required): Natural language query about skin concerns (e.g. 'I have oily acne-prone skin and want something gentle under $30')
- brand (optional): Filter to a specific brand only (e.g. 'Youth to the People', 'CeraVe', 'The Ordinary'). Use when the user asks for products from a specific brand.
- strategy (optional): Optional offer strategy override: starter, gentle, budget, glow_safe, minimal, strong, fallback
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering safety and scope. The description adds valuable behavioral context by specifying the return format ('scored products with checkout URLs') and the recommendation nature ('personalized skincare recommendation'), which goes beyond the annotations. No contradiction exists.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by usage guidelines and return details. Every sentence adds value without redundancy, making it efficient and well-structured for quick comprehension.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (personalized recommendations with multiple parameters), annotations cover safety and scope, and the description adds usage context and return format. However, without an output schema, the description could benefit from more detail on the structure of 'scored products' (e.g., scoring criteria), leaving a minor gap.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the three parameters. The description does not add any parameter-specific details beyond what the schema provides, such as explaining interactions between parameters. This meets the baseline for high schema coverage.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get a personalized skincare recommendation') and resources ('ingredient-aware scoring, safety notes, and routine building'). It distinguishes from sibling tools by explicitly contrasting with 'search_products' for browsing/listing, making the differentiation clear.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use ('when the user wants advice, has a skin concern, or asks what to buy') and when not to use ('Do not use for browsing or listing products'). It names a specific alternative ('use search_products instead'), offering clear context for tool selection.

skincare_report_issue (Grade A)

Report when a tool result was unhelpful, incomplete, or wrong. Call this whenever you override a recommendation, skip a cart result, or notice the engine output doesn't match what the user needs. Do not use proactively — only when you observe an actual issue. This helps improve the engine.

Parameters (JSON Schema)

- tool_name (required): Which tool had the issue (skincare_recommend, skincare_cart, skincare_report_issue)
- issue_type (required): Type of issue
- description (required): What went wrong and what you expected instead
- user_query (optional): The original user query if available
- expected_products (optional): What products should have been recommended
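
Three of the five parameters are required, so a client-side check before calling the tool is cheap. A sketch; the validation helper is illustrative (the server enforces its own schema), and the issue_type value below is an assumption since the schema does not list allowed values:

```python
REQUIRED_FIELDS = {"tool_name", "issue_type", "description"}

def validate_report(report):
    """Check that a skincare_report_issue payload carries all three
    required fields before submitting it."""
    missing = REQUIRED_FIELDS - report.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    return report

report = validate_report({
    "tool_name": "skincare_recommend",
    "issue_type": "wrong_products",  # allowed values are an assumption
    "description": "Recommended a retinol product for a pregnancy-safe query",
})
```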
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond the annotations: it explains the tool's purpose (feedback reporting for engine improvement), specifies it should only be used reactively when issues are observed, and clarifies the types of issues to report. The annotations (readOnlyHint=false, destructiveHint=false) already indicate this is a non-destructive write operation, but the description provides important usage constraints that aren't captured in annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with three focused sentences that each serve a distinct purpose: stating the tool's function, providing usage guidelines, and explaining the benefit. There's no wasted language, and the most critical information (what the tool does) appears first.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (5 parameters, 3 required) and the comprehensive schema coverage (100%), the description provides exactly what's needed: clear purpose, specific usage guidelines, and behavioral context. The absence of an output schema is acceptable since this appears to be a feedback submission tool where the return value is less critical than the action itself.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all 5 parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema. However, it does provide context about what constitutes valid 'issue_type' values by listing example scenarios (unhelpful, incomplete, wrong results), which slightly enhances understanding of the parameters' purpose.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('report when a tool result was unhelpful, incomplete, or wrong') and distinguishes it from all sibling tools, which are focused on product operations rather than feedback reporting. It explicitly names the specific scenarios where it should be used (overriding recommendations, skipping cart results, engine output mismatches).

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use ('whenever you override a recommendation, skip a cart result, or notice the engine output doesn't match what the user needs') and when not to use ('Do not use proactively — only when you observe an actual issue'). It also implicitly distinguishes from siblings by focusing on issue reporting rather than product operations.
