Glama

Server Details

AI commerce for Shopify: product search, comparison, recommendations, and checkout via MCP.

Status: Healthy
Last Tested
Transport: Streamable HTTP
URL

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.3/5 across 11 of 11 tools scored.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes with clear boundaries, such as 'search_products' for browsing vs. 'skincare_recommend' for personalized advice. However, 'skincare_cart' overlaps slightly with 'create_checkout' as both handle checkout creation, though 'skincare_cart' adds recommendation features, which could cause minor confusion.

Naming Consistency: 3/5

The naming is mixed with some consistent patterns like verb_noun (e.g., 'check_inventory', 'compare_products') and descriptive names (e.g., 'skincare_recommend'), but there are deviations such as 'debug_widget_test' and 'skincare_report_issue' that don't follow a uniform style, reducing overall consistency.

Tool Count: 5/5

With 11 tools, the count is well-suited for an e-commerce and skincare recommendation server. It covers core functions like product search, inventory, recommendations, and checkout without being overwhelming, ensuring each tool has a clear role in the workflow.

Completeness: 4/5

The toolset provides comprehensive coverage for e-commerce operations, including product lookup, inventory, comparison, checkout, and personalized recommendations. A minor gap is the lack of tools for post-purchase actions like order tracking or returns, but the core shopping experience is well-supported.

Available Tools

11 tools
check_compatibility: A
Read-only

Check which products are compatible with a given product. For devices, shows required consumables (e.g., conductive gel for MIRA). For topicals, shows which devices they work with. Use when a customer asks 'what gel do I need with MIRA?' or 'does this serum work with CryoSculpt?'

Parameters (JSON Schema)
product (required): Product name or SKU to check compatibility for
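As a sketch of how an agent would invoke this tool over MCP, the call below wraps the single product argument in a JSON-RPC `tools/call` request. The request `id` and the example argument value are illustrative assumptions, not taken from this page.

```python
import json

# Hypothetical MCP tools/call request for check_compatibility.
# MCP uses JSON-RPC 2.0 framing; "MIRA" is a product name quoted
# in the tool description above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "check_compatibility",
        "arguments": {"product": "MIRA"},  # product name or SKU
    },
}

print(json.dumps(request, indent=2))
```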
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=false, and destructiveHint=false, so the agent knows this is a safe, closed-world read operation. The description adds useful context about the bidirectional nature of compatibility checks (devices→consumables and topicals→devices), which goes beyond annotations, but doesn't provide details about response format, error handling, or performance characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
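The annotation triple cited here (readOnlyHint, destructiveHint, openWorldHint) is machine-readable metadata an agent can check before calling. A minimal sketch of one way a client might use it, assuming the annotation values reported above; the auto-approval policy is illustrative, not part of MCP:

```python
# Annotations reported for check_compatibility on this page.
annotations = {
    "readOnlyHint": True,      # safe read operation, no state changes
    "destructiveHint": False,  # cannot delete or overwrite anything
    "openWorldHint": False,    # closed world: only the server's own catalog
}

def safe_to_auto_approve(hints: dict) -> bool:
    """One plausible client policy: auto-approve read-only, non-destructive tools."""
    return hints.get("readOnlyHint", False) and not hints.get("destructiveHint", True)

print(safe_to_auto_approve(annotations))  # True for this tool
```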

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with three sentences: first states the core purpose, second explains the bidirectional logic, third provides concrete usage examples. Every sentence adds value without redundancy, and the most critical information (what the tool does) is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter read-only tool with good annotations but no output schema, the description provides adequate context about purpose, usage, and compatibility logic. However, it doesn't describe the return format (e.g., list of compatible products with details), which would be helpful given the absence of an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with a single well-documented parameter ('product name or SKU to check compatibility for'). The description adds marginal value by implying the parameter accepts both product names and SKUs, but doesn't provide additional syntax, format, or validation details beyond what the schema already specifies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('check which products are compatible') and resources ('products'), distinguishing it from siblings like 'get_product' or 'search_products' by focusing on compatibility relationships rather than product details or search functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides when-to-use guidance with concrete examples ('when a customer asks...'), including specific scenarios for devices and topicals, and implicitly distinguishes it from alternatives by focusing on compatibility rather than inventory, comparison, or other sibling functions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_inventory: A
Read-only

Check if a product is currently available. Uses Shopify Storefront API to verify real-time stock status. Use when a customer asks 'is MIRA in stock?' or before recommending a product.

Parameters (JSON Schema)
product (required): Product name or SKU to check availability for
Behavior: 4/5

Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering safety and scope. The description adds valuable context by specifying it uses the Shopify Storefront API for real-time stock status, which is useful behavioral information beyond the annotations.

Conciseness: 5/5

The description is front-loaded with the core purpose, followed by implementation details and usage examples in just two sentences. Every sentence adds value without redundancy, making it highly efficient.

Completeness: 4/5

For a simple read-only tool with good annotations and full schema coverage, the description is mostly complete. It lacks output schema details (e.g., what the return value looks like), but given the tool's simplicity and annotations, this is a minor gap.

Parameters: 3/5

Schema description coverage is 100%, with the parameter 'product' documented as 'Product name or SKU to check availability for'. The description adds no additional parameter details beyond what the schema provides, so it meets the baseline for high schema coverage.

Purpose: 5/5

The description clearly states the tool's purpose with a specific verb ('check') and resource ('product availability'), and distinguishes it from siblings like 'get_product' or 'search_products' by focusing on real-time stock verification rather than general product information retrieval.

Usage Guidelines: 5/5

The description explicitly states when to use this tool ('when a customer asks 'is MIRA in stock?' or before recommending a product') and implies when not to use it (e.g., for general product info, use 'get_product' or 'search_products'), providing clear contextual guidance.

compare_products: A
Read-only

Compare two or more products side by side. Use when the user asks to compare, says 'X vs Y', or wants to decide between options. Do not use for single product lookup — use get_product instead. Returns structured comparison with shared attributes, differences, tradeoffs, and a decision hint.

Parameters (JSON Schema)
products (required): Product titles or SKUs to compare (e.g. ['MIRA', 'CryoSculpt'])
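Since the description says "two or more products" and routes single lookups to get_product, a client could guard the argument list before calling. The minimum-length check below is inferred from the prose, not a stated schema constraint:

```python
def build_compare_arguments(products: list[str]) -> dict:
    # The tool description routes single-product lookups to get_product,
    # so refuse lists that cannot actually be compared.
    if len(products) < 2:
        raise ValueError("compare_products needs two or more items; use get_product for one")
    return {"products": products}

args = build_compare_arguments(["MIRA", "CryoSculpt"])
```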
Behavior: 4/5

Annotations already declare readOnlyHint=true, openWorldHint=false, and destructiveHint=false, covering safety and scope. The description adds valuable context about the return format ('structured comparison with shared attributes, differences, tradeoffs, and a decision hint'), which goes beyond annotations. No contradictions exist.

Conciseness: 5/5

The description is front-loaded with the core purpose, followed by usage guidelines and return format. Every sentence adds value without redundancy, making it efficient and well-structured.

Completeness: 5/5

Given the tool's moderate complexity, rich annotations covering safety and scope, and the description's clear purpose, usage guidelines, and return format details, it provides complete context for effective tool selection and invocation without an output schema.

Parameters: 3/5

Schema description coverage is 100%, with the 'products' parameter fully documented in the schema. The description does not add any additional parameter details beyond what the schema provides, so it meets the baseline for high schema coverage.

Purpose: 5/5

The description clearly states the verb 'compare' with the resource 'products', specifies 'two or more products side by side', and explicitly distinguishes from the sibling tool 'get_product' for single product lookups. This provides specific differentiation from alternatives.

Usage Guidelines: 5/5

The description explicitly states when to use ('when the user asks to compare, says X vs Y, or wants to decide between options') and when not to use ('Do not use for single product lookup — use get_product instead'), providing clear alternatives and exclusions.

create_checkout: A

Create a checkout URL for one or more products. Pass variant IDs and quantities. Supports discount codes, cart notes, and selling plans. Do not use unless the user wants to buy — use search_products or skincare_recommend first. Returns a direct Shopify checkout link the user can click to buy.

Parameters (JSON Schema)
items (required): Products to add to cart
discount_code (optional): Discount code to apply (e.g. 'WELCOME10')
note (optional): Cart note visible to the merchant
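Combining the three fields above, the arguments for a checkout call might look like the sketch below. The inner shape of each item (`variant_id`, `quantity`) and the placeholder IDs are assumptions, since this page does not show the nested items schema:

```python
# Hypothetical create_checkout arguments. Optional fields (discount_code,
# note) should only be included when the user actually supplied them.
checkout_args = {
    "items": [
        {"variant_id": "1111111111", "quantity": 1},  # placeholder variant IDs,
        {"variant_id": "2222222222", "quantity": 2},  # not real Shopify values
    ],
    "discount_code": "WELCOME10",     # example code from the schema docs
    "note": "Gift wrap if possible",  # cart note visible to the merchant
}

# A minimal call carries only the required field.
minimal_args = {"items": checkout_args["items"]}
```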
Behavior: 4/5

The description adds valuable behavioral context beyond annotations: it specifies the tool returns a 'direct Shopify checkout link the user can click to buy' (output format), mentions support for discount codes, cart notes, and selling plans (capabilities), and emphasizes the purchase intent requirement. Annotations cover safety (readOnlyHint=false, destructiveHint=false, openWorldHint=true) but the description provides practical usage context.

Conciseness: 5/5

Three tightly constructed sentences with zero waste: first states purpose and key parameters, second provides crucial usage guidance, third specifies return value. Every sentence earns its place and information is front-loaded appropriately.

Completeness: 4/5

For a purchase tool with no output schema, the description does well by specifying the return format ('direct Shopify checkout link'). It covers the purchase intent requirement and provides clear usage guidance. The main gap is lack of information about error cases or what happens with invalid inputs, but given the good annotations and schema coverage, it's mostly complete.

Parameters: 3/5

With 100% schema description coverage, the schema already documents all 3 parameters thoroughly. The description mentions 'variant IDs and quantities' and lists 'discount codes, cart notes, and selling plans' which aligns with schema parameters but doesn't add significant semantic value beyond what's already in the structured schema descriptions.

Purpose: 5/5

The description clearly states the specific action ('Create a checkout URL'), the resource ('for one or more products'), and distinguishes from siblings by specifying this is for purchase intent rather than browsing/searching. It explicitly mentions variant IDs and quantities as key inputs.

Usage Guidelines: 5/5

Provides explicit guidance on when NOT to use this tool ('Do not use unless the user wants to buy') and names two specific alternatives to use first (search_products, skincare_recommend). This gives clear context for tool selection among siblings.

deals_discounts: A
Read-only

Show available bundles, deals, and ask about discount codes. Use when a customer asks about deals, bundles, savings, or says 'do you have any discounts?' Also use when multiple items are in cart to suggest bundle savings. Always ask if the customer has a discount code.

Parameters (JSON Schema)
discount_code (optional): Customer's discount code if they have one
Behavior: 4/5

The description adds valuable behavioral context beyond what annotations provide. While annotations indicate read-only, non-destructive, and closed-world operations, the description specifies that the tool will 'ask about discount codes' and includes the procedural instruction to 'Always ask if the customer has a discount code.' This reveals interactive or conversational behavior not captured in annotations. No contradiction with annotations exists.

Conciseness: 4/5

The description is appropriately concise and well-structured. It uses three clear sentences: one stating the core purpose, one detailing usage scenarios, and one providing procedural guidance. Each sentence adds distinct value without redundancy. While efficient, it could be slightly more front-loaded by leading with the most critical information.

Completeness: 4/5

Given the tool's moderate complexity (one optional parameter, no output schema), the description provides good contextual completeness. It covers purpose, usage guidelines, and behavioral aspects effectively. The main gap is the lack of information about return values or output format, but since there's no output schema, this is an expected limitation rather than a description flaw.

Parameters: 3/5

With 100% schema description coverage, the input schema already fully documents the single parameter 'discount_code.' The description doesn't add any additional semantic information about parameters beyond what's in the schema. According to scoring rules, when schema coverage is high (>80%), the baseline score is 3 even without parameter details in the description.

Purpose: 4/5

The description clearly states the tool's purpose: to show available bundles, deals, and ask about discount codes. It specifies the verbs ('show' and 'ask about') and resources ('bundles, deals, discount codes'), making it easy to understand what the tool does. However, it doesn't explicitly distinguish this tool from potential sibling tools like 'search_products' or 'compare_products' that might also relate to product offerings.

Usage Guidelines: 5/5

The description provides excellent usage guidance with explicit when-to-use scenarios: when customers ask about deals, bundles, savings, specific discount questions, or when multiple items are in cart. It also includes procedural guidance ('Always ask if the customer has a discount code'), though this is more behavioral than comparative. While it doesn't name specific alternatives among siblings, the context signals are clear and comprehensive.

debug_widget_test: A

Test tool to verify widget rendering. Returns a minimal widget with no data dependencies.

Parameters (JSON Schema)
No parameters

Behavior: 4/5

With no annotations provided, the description carries the full burden. It discloses key behavioral traits: it's a test tool (implying non-destructive, safe for debugging), returns a minimal widget, and has no data dependencies (suggesting it doesn't rely on external data sources). This adds useful context beyond the input schema, though it could detail more about the return format or error handling.

Conciseness: 5/5

The description is front-loaded and highly concise: two clear sentences with zero waste. Every phrase ('Test tool to verify widget rendering', 'Returns a minimal widget with no data dependencies') directly contributes to understanding the tool's purpose and behavior, making it efficient and well-structured.

Completeness: 4/5

Given the tool's low complexity (0 parameters, no output schema, no annotations), the description is fairly complete. It covers purpose, behavior, and output characteristics. However, it lacks details on potential errors or the exact format of the returned widget, which could be helpful for an agent invoking it in varied contexts.

Parameters: 4/5

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description adds value by explaining the tool's behavior ('Returns a minimal widget with no data dependencies'), which compensates for the lack of output schema. This goes beyond the schema, earning a score above the baseline of 3.

Purpose: 4/5

The description clearly states the tool's purpose: 'Test tool to verify widget rendering.' It specifies the verb ('verify') and resource ('widget rendering'), making it understandable. However, it doesn't differentiate from sibling tools (e.g., 'check_compatibility' or 'search_products'), which are unrelated to debugging or testing, so it doesn't fully distinguish itself in context.

Usage Guidelines: 3/5

The description implies usage for testing widget rendering, but provides no explicit guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, exclusions, or compare to sibling tools, leaving the agent to infer context from the tool name and description alone.

get_product: A
Read-only

Get full details for a specific product by SKU or title. Use when the user asks about a specific product by name (e.g. 'tell me about MIRA', 'show me the serum'). Do not use for browsing or recommendations — use search_products or skincare_recommend. Returns a widget card with the product details, image, price, and checkout button.

Parameters (JSON Schema)
sku (optional): Exact product SKU (e.g. 'LL-4632379916336')
title (optional): Product title to search for (fuzzy match)
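Both parameters are marked optional, yet a call with neither can identify no product, so the description's "by SKU or title" reads as an at-least-one-of rule. The guard below encodes that inference; it is not an explicit schema constraint, and the helper name is my own:

```python
def build_get_product_arguments(sku=None, title=None) -> dict:
    # sku and title are individually optional, but at least one is
    # needed to identify a product.
    if sku is None and title is None:
        raise ValueError("provide a sku or a title")
    args = {}
    if sku is not None:
        args["sku"] = sku      # exact match, e.g. 'LL-4632379916336'
    if title is not None:
        args["title"] = title  # fuzzy match on the product title
    return args

args = build_get_product_arguments(title="MIRA")
```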
Behavior: 4/5

The description adds valuable behavioral context beyond annotations: it specifies the return format ('widget card with product details, image, price, and checkout button'), which is not covered by the annotations (readOnlyHint, openWorldHint, destructiveHint). While annotations cover safety aspects, the description provides practical output information that helps the agent understand what to expect. No contradiction with annotations exists.

Conciseness: 5/5

The description is efficiently structured in three sentences: purpose, usage guidelines, and return format. Each sentence adds essential information without redundancy. It's front-loaded with the core functionality and appropriately sized for the tool's complexity.

Completeness: 5/5

Given the tool's moderate complexity, rich annotations (readOnlyHint, openWorldHint, destructiveHint), and 100% schema coverage, the description is complete. It covers purpose, usage guidelines, and output behavior. While there's no output schema, the description adequately explains the return format, making it sufficient for an AI agent to use the tool effectively.

Parameters: 3/5

With 100% schema description coverage, the schema already documents both parameters (sku and title) thoroughly. The description adds minimal value by mentioning 'SKU or title' and implying fuzzy matching for title, but doesn't provide additional syntax or format details beyond what the schema specifies. This meets the baseline for high schema coverage.

Purpose: 5/5

The description clearly states the tool's purpose with a specific verb ('Get full details') and resource ('product'), and distinguishes it from siblings by specifying it's for a 'specific product by SKU or title' rather than browsing or recommendations. It explicitly names alternative tools (search_products, skincare_recommend) for different use cases.

Usage Guidelines: 5/5

The description provides explicit guidance on when to use this tool ('when the user asks about a specific product by name') and when not to use it ('Do not use for browsing or recommendations'), with named alternatives (search_products, skincare_recommend). It includes concrete examples ('tell me about MIRA', 'show me the serum') to illustrate appropriate contexts.

search_products: A
Read-only

Browse and search the product catalog. Use when the user wants to see what's available, look up specific products, browse by category, compare options, or asks 'show me' / 'what do you have.' Do not use when the user needs personalized recommendations based on skin concerns — use skincare_recommend instead. Returns all matching products with prices, images, and checkout. Unlike skincare_recommend, this does not score or filter — it shows everything that matches so the user can decide.

Parameters (JSON Schema)
query (required): Search query (e.g. 'vitamin c serum', 'anti-aging', 'moisturizer under $50')
category (optional): Filter by exact product category from the catalog (e.g. 'serum', 'treatment', 'cleanser', 'moisturizer'). Do not guess categories — only use this if the user explicitly mentions a catalog category. For general queries like 'devices' or 'bundles', use the query parameter instead.
max_price (optional): Filter to products at or below this price
max_results (optional): Maximum products to return (default 10). Only set this if the user specifies a count — e.g. 'show me 2 devices' → 2. Otherwise leave it unset and the default will return all relevant matches up to 10.
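The category and max_results notes above encode agent-facing rules (don't guess a category; only cap results on an explicit count). A client-side sketch of honoring them, with the helper name being my own:

```python
def build_search_arguments(query, category=None, max_price=None, requested_count=None):
    # Per the schema notes: pass category only when the user explicitly
    # names a catalog category, and set max_results only when the user
    # asked for a specific count (the server defaults to 10 otherwise).
    args = {"query": query}
    if category is not None:
        args["category"] = category
    if max_price is not None:
        args["max_price"] = max_price
    if requested_count is not None:
        args["max_results"] = requested_count
    return args

# "show me 2 devices": 'devices' is not a catalog category, so it stays
# in the query, and the explicit count becomes max_results.
args = build_search_arguments("devices", requested_count=2)
```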
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=false, and destructiveHint=false, covering basic safety and scope. The description adds valuable behavioral context beyond annotations by specifying that it 'returns all matching products with prices, images, and checkout' and 'does not score or filter,' which helps the agent understand the tool's behavior and limitations. No contradictions with annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with three sentences that each serve a distinct purpose: stating the tool's purpose, providing usage guidelines, and clarifying behavioral differences from siblings. There's no wasted language, and key information is front-loaded in the first sentence.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, no output schema), the description provides good contextual completeness. It covers purpose, usage guidelines, and behavioral traits, though it doesn't detail return format specifics (only mentions 'prices, images, and checkout'). With annotations covering safety aspects and schema covering parameters, the description adds meaningful value where structured fields don't.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all four parameters thoroughly. The description doesn't add significant parameter semantics beyond what's in the schema, though it implies the tool handles general browsing and searching. The baseline score of 3 is appropriate when the schema does the heavy lifting for parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('browse and search the product catalog') and resources ('product catalog'). It explicitly distinguishes from sibling skincare_recommend by stating this tool 'does not score or filter — it shows everything that matches so the user can decide,' providing clear differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use ('when the user wants to see what's available, look up specific products, browse by category, compare options, or asks 'show me' / 'what do you have'') and when not to use ('Do not use when the user needs personalized recommendations based on skin concerns — use skincare_recommend instead'). It names the specific alternative tool, making usage boundaries clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

skincare_cart (A)

Create a buyable shopping cart with a real checkout URL. Two modes: (1) Pass 'products' array with specific product names. (2) Pass 'query' string to auto-recommend and cart. Do not use for browsing or recommendations — use search_products or skincare_recommend first. Returns a widget with the cart items and a working checkout link.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| query | No | Natural language query to auto-recommend products. Only used if products array is not provided. | |
| products | No | Specific product titles to add to the cart | |
| strategy | No | Optional offer strategy override when using query mode | |
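The two modes described above can be sketched as alternative argument payloads (product titles and query wording are hypothetical; the MCP request envelope is omitted):

```python
# Hypothetical payloads illustrating skincare_cart's two modes.

# Mode 1: explicit product titles (titles here are invented examples)
cart_by_products = {"products": ["Vitamin C Serum", "Gentle Foaming Cleanser"]}

# Mode 2: natural-language query with an optional strategy override.
# Per the schema, 'query' is ignored whenever 'products' is supplied,
# so the two modes should not be mixed in one call.
cart_by_query = {"query": "starter routine for dry skin", "strategy": "gentle"}

assert "query" not in cart_by_products  # keep the modes mutually exclusive
```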
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=false, openWorldHint=true, and destructiveHint=false, which the description aligns with by implying a creation action without contradiction. The description adds valuable context beyond annotations, such as the two operational modes, the return of a widget with checkout link, and the exclusion of browsing use, enhancing behavioral understanding without repeating annotation info.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by mode details and usage guidelines, all in three concise sentences. Every sentence adds value, such as distinguishing from siblings and specifying return format, with no wasted words, making it highly efficient and well-structured.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, no output schema) and rich annotations, the description is largely complete, covering purpose, usage, modes, and return format. However, it could slightly enhance completeness by mentioning potential errors or constraints like the max items in products, though annotations and schema help mitigate this gap.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all parameters (query, products, strategy) thoroughly. The description adds some semantic context by explaining the two modes and their interplay, but it doesn't provide significant additional meaning beyond what the schema offers, meeting the baseline for high coverage.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Create a buyable shopping cart') and resource ('shopping cart'), distinguishing it from siblings like search_products or skincare_recommend by emphasizing its checkout functionality. It explicitly mentions two operational modes, making the purpose distinct and comprehensive.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool (for creating a cart with checkout) versus alternatives, naming search_products and skincare_recommend for browsing/recommendations. It also outlines two specific modes and advises not to use it for browsing, offering clear context and exclusions.

skincare_recommend (A, read-only)

Get a personalized skincare recommendation with ingredient-aware scoring, safety notes, and routine building. Use when the user wants advice, has a skin concern, or asks what to buy. Do not use for browsing or listing products — use search_products instead. Returns scored products with checkout URLs.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| brand | No | Filter to a specific brand only (e.g. 'Youth to the People', 'CeraVe', 'The Ordinary'). Use when the user asks for products from a specific brand. | |
| query | Yes | Natural language query about skin concerns (e.g. 'I have oily acne-prone skin and want something gentle under $30') | |
| strategy | No | Optional offer strategy override: starter, gentle, budget, glow_safe, minimal, strong, fallback | |
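A sketch of a skincare_recommend call using the documented parameters (the query text and brand choice are illustrative; the MCP request envelope is omitted):

```python
# Hypothetical argument payload for the skincare_recommend tool.
rec_args = {
    # Required: natural-language description of the skin concern
    "query": "I have oily acne-prone skin and want something gentle under $30",
    # Optional: one of the strategies enumerated in the schema
    "strategy": "gentle",
    # Optional: brand filter, only when the user names a brand
    "brand": "The Ordinary",
}

VALID_STRATEGIES = {"starter", "gentle", "budget", "glow_safe", "minimal", "strong", "fallback"}
assert rec_args["strategy"] in VALID_STRATEGIES
```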
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering safety and scope. The description adds valuable behavioral context by specifying the return format ('scored products with checkout URLs') and the recommendation nature ('personalized skincare recommendation'), which goes beyond the annotations. No contradiction exists.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by usage guidelines and return details. Every sentence adds value without redundancy, making it efficient and well-structured for quick comprehension.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (personalized recommendations with multiple parameters), annotations cover safety and scope, and the description adds usage context and return format. However, without an output schema, the description could benefit from more detail on the structure of 'scored products' (e.g., scoring criteria), leaving a minor gap.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the three parameters. The description does not add any parameter-specific details beyond what the schema provides, such as explaining interactions between parameters. This meets the baseline for high schema coverage.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get a personalized skincare recommendation') and resources ('ingredient-aware scoring, safety notes, and routine building'). It distinguishes from sibling tools by explicitly contrasting with 'search_products' for browsing/listing, making the differentiation clear.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use ('when the user wants advice, has a skin concern, or asks what to buy') and when not to use ('Do not use for browsing or listing products'). It names a specific alternative ('use search_products instead'), offering clear context for tool selection.

skincare_report_issue (A)

Report when a tool result was unhelpful, incomplete, or wrong. Call this whenever you override a recommendation, skip a cart result, or notice the engine output doesn't match what the user needs. Do not use proactively — only when you observe an actual issue. This helps improve the engine.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| tool_name | Yes | Which tool had the issue (skincare_recommend, skincare_cart, skincare_report_issue) | |
| issue_type | Yes | Type of issue | |
| user_query | No | The original user query if available | |
| description | Yes | What went wrong and what you expected instead | |
| expected_products | No | What products should have been recommended | |
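A sketch of a report payload for this tool. Note the schema above does not enumerate issue_type values, so the value used here is purely hypothetical, as are the product name and user query:

```python
# Hypothetical argument payload for the skincare_report_issue tool.
report_args = {
    "tool_name": "skincare_recommend",        # which tool produced the bad result
    "issue_type": "wrong_products",           # hypothetical value; schema only says "Type of issue"
    "description": "Recommended a retinol serum to a user who asked for pregnancy-safe options.",
    "user_query": "pregnancy-safe night routine",       # optional original query
    "expected_products": ["Azelaic Acid Suspension"],   # optional, invented example
}

# The schema marks three of the five fields as required.
REQUIRED = {"tool_name", "issue_type", "description"}
assert REQUIRED.issubset(report_args)
```

Consistent with the tool's guidance, a payload like this would only be sent after observing an actual mismatch, never proactively.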
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond the annotations: it explains the tool's purpose (feedback reporting for engine improvement), specifies it should only be used reactively when issues are observed, and clarifies the types of issues to report. The annotations (readOnlyHint=false, destructiveHint=false) already indicate this is a non-destructive write operation, but the description provides important usage constraints that aren't captured in annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with three focused sentences that each serve a distinct purpose: stating the tool's function, providing usage guidelines, and explaining the benefit. There's no wasted language, and the most critical information (what the tool does) appears first.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (5 parameters, 3 required) and the comprehensive schema coverage (100%), the description provides exactly what's needed: clear purpose, specific usage guidelines, and behavioral context. The absence of an output schema is acceptable since this appears to be a feedback submission tool where the return value is less critical than the action itself.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all 5 parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema. However, it does provide context about what constitutes valid 'issue_type' values by listing example scenarios (unhelpful, incomplete, wrong results), which slightly enhances understanding of the parameters' purpose.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('report when a tool result was unhelpful, incomplete, or wrong') and distinguishes it from all sibling tools, which are focused on product operations rather than feedback reporting. It explicitly names the specific scenarios where it should be used (overriding recommendations, skipping cart results, engine output mismatches).

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use ('whenever you override a recommendation, skip a cart result, or notice the engine output doesn't match what the user needs') and when not to use ('Do not use proactively — only when you observe an actual issue'). It also implicitly distinguishes from siblings by focusing on issue reporting rather than product operations.

Resources