
La Luer — AI Skincare Commerce

Server Details

Search, compare, and purchase La Luer microcurrent facial devices and skincare products.

Status: Healthy
Transport: Streamable HTTP

Tool Descriptions (Grade A)

Average 4.3/5 across 11 of 11 tools scored. Lowest: 3.6/5.

Server Coherence (Grade A)
Disambiguation: 4/5

Most tools have distinct purposes with clear boundaries, such as 'skincare_recommend' for personalized advice versus 'search_products' for browsing. However, 'skincare_cart' and 'create_checkout' have some overlap in functionality, as both generate checkout URLs, which could cause minor confusion for an agent. The descriptions help clarify use cases, but the redundancy might lead to misselection in edge cases.

Naming Consistency: 4/5

The tool names generally follow a consistent snake_case pattern, such as 'check_inventory' and 'compare_products', which aids readability. There are minor deviations like 'debug_widget_test' and 'skincare_cart' that slightly break the verb_noun convention, but overall, the naming is predictable and coherent. No chaotic mixing of styles is present, making it easy to understand the tool set.

Tool Count: 5/5

With 11 tools, the count is well-scoped for an AI skincare commerce server, covering essential e-commerce functions like product search, recommendations, inventory, and checkout. Each tool serves a specific role, such as 'deals_discounts' for promotions and 'skincare_report_issue' for feedback, ensuring no unnecessary bloat. This number aligns perfectly with the domain's needs, providing comprehensive coverage without overwhelming complexity.

Completeness: 5/5

The tool set offers complete coverage for skincare commerce, including product discovery (search_products, get_product), personalized recommendations (skincare_recommend), compatibility checks (check_compatibility), inventory management (check_inventory), and transaction handling (create_checkout, skincare_cart). It also includes auxiliary functions like deals and issue reporting, ensuring no gaps in the customer journey from browsing to purchase and support.

Available Tools (11 tools)
check_compatibility (Grade A, Read-only)

Check which products are compatible with a given product. For devices, shows required consumables (e.g., conductive gel for MIRA). For topicals, shows which devices they work with. Use when a customer asks 'what gel do I need with MIRA?' or 'does this serum work with CryoSculpt?'

Parameters (JSON Schema)
product (required): Product name or SKU to check compatibility for
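Concretely, a call to this tool arrives as an MCP tools/call request. A minimal sketch of the payload, assuming standard JSON-RPC 2.0 framing (make_tool_call is a hypothetical helper, not part of this server):

```python
import json

def make_tool_call(name: str, arguments: dict, call_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 payload for the MCP tools/call method."""
    return {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# e.g. the 'what gel do I need with MIRA?' case from the description
payload = make_tool_call("check_compatibility", {"product": "MIRA"})
print(json.dumps(payload, indent=2))
```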
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering safety and scope. The description adds valuable context about bidirectional compatibility logic (devices→consumables, topicals→devices) and example use cases, though it doesn't mention rate limits or authentication needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with two sentences: the first defines the tool's purpose and bidirectional logic, the second provides usage examples. Every sentence adds value without redundancy, making it front-loaded and appropriately sized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only tool with no output schema, the description adequately covers purpose, usage, and behavioral context. It could be more complete by hinting at return format (e.g., list of compatible products) or error cases, but the annotations and examples provide sufficient guidance for typical use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with one parameter ('product') fully documented. The description adds no additional parameter details beyond what the schema provides, such as format examples or validation rules, meeting the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('check which products are compatible') and resources ('given product'), distinguishing it from siblings like 'compare_products' or 'get_product' by focusing on bidirectional compatibility relationships rather than direct comparisons or basic retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('Use when a customer asks...') with concrete examples ('what gel do I need with MIRA?' or 'does this serum work with CryoSculpt?'), providing clear context for application without needing to specify exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_inventory (Grade A, Read-only)

Check if a product is currently available. Uses Shopify Storefront API to verify real-time stock status. Use when a customer asks 'is MIRA in stock?' or before recommending a product.

Parameters (JSON Schema)
product (required): Product name or SKU to check availability for
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable context beyond annotations by specifying it uses the Shopify Storefront API for real-time stock status, which isn't covered by annotations (readOnlyHint=true, openWorldHint=false, destructiveHint=false). It doesn't contradict annotations, but could mention response format or limitations like API rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by implementation details and usage examples in two efficient sentences. Every sentence adds value without redundancy, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema) and rich annotations, the description is mostly complete. It covers purpose, API source, and usage context, but lacks details on return values (e.g., stock levels vs. boolean) or error handling, which would be helpful despite no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents the single parameter ('product name or SKU'). The description adds no additional parameter details beyond implying it's for availability checking, so it meets the baseline of 3 without compensating further.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('check') and resource ('product availability'), and distinguishes it from siblings by focusing on real-time stock verification rather than product details (get_product), searching (search_products), or recommendations (skincare_recommend).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides explicit guidance on when to use this tool ('when a customer asks 'is MIRA in stock?' or before recommending a product') and implicitly suggests alternatives by specifying its narrow scope (stock status only), contrasting with broader tools like get_product or search_products.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_products (Grade A, Read-only)

Compare two or more products side by side. Use when the user asks to compare, says 'X vs Y', or wants to decide between options. Do not use for single product lookup — use get_product instead. Returns structured comparison with shared attributes, differences, tradeoffs, and a decision hint.

Parameters (JSON Schema)
products (required): Product titles or SKUs to compare (e.g. ['MIRA', 'CryoSculpt'])
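Since 'products' is an array and the tool expects two or more items, an agent-side guard can enforce that constraint before the call ever goes out. A sketch, with a hypothetical helper:

```python
def build_compare_args(products: list[str]) -> dict:
    """Arguments for compare_products; the tool compares two or more items."""
    if len(products) < 2:
        raise ValueError("compare_products needs at least two products; "
                         "use get_product for a single lookup")
    return {"products": products}

args = build_compare_args(["MIRA", "CryoSculpt"])
```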
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, non-destructive, and closed-world behavior, which the description doesn't contradict. The description adds valuable context about the return format ('structured comparison with shared attributes, differences, tradeoffs, and a decision hint'), enhancing transparency beyond annotations. No rate limits or auth needs are mentioned, but the added output details are helpful.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by usage guidelines and return details in three concise sentences. Each sentence earns its place by providing essential information without redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (comparison logic), annotations cover safety, and no output schema exists, the description does well by explaining the return format. It could briefly mention limitations like max items, but overall it's complete enough for an agent to use effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the 'products' parameter. The description doesn't add any parameter-specific details beyond what's in the schema, but it implies the parameter's role in comparison. Baseline 3 is appropriate as the schema handles the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('compare two or more products side by side') and the resource ('products'), distinguishing it from sibling tools like 'get_product' for single product lookup. The verb 'compare' is precise and the scope is well-defined.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit guidance is provided on when to use ('when the user asks to compare, says X vs Y, or wants to decide between options') and when not to use ('Do not use for single product lookup — use get_product instead'), including a named alternative. This covers both inclusion and exclusion criteria effectively.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_checkout (Grade A)

Create a checkout URL for one or more products. Pass variant IDs and quantities. Supports discount codes, cart notes, and selling plans. Do not use unless the user wants to buy — use search_products or skincare_recommend first. Returns a direct Shopify checkout link the user can click to buy.

Parameters (JSON Schema)
items (required): Products to add to cart
discount_code (optional): Discount code to apply (e.g. 'WELCOME10')
note (optional): Cart note visible to the merchant
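The schema only labels 'items' as "Products to add to cart", while the description says to pass variant IDs and quantities. A sketch of argument assembly under that assumption; the variant_id/quantity field names and the helper itself are guesses, not confirmed by this page:

```python
def build_checkout_args(items, discount_code=None, note=None):
    """Assemble create_checkout arguments, omitting optional fields
    entirely rather than sending nulls.

    items: iterable of (variant_id, quantity) pairs (field names assumed).
    """
    args = {"items": [{"variant_id": v, "quantity": q} for v, q in items]}
    if discount_code is not None:
        args["discount_code"] = discount_code
    if note is not None:
        args["note"] = note
    return args

args = build_checkout_args([("LL-4632379916336", 2)], discount_code="WELCOME10")
```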
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it specifies the tool returns 'a direct Shopify checkout link the user can click to buy' (output format), mentions support for discount codes, cart notes, and selling plans (features), and emphasizes the transactional nature. Annotations cover read/write status but not these specifics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: first states purpose and key parameters, second provides crucial usage guidance, third describes return value. Each sentence earns its place, and information is front-loaded appropriately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no output schema, the description helpfully specifies the return format ('direct Shopify checkout link'). It covers purpose, usage constraints, and behavioral context well, though could potentially mention limitations like the 20-item max from the schema. Good overall completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters. The description mentions 'variant IDs and quantities' and lists supported features (discount codes, cart notes, selling plans), but adds minimal semantic value beyond what's already in the schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Create a checkout URL') and resource ('for one or more products'), distinguishing it from siblings like search_products or skincare_recommend. It specifies the verb+resource combination precisely.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when NOT to use this tool ('Do not use unless the user wants to buy') and names two specific alternatives ('use search_products or skincare_recommend first'). This provides clear guidance on appropriate context and alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

deals_discounts (Grade A, Read-only)

Show available bundles, deals, and ask about discount codes. Use when a customer asks about deals, bundles, savings, or says 'do you have any discounts?' Also use when multiple items are in cart to suggest bundle savings. Always ask if the customer has a discount code.

Parameters (JSON Schema)
discount_code (optional): Customer's discount code if they have one
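Because 'discount_code' is optional, an agent should send it only when the customer actually volunteered one. A minimal sketch of that rule, with a hypothetical helper:

```python
def build_deals_args(customer_code=None):
    """deals_discounts takes no required arguments; include the
    discount code only if the customer supplied one."""
    return {"discount_code": customer_code} if customer_code else {}
```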
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering safety and scope. The description adds behavioral context: it prompts the agent to 'Always ask if the customer has a discount code,' which is an action beyond the annotations. However, it doesn't detail response format, rate limits, or authentication needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, starting with the core purpose. Each sentence adds value: purpose, usage triggers, and a behavioral instruction. It avoids redundancy, though it could be slightly more structured with bullet points or clearer separation of guidelines.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema) and rich annotations, the description is fairly complete. It covers purpose, usage, and a key behavioral instruction. However, it lacks details on output format or error handling, which might be needed for full agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with one parameter ('discount_code') fully documented. The description mentions 'ask about discount codes' but doesn't add semantic details beyond the schema's description. Baseline 3 is appropriate as the schema handles parameter documentation adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to show available bundles, deals, and ask about discount codes. It uses specific verbs ('show', 'ask about') and identifies resources (bundles, deals, discount codes). However, it doesn't explicitly differentiate from sibling tools like 'search_products' or 'compare_products' that might also relate to product information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines: 'Use when a customer asks about deals, bundles, savings, or says 'do you have any discounts?' Also use when multiple items are in cart to suggest bundle savings.' It gives clear triggers and contexts, including specific phrases and cart states, though it doesn't name alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

debug_widget_test (Grade A)

Test tool to verify widget rendering. Returns a minimal widget with no data dependencies.

Parameters (JSON Schema)
No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that it 'Returns a minimal widget with no data dependencies,' which adds useful behavioral context about the output and its simplicity. However, it doesn't cover other traits like performance, error handling, or side effects, leaving gaps for a tool with no annotation support.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two concise sentences that are front-loaded with the purpose ('Test tool to verify widget rendering') followed by additional context. Every word earns its place without redundancy, making it highly efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no annotations, no output schema), the description is moderately complete. It explains the purpose and output behavior, but for a testing tool, it could benefit from more context on expected use cases or limitations. Without an output schema, the description partially covers return values, but not fully.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description doesn't add parameter details, which is appropriate. Baseline is 4 for 0 parameters, as it doesn't need to compensate for any schema gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Test tool to verify widget rendering.' It specifies the verb ('verify') and resource ('widget rendering'), making the intent unambiguous. However, it doesn't differentiate from sibling tools like 'check_compatibility' or 'compare_products', which might also involve testing or verification, so it doesn't reach a 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through 'to verify widget rendering,' suggesting it's for testing purposes. However, it lacks explicit guidance on when to use this tool versus alternatives (e.g., other testing or verification tools in the sibling list) or any exclusions. This makes the guidance implied but not comprehensive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_product (Grade A, Read-only)

Get full details for a specific product by SKU or title. Use when the user asks about a specific product by name (e.g. 'tell me about MIRA', 'show me the serum'). Do not use for browsing or recommendations — use search_products or skincare_recommend. Returns a widget card with the product details, image, price, and checkout button.

Parameters (JSON Schema)
sku (optional): Exact product SKU (e.g. 'LL-4632379916336')
title (optional): Product title to search for (fuzzy match)
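Both parameters are optional, but the lookup needs at least one of them. One plausible selection rule, preferring the exact SKU over the fuzzy title match (the helper and the preference order are assumptions, not stated by the server):

```python
def build_get_product_args(sku=None, title=None):
    """Prefer the exact SKU when known; otherwise fall back to the
    fuzzy title match. Raise if neither identifier is available."""
    if sku:
        return {"sku": sku}
    if title:
        return {"title": title}
    raise ValueError("get_product needs a sku or a title")
```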
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering safety and scope. The description adds valuable context beyond this: it specifies the return format ('widget card with product details, image, price, and checkout button'), which is not covered by annotations, enhancing behavioral understanding.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by usage guidelines and return details. Every sentence adds value: the first states what it does, the second when to use it, the third when not to use and alternatives, and the fourth describes the output. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity is low (simple lookup), annotations cover safety and scope, and the description adds usage guidelines and output format. With no output schema, the description compensates by explaining the return value. This is complete enough for the context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for sku ('Exact product SKU') and title ('Product title to search for (fuzzy match)'). The description adds minimal value beyond the schema, only mentioning 'by SKU or title' without new details, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get full details') and resource ('for a specific product'), specifying it's by SKU or title. It distinguishes from siblings like search_products (for browsing) and skincare_recommend (for recommendations), making it specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly states when to use ('when the user asks about a specific product by name') and when not to use ('Do not use for browsing or recommendations'), naming alternatives (search_products, skincare_recommend). This provides clear context and exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_products (Grade A, Read-only)

Browse and search the product catalog. Use when the user wants to see what's available, look up specific products, browse by category, compare options, or asks 'show me' / 'what do you have.' Do not use when the user needs personalized recommendations based on skin concerns — use skincare_recommend instead. Returns all matching products with prices, images, and checkout. Unlike skincare_recommend, this does not score or filter — it shows everything that matches so the user can decide.

Parameters (JSON Schema)
query (required): Search query (e.g. 'vitamin c serum', 'anti-aging', 'moisturizer under $50')
category (optional): Filter by exact product category from the catalog (e.g. 'serum', 'treatment', 'cleanser', 'moisturizer'). Do not guess categories — only use this if the user explicitly mentions a catalog category. For general queries like 'devices' or 'bundles', use the query parameter instead.
max_price (optional): Filter to products at or below this price
max_results (optional): Maximum products to return (default 10). Only set this if the user specifies a count — e.g. 'show me 2 devices' → 2. Otherwise leave it unset and the default will return all relevant matches up to 10.
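The schema's guidance (pass category only when the user names one, set max_results only when the user gives a count) can be encoded directly in argument assembly. A sketch, with a hypothetical helper:

```python
def build_search_args(query, category=None, max_price=None, user_count=None):
    """Honor the schema guidance: omit category unless the user explicitly
    named a catalog category, and omit max_results unless they gave a count
    (the server then defaults to up to 10 matches)."""
    args = {"query": query}
    if category is not None:
        args["category"] = category
    if max_price is not None:
        args["max_price"] = max_price
    if user_count is not None:
        args["max_results"] = user_count
    return args
```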
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond what annotations provide. While annotations indicate read-only, non-destructive operation, the description clarifies that this tool 'shows everything that matches so the user can decide' and 'does not score or filter', which helps the agent understand the tool's approach to results. It also mentions return content ('prices, images, and checkout'), though no output schema exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with zero wasted sentences. It front-loads the core purpose, provides clear usage guidelines, distinguishes from alternatives, and explains behavioral characteristics. Every sentence adds value and the information is presented in a logical flow from general to specific.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with good annotations and comprehensive schema coverage, the description provides excellent context about when to use it vs alternatives and clarifies the tool's behavioral approach. The main gap is the lack of output schema, but the description partially compensates by mentioning return content ('prices, images, and checkout'). It could be more specific about result format or pagination.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description doesn't add significant parameter semantics beyond what's already in the schema descriptions, though it does provide context about when to use certain parameters ('browse by category' hints at the category parameter). The schema already thoroughly documents each parameter's purpose and usage constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('browse and search the product catalog') and distinguishes it from sibling tools by explicitly contrasting with 'skincare_recommend'. It provides concrete examples of when to use it ('show me', 'what do you have', 'look up specific products'), making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('when the user wants to see what's available, look up specific products, browse by category, compare options') and when not to use it ('Do not use when the user needs personalized recommendations based on skin concerns — use skincare_recommend instead'). It names the specific alternative tool and explains the functional difference.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

skincare_cart (Grade: A)

Create a buyable shopping cart with a real checkout URL. Two modes: (1) Pass 'products' array with specific product names. (2) Pass 'query' string to auto-recommend and cart. Do not use for browsing or recommendations — use search_products or skincare_recommend first. Returns a widget with the cart items and a working checkout link.

Parameters (JSON Schema)

query (optional): Natural language query to auto-recommend products. Only used if the products array is not provided.
products (optional): Specific product titles to add to the cart.
strategy (optional): Offer strategy override when using query mode.
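The two-mode contract above can be made concrete with a small client-side sketch. Assuming the mode-selection rule stated in the description and schema (an explicit 'products' list takes precedence, 'query' is the alternative, and 'strategy' applies only in query mode), a hypothetical argument builder in Python might look like this; the function and its validation are illustrative, not part of the server:

```python
def build_cart_args(products=None, query=None, strategy=None):
    """Build arguments for a skincare_cart tool call.

    Mode 1: explicit product titles. Mode 2: natural-language query,
    optionally with an offer-strategy override (query mode only).
    """
    if products:
        # Explicit mode: per the schema note, 'query' is "only used if
        # products array is not provided", so query/strategy are dropped.
        return {"products": list(products)}
    if query:
        args = {"query": query}
        if strategy:
            args["strategy"] = strategy
        return args
    raise ValueError("skincare_cart needs either 'products' or 'query'")
```

An agent would then send the returned dict as the arguments of a standard MCP tools/call request with name "skincare_cart".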
Behavior: 4/5

Annotations indicate this is a non-destructive, open-world tool that allows writes (readOnlyHint: false). The description adds valuable behavioral context beyond annotations: it explains the two operational modes, mentions the return format ('widget with cart items and checkout link'), and clarifies the tool's scope (not for browsing/recommendations). While it doesn't detail rate limits or auth requirements, it provides meaningful operational context.

Conciseness: 5/5

The description is efficiently structured: first sentence states core purpose, second explains the two modes, third provides critical usage guidance, fourth describes the return value. Every sentence earns its place with no redundancy or wasted words. It's appropriately sized and front-loaded with essential information.

Completeness: 4/5

Given the tool's moderate complexity (3 parameters, two operational modes), the description provides good context. It explains what the tool does, when to use it, what it returns, and how it differs from siblings. While there's no output schema, the description adequately describes the return value. The main gap is lack of error case or edge case information, but overall it's quite complete.

Parameters: 3/5

Schema description coverage is 100%, so the schema already fully documents all three parameters. The description mentions the two modes (products array vs query) which aligns with schema documentation, but doesn't add significant semantic value beyond what's in the schema. The baseline of 3 is appropriate when schema coverage is complete.

Purpose: 5/5

The description clearly states the tool's purpose: 'Create a buyable shopping cart with a real checkout URL.' It specifies two modes (products array or query string) and distinguishes from siblings by explicitly naming alternatives (search_products, skincare_recommend). This is specific, actionable, and differentiates from related tools.

Usage Guidelines: 5/5

The description provides explicit guidance: 'Do not use for browsing or recommendations — use search_products or skincare_recommend first.' It clearly states when NOT to use this tool and names specific alternative tools, giving the agent clear decision criteria for tool selection.

skincare_recommend (Grade: A, Read-only)

Get a personalized La Luer product recommendation with ingredient-aware scoring, safety notes, and routine building. Use when the user wants advice on what to buy, needs help choosing between products, has a specific skin concern (acne, aging, dryness, sensitivity, etc.), wants a routine, or asks "what should I use for X." Do not use for browsing or listing products — use search_products instead. Returns scored products with explanations, usage instructions, and Shopify checkout. This tool analyzes ingredients, irritation risk, and product compatibility — use it over search_products when the user needs guidance, not just a product list.

Parameters (JSON Schema)

brand (optional): Filter to a specific brand only (e.g. 'Youth to the People', 'CeraVe', 'The Ordinary'). Use when the user asks for products from a specific brand.
query (required): Natural language query about skin concerns (e.g. 'I have oily acne-prone skin and want something gentle under $30').
strategy (optional): Offer strategy override: starter, gentle, budget, glow_safe, minimal, strong, fallback.
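The strategy list is the only enumerated constraint in this schema. A minimal sketch, in Python, of how an agent might assemble and pre-validate arguments for skincare_recommend; the field names and allowed strategy values come from the schema above, while the validation logic itself is an assumption for illustration:

```python
# Allowed values taken from the schema's 'strategy' description.
ALLOWED_STRATEGIES = {
    "starter", "gentle", "budget", "glow_safe",
    "minimal", "strong", "fallback",
}

def build_recommend_args(query, brand=None, strategy=None):
    """Build arguments for a skincare_recommend tool call.

    'query' is the only required field; 'brand' and 'strategy'
    are optional filters/overrides.
    """
    if not query or not query.strip():
        raise ValueError("'query' is required for skincare_recommend")
    args = {"query": query}
    if brand:
        args["brand"] = brand
    if strategy:
        if strategy not in ALLOWED_STRATEGIES:
            raise ValueError(f"unknown strategy: {strategy!r}")
        args["strategy"] = strategy
    return args
```

Pre-validating the enum client-side simply fails fast; the server presumably enforces the same constraint.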
Behavior: 4/5

Annotations already declare readOnlyHint=true, openWorldHint=false, and destructiveHint=false, covering safety and scope. The description adds valuable behavioral context beyond annotations: 'This tool analyzes ingredients, irritation risk, and product compatibility' and 'Returns scored products with explanations, usage instructions, and Shopify checkout.' This provides insight into the tool's analytical nature and output format, though it doesn't detail rate limits or auth needs.

Conciseness: 5/5

The description is well-structured and concise, with every sentence adding value. It front-loads the purpose, provides usage guidelines, distinguishes from alternatives, and explains behavioral traits without redundancy. No sentence is wasted, making it efficient for an AI agent to parse.

Completeness: 4/5

Given the tool's complexity (personalized recommendations with analysis) and the absence of an output schema, the description does a good job explaining what the tool returns ('scored products with explanations, usage instructions, and Shopify checkout') and its analytical nature. However, it could be more complete by detailing the output structure or error handling, though annotations cover safety aspects adequately.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents all three parameters (brand, query, strategy) with clear descriptions. The description does not add any parameter-specific semantics beyond what the schema provides, such as explaining how the 'query' parameter interacts with the tool's analysis. Baseline 3 is appropriate when schema does the heavy lifting.

Purpose: 5/5

The description explicitly states the tool's purpose: 'Get a personalized La Luer product recommendation with ingredient-aware scoring, safety notes, and routine building.' It clearly distinguishes from sibling tools by specifying 'Do not use for browsing or listing products — use search_products instead,' making the distinction explicit.

Usage Guidelines: 5/5

The description provides comprehensive usage guidelines: 'Use when the user wants advice on what to buy, needs help choosing between products, has a specific skin concern (acne, aging, dryness, sensitivity, etc.), wants a routine, or asks "what should I use for X."' It also explicitly states when not to use it ('Do not use for browsing or listing products') and names the alternative ('use search_products instead'), with additional guidance on when to prefer this tool over search_products.

skincare_report_issue (Grade: A)

Report when a tool result was unhelpful, incomplete, or wrong. Call this whenever you override a recommendation, skip a cart result, or notice the engine output doesn't match what the user needs. Do not use proactively — only when you observe an actual issue. This helps improve the engine.

Parameters (JSON Schema)

tool_name (required): Which tool had the issue (skincare_recommend, skincare_cart, skincare_report_issue).
issue_type (required): Type of issue.
user_query (optional): The original user query, if available.
description (required): What went wrong and what you expected instead.
expected_products (optional): What products should have been recommended.
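Three of the five fields are required. A hypothetical feedback-payload builder in Python, sketched from the schema above; the field names are from the schema, but the set of valid issue_type values is not documented here, so the example passes it through unchecked and the sample value used in practice would be made up:

```python
def build_issue_report(tool_name, issue_type, description,
                       user_query=None, expected_products=None):
    """Build arguments for a skincare_report_issue tool call.

    tool_name, issue_type, and description are required;
    user_query and expected_products are optional context.
    """
    for name, value in (("tool_name", tool_name),
                        ("issue_type", issue_type),
                        ("description", description)):
        if not value:
            raise ValueError(f"'{name}' is required")
    report = {
        "tool_name": tool_name,
        "issue_type": issue_type,
        "description": description,
    }
    if user_query:
        report["user_query"] = user_query
    if expected_products:
        report["expected_products"] = list(expected_products)
    return report
```

Per the description, an agent would only build such a report after observing an actual problem, never proactively.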
Behavior: 4/5

The description adds valuable behavioral context beyond what annotations provide. While annotations indicate this is a non-destructive, non-read-only operation, the description explains that this tool helps improve the engine by reporting issues, which provides purpose and impact context that annotations don't cover. No contradictions with annotations exist.

Conciseness: 5/5

The description is perfectly concise and front-loaded, with every sentence earning its place. The first sentence establishes the core purpose, the second provides specific usage scenarios, the third gives exclusion criteria, and the fourth explains the benefit: four efficient sentences with zero wasted words.

Completeness: 5/5

Given the tool's feedback purpose, moderate complexity, and comprehensive schema coverage, the description provides complete contextual information. It explains when to use the tool, what it does, and why it matters, which is sufficient since no output schema exists and annotations cover basic behavioral traits.

Parameters: 3/5

With 100% schema description coverage, the input schema already documents all parameters thoroughly. The description doesn't add specific parameter semantics beyond what's in the schema, but it implies the context for parameters like 'tool_name' and 'description' through usage examples. This meets the baseline for high schema coverage.

Purpose: 5/5

The description clearly states the tool's purpose with specific verbs ('report when a tool result was unhelpful, incomplete, or wrong') and distinguishes it from all sibling tools, which are focused on skincare product operations rather than feedback reporting. It explicitly identifies the resource (tool results) and the action (reporting issues).

Usage Guidelines: 5/5

The description provides explicit guidance on when to use this tool ('whenever you override a recommendation, skip a cart result, or notice the engine output doesn't match what the user needs') and when not to use it ('Do not use proactively — only when you observe an actual issue'). It clearly differentiates this feedback mechanism from the operational sibling tools.
