SAVORDISH
Server Details
AI-powered recipe platform: 18 MCP tools for meal planning, grocery lists & Instacart.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
31 tools

account_api_key_status · Grade A · Read-only · Idempotent
Check the status of the API key you're using right now — see call count, rate limit, and creation date. Useful for monitoring your MCP usage. TRIGGERS:
'check my API key', 'API key status', 'how many calls have I made'
'my usage', 'rate limit status', 'key info'
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnlyHint, destructiveHint), while description adds valuable behavioral specifics about returned data fields (call count, rate limit, creation date) without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with action and scope, efficiently lists example triggers in a structured format. No redundant or wasted sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter introspection tool, description adequately covers intent and expected return fields. Lacks explicit output schema details, though annotations and description together provide sufficient context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters, meeting the baseline expectation. Description appropriately focuses on the tool's output rather than non-existent inputs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Check' with clear resource 'API key status' and distinguishes effectively from culinary siblings by referencing 'MCP usage' and 'API key' introspection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit trigger phrases (e.g., 'check my API key', 'rate limit status') and context ('Useful for monitoring'), though it doesn't explicitly name alternatives from the sibling list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cooking_convert_units · Grade A · Read-only · Idempotent
Convert between cooking measurement units — cups, tablespoons, grams, ounces, milliliters, and more. TRIGGERS:
'how many grams in a cup', 'convert tablespoons to cups'
'ounces to grams', 'ml to cups', 'teaspoons in a tablespoon'
'metric to imperial', 'cooking conversions'
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | Amount to convert (e.g. 2.5) | |
| toUnit | Yes | Unit to convert to | |
| fromUnit | Yes | Unit to convert from (e.g. 'cups', 'tbsp', 'oz', 'g', 'ml', 'tsp', 'lb', 'kg') | |
| ingredient | No | Optional ingredient for weight/volume conversions (e.g. 'flour', 'sugar', 'butter') — different ingredients have different densities | |
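The `ingredient` parameter matters because converting volume to weight depends on density. A toy sketch of that idea, using illustrative approximate densities (these figures are assumptions for the example, not the server's data):

```python
# Approximate densities in grams per US cup; real values vary by ingredient and packing.
DENSITY_G_PER_CUP = {"flour": 120.0, "sugar": 200.0, "butter": 227.0}

def cups_to_grams(amount: float, ingredient: str) -> float:
    """Convert a volume in cups to grams using the ingredient's density."""
    try:
        return amount * DENSITY_G_PER_CUP[ingredient]
    except KeyError:
        raise ValueError(f"no density data for {ingredient!r}")

grams = cups_to_grams(2.5, "flour")  # 2.5 cups of flour
```

This is why the same `amount`/`fromUnit`/`toUnit` triple can yield different results for different ingredients, and why the schema calls the parameter out.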
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and idempotentHint=true, establishing this is a safe calculation. The description adds valuable context about the optional 'ingredient' parameter affecting density for weight/volume conversions, which is behavioral nuance not captured in the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly structured with a clear one-sentence purpose statement followed by bulleted trigger examples. No filler text; every line serves either definitional or pattern-matching purposes. Front-loaded with the core function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 4-parameter conversion tool with 100% schema coverage and comprehensive annotations, the description is complete. The trigger examples cover common conversion scenarios, and the density note for the ingredient parameter addresses the key complexity of cooking conversions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds significant value through the TRIGGERS section, which maps natural language queries to parameter usage patterns (e.g., 'grams in a cup' → fromUnit='cups', toUnit='grams'), helping the agent understand how to populate parameters from user intent.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states 'Convert between cooking measurement units' with specific examples (cups, tablespoons, grams, ounces, milliliters). It clearly distinguishes from sibling tools like cooking_get_substitutions or recipes_scale by focusing purely on unit conversion rather than ingredient swaps or recipe scaling.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The TRIGGERS section provides explicit patterns ('how many grams in a cup', 'convert tablespoons to cups') that signal when to invoke this tool. While it doesn't explicitly name alternatives, the trigger patterns effectively define the usage boundaries by showing specific query types this handles versus general cooking questions handled by siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cooking_get_substitutions · Grade A · Read-only · Idempotent
Get ingredient substitutions and alternatives for dietary needs, allergies, or when you're missing an ingredient. TRIGGERS:
'substitute for X', 'what can I use instead of X'
'replace X in recipe', 'alternative for X'
'dairy-free substitute for X', 'vegan alternative for X'
'I don't have X, what can I use'
| Name | Required | Description | Default |
|---|---|---|---|
| ingredient | Yes | The ingredient you need a substitute for (e.g. 'fish sauce', 'butter', 'eggs', 'soy sauce') | |
| dietaryNeed | No | Specific dietary need for the substitution (e.g. 'vegan', 'dairy-free', 'gluten-free', 'nut-free', 'low-sodium') | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations fully cover safety profile (readOnlyHint=true, destructiveHint=false, idempotentHint=true), so the description doesn't need to. The description adds domain context about handling allergies and dietary restrictions, but doesn't describe output format, pagination, or rate limits beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
First sentence front-loads the core purpose. The TRIGGERS section uses bullet points for scannable invocation patterns. Every element serves a function: the opening defines capability, the triggers define recognition patterns. No wasted words or redundant explanations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 2-parameter lookup tool with read-only behavior and no output schema, the description is complete. It explains the functional scope (substitutions), the parameter intent (dietary vs. missing ingredient), and invocation triggers. No gaps remain given the tool's limited complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with both parameters well-documented (ingredient examples like 'fish sauce', dietaryNeed examples like 'vegan'). The description maps to these by mentioning 'dietary needs, allergies' and 'missing an ingredient', but doesn't add syntax, validation rules, or format details beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Get[s] ingredient substitutions and alternatives' with specific contexts (dietary needs, allergies, missing ingredients). It distinguishes from siblings like `nutrition_healthier_swap` (health optimization) and `cooking_convert_units` (measurement conversion) by focusing on functional replacements for unavailable or restricted ingredients.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The TRIGGERS section provides explicit patterns for when to invoke the tool (e.g., 'substitute for X', 'vegan alternative for X', 'I don't have X'). This offers clear 'when' guidance, but lacks explicit 'when-not' warnings or named sibling alternatives (e.g., when to prefer `nutrition_healthier_swap` over this tool).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cooking_get_tips · Grade A · Read-only · Idempotent
Get practical cooking tips and expert techniques for any topic. Covers ingredients, methods, equipment, and cuisines. TRIGGERS:
'how do I X', 'tips for X', 'cooking advice for X'
'best way to cook X', 'technique for X'
'help me with X', 'kitchen tips', 'cooking hack for X' TOPICS: knife skills, rice, broth, pho, grilling, baking, fermentation, wok cooking, seasoning, mise en place, sous vide, smoking, braising, and more.
| Name | Required | Description | Default |
|---|---|---|---|
| topic | Yes | Cooking topic — ingredient, technique, dish, or equipment (e.g. 'knife skills', 'making pho broth', 'fermentation', 'grilling', 'wok cooking', 'baking bread') | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations confirm readOnly/idempotent/destructive=false profile. Description adds topical scope (what domains are covered) and trigger conditions, but does not disclose output format, length constraints, or whether tips are structured vs. free text. Since annotations carry safety context, this adds moderate behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded purpose statement followed by structured 'TRIGGERS' and 'TOPICS' sections. All content earns its place; no redundant fluff. Minor deduction for 'and more' vagueness in topics list, but overall efficient use of space.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple single-parameter read tool. Covers invocation triggers, valid topic examples, and general scope. Absence of output schema description is acceptable given the open-ended nature of 'tips' content and lack of output schema definition.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed parameter description including examples. Description reinforces this via the 'TOPICS' section listing specific examples (knife skills, pho, fermentation, etc.), but does not add semantic meaning beyond what the schema already provides. Baseline 3 appropriate for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with specific verb-resource pair 'Get practical cooking tips and expert techniques' and explicitly scopes coverage to 'ingredients, methods, equipment, and cuisines'. Clearly distinguishes from siblings like recipes_get (full recipes), cooking_get_substitutions (ingredient swaps), and cooking_convert_units (measurements).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Includes explicit 'TRIGGERS' section listing query patterns like 'how do I X', 'tips for X', and 'technique for X' that signal when to invoke this tool. While it doesn't explicitly name sibling tools to avoid, the trigger patterns effectively distinguish this from recipe search or nutrition analysis use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cooking_meal_prep_guide · Grade A · Read-only · Idempotent
Get a meal prep guide — how to batch cook, store, and reheat recipes for the week. Saves time and reduces food waste. TRIGGERS:
'how to meal prep X', 'batch cooking tips', 'meal prep guide'
'how to store leftovers', 'can I freeze X', 'how long does X last'
'Sunday meal prep', 'weekly prep plan'
| Name | Required | Description | Default |
|---|---|---|---|
| recipeSlug | Yes | Recipe slug to get meal prep tips for (e.g. 'chicken-pho') | |
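The example slug 'chicken-pho' suggests the usual lowercase-hyphen convention, though the server's exact slug rules are not documented here. A hedged sketch of deriving a candidate slug from a recipe title under that assumption:

```python
import re

def slugify(title: str) -> str:
    """Lowercase, collapse non-alphanumeric runs to hyphens, trim edge hyphens.

    Assumed convention based on the schema's example slug; the server may differ.
    """
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
```

An agent that only knows a recipe title from conversation would more reliably obtain the canonical slug from a prior recipe search result than by guessing it this way.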
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish the operation is read-only, idempotent, and non-destructive. The description adds valuable behavioral context about the guide's content scope (batch cooking, storage, reheating) and practical benefits (saves time, reduces waste), though it omits details about the guide's format, length, or storage timeline limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately front-loaded with the core function first, followed by value proposition, then structured trigger examples. Every section serves a purpose—the triggers aid intent matching, and the benefits justify when to suggest the tool. No significant waste, though the benefits sentence is secondary to operational needs.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single parameter, read-only operation) and comprehensive safety annotations, the description provides sufficient context. The absence of an output schema means the description doesn't need to explain return values, though mentioning the guide format (text, timeline, etc.) would improve completeness slightly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single `recipeSlug` parameter, the baseline is 3. The description implies the recipe context ('recipes for the week') but does not add parameter-specific semantics, examples, or validation rules beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a meal prep guide covering batch cooking, storage, and reheating. It implicitly distinguishes from siblings like `recipes_get` (which retrieves the recipe itself) and `cooking_get_tips` (general tips) by specifying the meal prep focus, though it doesn't explicitly name alternative tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The TRIGGERS section provides clear positive usage context with specific query patterns ('how to meal prep X', 'can I freeze X', 'Sunday meal prep') that indicate when to invoke the tool. While it lacks explicit 'when-not' exclusions, the trigger patterns offer concrete guidance for intent recognition.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cooking_pair_beverages · Grade A · Read-only · Idempotent
Get wine, beer, cocktail, or non-alcoholic drink pairings for a recipe or cuisine. TRIGGERS:
'what wine goes with X', 'beer pairing for X', 'drink pairing'
'cocktail for dinner', 'non-alcoholic pairing', 'beverage suggestion'
'wine for pasta', 'what to drink with steak'
| Name | Required | Description | Default |
|---|---|---|---|
| dish | Yes | Dish or cuisine to pair beverages with (e.g. 'grilled steak', 'Thai curry', 'seafood pasta') | |
| beverageType | No | Type of beverage pairing | all |
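The `beverageType` default of `all` means omitting the parameter returns every pairing category rather than failing. A sketch of that dispatch, with illustrative pairing data (the real catalog lives server-side):

```python
# Illustrative pairing data only; not the tool's actual dataset.
PAIRINGS = {
    "grilled steak": {
        "wine": ["malbec", "cabernet sauvignon"],
        "beer": ["porter"],
        "non-alcoholic": ["sparkling water with lime"],
    },
}

def pair_beverages(dish: str, beverage_type: str = "all") -> dict[str, list[str]]:
    """Return pairings for a dish; 'all' (the documented default) keeps every category."""
    options = PAIRINGS.get(dish, {})
    if beverage_type == "all":
        return options
    return {beverage_type: options.get(beverage_type, [])}
```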
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, establishing this as a safe lookup operation. The description adds trigger examples but does not disclose additional behavioral traits like return format, typical result count, or whether results include tasting notes/rationale beyond what the annotations imply.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately front-loaded with the core purpose statement followed by structured trigger examples. While the raw markdown formatting of the TRIGGERS section is slightly informal, every line serves a distinct purpose—either defining capability or guiding invocation patterns.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple 2-parameter schema with full annotation coverage (read-only, idempotent) and no nested objects, the description provides sufficient context for tool selection. However, the absence of an output schema and lack of description regarding return structure (e.g., whether it returns specific bottle recommendations or general categories) prevents a perfect score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the dish parameter including concrete examples ('grilled steak', 'Thai curry') and beverageType listing the enum values. The description reinforces these concepts but does not add significant semantic depth or usage constraints beyond what the schema already documents, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Get') and clearly identifies the resource (beverage pairings) and scope (wine, beer, cocktail, non-alcoholic for recipes/cuisines). It effectively distinguishes from cooking siblings like cooking_get_tips or nutrition_analyze by focusing specifically on drink pairings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The TRIGGERS section provides explicit natural language patterns for when to invoke the tool (e.g., 'what wine goes with X', 'beer pairing for X'). While comprehensive for activation patterns, it lacks explicit guidance on when NOT to use this tool or named sibling alternatives to consider.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cuisines_explore · Grade A · Read-only · Idempotent
List all available cuisines on SAVOR Dish with recipe counts. Discover world cuisines and how many recipes are available for each. TRIGGERS:
'what cuisines do you have', 'list all cuisines'
'available food categories', 'cuisine options'
'how many recipes do you have', 'recipe categories'
| Name | Required | Description | Default |
|---|---|---|---|
| minRecipes | No | Only show cuisines with at least this many recipes | 1 |
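The `minRecipes` threshold is a simple count filter over the cuisine catalog. A sketch of that behavior with made-up counts (the catalog below is illustrative, not SAVOR Dish data):

```python
def explore_cuisines(cuisines: dict[str, int], min_recipes: int = 1) -> list[tuple[str, int]]:
    """Return (cuisine, recipe_count) pairs with at least min_recipes recipes,
    highest count first."""
    return sorted(
        ((name, count) for name, count in cuisines.items() if count >= min_recipes),
        key=lambda pair: pair[1],
        reverse=True,
    )

# Illustrative catalog only.
catalog = {"vietnamese": 12, "italian": 8, "thai": 3, "peruvian": 1}
```

Note the description does not say what happens when nothing clears the threshold; by this reading the result is simply an empty list.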
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, establishing the safety profile. The description adds domain context ('SAVOR Dish') and clarifies the tool returns recipe counts, but does not disclose pagination behavior, response format, or what happens when no cuisines meet the minRecipes threshold.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core function in the first sentence. The TRIGGERS section is well-structured for quick scanning. The second sentence ('Discover world cuisines...') is slightly redundant with the first, preventing a perfect score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one optional parameter), rich annotations covering behavioral traits, and 100% schema coverage, the description provides adequate context. The inclusion of trigger phrases compensates somewhat for the missing output schema, though explicit description of the return structure (array of cuisine objects with counts) would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents the minRecipes parameter ('Only show cuisines with at least this many recipes'). The description mentions 'recipe counts' which loosely supports understanding the parameter's purpose, but does not add syntax details, validation rules, or usage examples beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'List[s] all available cuisines on SAVOR Dish with recipe counts,' providing a specific verb (List) and resource (cuisines). However, it does not explicitly distinguish from sibling tool 'recipes_list_by_cuisine,' which returns recipes for a specific cuisine rather than listing available cuisines.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The TRIGGERS section provides explicit example phrases ('what cuisines do you have', 'available food categories') that signal when to invoke this tool, offering clear contextual guidance. However, it lacks explicit guidance on when NOT to use this tool or which sibling tool to use instead (e.g., when the user wants actual recipes rather than a cuisine list).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
meals_plan · Grade A · Read-only · Idempotent
Create a meal plan suggestion for a specified number of days with cuisine and dietary preferences. Returns breakfast, lunch, and dinner recommendations using real SAVOR Dish recipes. TRIGGERS:
'plan my meals for the week', 'meal plan for X days'
'what should I eat this week', 'weekly meal plan'
'meal prep for X days', 'plan dinners for the week'
'family meal plan', 'healthy meal plan', 'budget meal plan'
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | Number of days to plan meals for (1-14) | 7 |
| mealsPerDay | No | Number of meals per day to plan (1-5; 3 = breakfast/lunch/dinner) | 3 |
| preferences | No | Dietary preferences, cuisine focus, or constraints (e.g. 'Mediterranean', 'low-carb', 'family-friendly', 'under 30 minutes', 'vegetarian') | |
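The documented ranges and defaults (days 1-14, default 7; mealsPerDay 1-5, default 3) can be checked client-side before calling the tool. A minimal sketch of that validation, assuming out-of-range values are rejected rather than clamped (the listing does not say which):

```python
def validate_plan_args(days: int = 7, meals_per_day: int = 3) -> dict:
    """Check meal plan arguments against the documented ranges and defaults."""
    if not 1 <= days <= 14:
        raise ValueError("days must be between 1 and 14")
    if not 1 <= meals_per_day <= 5:
        raise ValueError("mealsPerDay must be between 1 and 5")
    return {"days": days, "mealsPerDay": meals_per_day}
```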
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds valuable behavioral context beyond these safety annotations by disclosing the content source ('real SAVOR Dish recipes') and the specific meal structure returned (breakfast, lunch, dinner), which helps the agent understand the response format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately front-loaded with the core purpose statement followed by return value details. While the TRIGGERS section adds length, the formatted bullet list is scannable and the examples are highly relevant for an AI agent determining invocation patterns, justifying the additional space.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description adequately compensates by specifying the return contains 'breakfast, lunch, and dinner recommendations' and cites the 'SAVOR Dish' recipe source. With 100% input schema coverage and zero required parameters, the description provides sufficient context for agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents all three parameters. The description references these parameters ('specified number of days', 'dietary preferences') but adds minimal semantic detail beyond what the schema already provides, meeting the baseline expectation for high-coverage schemas without additional descriptive burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Create a meal plan suggestion for a specified number of days with cuisine and dietary preferences' provides a specific verb (Create) and clear resource scope. It distinguishes from siblings like 'nutrition_daily_plan' by specifying the use of 'real SAVOR Dish recipes' and the specific return structure of 'breakfast, lunch, and dinner recommendations'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The TRIGGERS section provides example invocation phrases ('plan my meals for the week', 'meal prep for X days') that imply usage context, but offers no explicit comparison to sibling alternatives like 'nutrition_daily_plan' or 'cooking_meal_prep_guide' to guide selection between similar meal planning tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nutrition_analyze · Grade A · Read-only · Idempotent
Get a deep nutritional analysis for a recipe — full macros (protein, carbs, fat, fiber), micros (sodium, cholesterol, potassium, saturated fat, sugar), per-serving breakdown, and ingredient-level calorie contributions. TRIGGERS:
'nutrition breakdown for X', 'full nutrition facts for X'
'how many calories per serving in X', 'macro breakdown of X'
'is X high in protein', 'how much fiber in X'
'detailed nutrition analysis', 'calorie breakdown by ingredient'
| Name | Required | Description | Default |
|---|---|---|---|
| recipeSlug | Yes | Recipe slug to analyze (e.g. 'chicken-pho', 'caesar-salad') |
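Since nutrition_analyze takes a single required slug, an invocation is easy to sketch. The JSON-RPC envelope below follows the MCP `tools/call` shape; the `id` value is arbitrary and the slug is the one given as an example in the schema.

```python
import json

# Hypothetical MCP "tools/call" request for nutrition_analyze.
# Envelope shape per the MCP JSON-RPC spec; slug taken from the
# schema's own example ('chicken-pho').
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "nutrition_analyze",
        "arguments": {"recipeSlug": "chicken-pho"},
    },
}
print(json.dumps(request))
```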
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare the operation as read-only, idempotent, and non-destructive. The description adds valuable behavioral context beyond these hints by specifying the exact nutritional data returned (protein, carbs, sodium, cholesterol, etc.) and the granularity (per-serving and ingredient-level), which helps the agent understand what data structure to expect.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with the core purpose front-loaded in the first sentence, followed by a dedicated TRIGGERS section. While the triggers are slightly verbose (four bullet points), they provide actionable query patterns. No sentences are wasted or tautological.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single required parameter, read-only operation) and lack of output schema, the description adequately compensates by detailing the specific nutritional fields returned (macros, micros, calorie breakdown). It provides sufficient context for an agent to understand the tool's utility without overwhelming detail.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single 'recipeSlug' parameter (including clear examples like 'chicken-pho'), the schema fully documents the input. The description does not add parameter-specific semantics, meeting the baseline expectation when the schema is self-sufficient.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific action ('Get a deep nutritional analysis') and enumerates exact outputs (macros, micros, per-serving breakdown, ingredient-level calories). The modifiers 'deep' and 'ingredient-level' effectively distinguish this from the sibling tool 'recipes_get_nutrition', indicating greater granularity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The TRIGGERS section provides explicit user query patterns ('nutrition breakdown for X', 'how many calories per serving') that signal when to invoke this tool. While it offers strong positive guidance, it lacks explicit exclusions or comparison to alternatives (e.g., when to use 'recipes_get_nutrition' instead).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nutrition_compare (Read-only, Idempotent)
Compare nutrition facts across 2-5 recipes side-by-side. Shows calories, protein, carbs, fat, fiber for each recipe in a comparison table format. TRIGGERS:
'compare nutrition of X and Y', 'which is healthier X or Y'
'calories in X vs Y', 'protein comparison X and Y'
'healthiest option between X Y Z'
'nutritional comparison', 'which has fewer calories'
| Name | Required | Description | Default |
|---|---|---|---|
| recipeSlugs | Yes | Array of 2-5 recipe slugs to compare nutritionally (e.g. ['chicken-pho', 'beef-pho', 'veggie-pho']) |
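The 2-5 slug constraint stated in the schema is worth enforcing client-side before spending a call. A minimal sketch (the helper name is hypothetical):

```python
def build_compare_args(recipe_slugs):
    # nutrition_compare accepts 2-5 slugs; reject out-of-range
    # input locally rather than burning a call on a server error.
    if not 2 <= len(recipe_slugs) <= 5:
        raise ValueError("nutrition_compare needs 2-5 recipe slugs")
    return {"recipeSlugs": list(recipe_slugs)}

# Slugs taken from the schema's example
args = build_compare_args(["chicken-pho", "beef-pho", "veggie-pho"])
```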
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare readOnly/idempotent status, the description adds valuable output context: the specific macro nutrients returned and the 'comparison table format'. It does not contradict annotations and appropriately clarifies the closed-world scope (2-5 recipes).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with purpose front-loaded in the first sentence and output format in the second. The TRIGGERS section, while helpful for routing, adds length that prevents a perfect score for conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter tool with good annotations, the description is complete. It compensates for the missing output_schema by describing the comparison table format and specific nutritional fields returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema adequately documents the recipeSlugs parameter including format examples ('chicken-pho'). The description provides no additional parameter semantics, which is acceptable given the schema's completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool compares 'nutrition facts across 2-5 recipes side-by-side' and lists specific metrics (calories, protein, carbs, fat, fiber), clearly distinguishing it from sibling tools like recipes_compare (general comparison) or nutrition_analyze (single recipe).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The TRIGGERS section provides explicit natural-language patterns for when to invoke this tool (e.g., 'compare nutrition of X and Y', 'which is healthier'). However, it lacks explicit mention of alternatives like nutrition_analyze for single-recipe analysis.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nutrition_daily_plan (Read-only, Idempotent)
Build a single-day meal plan optimized for specific calorie and macro goals. Returns breakfast, lunch, dinner, and optional snack with combined nutrition totals hitting your targets. TRIGGERS:
'meal plan for 2000 calories', 'build me a 1500 calorie day'
'high protein meal plan', 'plan my meals for 150g protein'
'keto meal plan for today', 'low carb day plan'
'bodybuilding meal plan', 'cut diet plan', 'bulking meals'
'plan meals to hit my macros'
| Name | Required | Description | Default |
|---|---|---|---|
| cuisine | No | Preferred cuisine style (e.g. 'mediterranean', 'asian') | |
| maxCarbs | No | Maximum daily carbs in grams (e.g. 50 for keto, 150 for low-carb) | |
| minProtein | No | Minimum daily protein target in grams (e.g. 120 for athletic, 150 for bodybuilding) | |
| targetCalories | No | Target total daily calories (800-5000) | 2000 |
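Because all four parameters are optional, a caller should send only the keys the user actually specified and let the server apply its documented defaults. A sketch of that argument-building step (the function name is illustrative):

```python
def daily_plan_args(target_calories=None, min_protein=None,
                    max_carbs=None, cuisine=None):
    # All four parameters are optional; the server defaults
    # targetCalories to 2000. Omit unset keys rather than
    # sending explicit nulls.
    raw = {
        "targetCalories": target_calories,
        "minProtein": min_protein,
        "maxCarbs": max_carbs,
        "cuisine": cuisine,
    }
    return {k: v for k, v in raw.items() if v is not None}

# A 'keto meal plan' style request: carb cap per the schema's
# keto example (50g), with an explicit calorie target.
args = daily_plan_args(target_calories=1800, max_carbs=50)
```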
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and idempotentHint=true. The description adds valuable behavioral context beyond these annotations by disclosing the output structure (breakfast, lunch, dinner, optional snack with combined totals), which is critical since no output schema exists. It does not mention rate limits or caching, but covers the essential return format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with the core purpose front-loaded in the first sentence, followed by explicit TRIGGERS that act as usage patterns. Every element earns its place—no redundant filler, yet comprehensive enough to guide invocation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 optional parameters with full schema coverage and no output schema, the description adequately compensates by explaining the return structure (meals + nutrition totals). It appropriately leverages the annotations to cover safety/idempotency without redundant text. Minor gap: could explicitly note that all parameters are optional for maximum flexibility.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the input schema has 100% description coverage (baseline 3), the TRIGGERS section adds significant semantic value by mapping real-world user intents to parameters (e.g., '2000 calories' → targetCalories, '150g protein' → minProtein, 'keto' → maxCarbs). This contextual bridging helps the AI agent correctly interpret vague user requests.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Build', 'Returns') and clearly identifies the resource (single-day meal plan) and optimization goals (calorie and macro targets). The 'single-day' scope effectively distinguishes it from sibling tool 'meals_plan' (which implies broader planning), while 'optimized for specific calorie and macro goals' differentiates it from general recipe search tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The TRIGGERS section provides excellent example use cases showing when to invoke the tool (calorie-specific requests, macro targets, diet types like keto/bulking). However, it lacks explicit guidance on when NOT to use this tool versus siblings like 'meals_plan' (multi-day) or 'nutrition_find_by_macros' (finding existing foods vs building meals).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nutrition_find_by_macros (Read-only, Idempotent)
Find recipes that match specific nutritional targets. Filter by max calories, minimum protein, max carbs, max fat — perfect for fitness goals, dieting, or health-conscious eating. TRIGGERS:
'high protein recipes', 'recipes under 400 calories'
'low carb meals', 'high fiber recipes'
'keto-friendly recipes by nutrition', 'recipes for muscle building'
'meals under 500 calories with at least 30g protein'
'find recipes for my diet', 'low calorie dinner options'
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum results to return (1-20) | 10 |
| maxFat | No | Maximum grams of fat per serving (e.g. 15 for low-fat) | |
| cuisine | No | Optional cuisine filter (e.g. 'italian', 'asian') | |
| maxCarbs | No | Maximum grams of carbs per serving (e.g. 20 for low-carb/keto) | |
| minFiber | No | Minimum grams of fiber per serving (e.g. 5 for high-fiber) | |
| minProtein | No | Minimum grams of protein per serving (e.g. 25 for high-protein) | |
| maxCaloriesPerServing | No | Maximum calories per serving (e.g. 400 for a light meal, 600 for a moderate meal) |
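The TRIGGERS show how a compound request maps onto these filters. For the example phrase 'meals under 500 calories with at least 30g protein', the arguments would plausibly look like this (values are per serving, per the schema):

```python
# Mapping the trigger phrase 'meals under 500 calories with at
# least 30g protein' onto nutrition_find_by_macros filters.
# limit is pinned to the documented default of 10 for clarity.
arguments = {
    "maxCaloriesPerServing": 500,
    "minProtein": 30,
    "limit": 10,
}
```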
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnly/idempotent/destructive=false, so the safety profile is covered. The description adds valuable use-case context ('perfect for fitness goals, dieting') but does not disclose edge-case behavior (e.g., what happens when zero parameters are provided since all 7 are optional) or rate limiting concerns.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core function, followed by filter categories and use-case context. The TRIGGERS section, while lengthy, earns its place by providing concrete query patterns. No redundant or wasted sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 7 optional parameters and no output schema, the description adequately covers input intent and use cases but omits output structure details (what recipe fields are returned) and behavior when invoked without parameters. Sufficient but not comprehensive for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds semantic value by explicitly connecting the filter parameters to real-world goals ('max calories... for a light meal', 'minimum protein... for high-protein') and providing trigger examples that illustrate parameter combinations (e.g., 'meals under 500 calories with at least 30g protein').
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Find recipes that match specific nutritional targets' with specific verb (find), resource (recipes), and method (nutritional targets/macros). It clearly distinguishes from sibling tools like 'recipes_find_by_ingredient' (which searches by ingredients) by focusing on macro-based filtering.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The extensive TRIGGERS section provides concrete invocation patterns ('high protein recipes', 'recipes under 400 calories') that clearly signal when to use this tool versus general search or ingredient-based tools. However, it lacks explicit 'when not to use' guidance or named alternative tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nutrition_healthier_swap (Read-only, Idempotent)
Get AI-powered suggestions to make a recipe healthier — ingredient swaps, cooking technique changes, and portion adjustments with estimated calorie savings. TRIGGERS:
'make X healthier', 'healthier version of X'
'reduce calories in X', 'lower fat version of X'
'how to make X lighter', 'healthy swap for X recipe'
'healthify this recipe', 'clean eating version of X'
| Name | Required | Description | Default |
|---|---|---|---|
| goal | No | Health optimization goal: lower-calorie, higher-protein, lower-carb, lower-fat, higher-fiber, or general | general |
| recipeSlug | Yes | Recipe slug to get healthier suggestions for (e.g. 'mac-and-cheese', 'fried-chicken') |
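The goal parameter is a closed enum with a 'general' default, so a caller can validate it locally. A sketch, using the enum values and example slug from the schema (the helper name is hypothetical):

```python
VALID_GOALS = {"lower-calorie", "higher-protein", "lower-carb",
               "lower-fat", "higher-fiber", "general"}

def swap_args(recipe_slug, goal="general"):
    # recipeSlug is required; goal falls back to the schema
    # default 'general' when the user states no specific aim.
    if goal not in VALID_GOALS:
        raise ValueError(f"unknown goal: {goal}")
    return {"recipeSlug": recipe_slug, "goal": goal}

args = swap_args("mac-and-cheese", goal="lower-calorie")
```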
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations confirm the tool is read-only and non-destructive, the description adds valuable behavioral context: it specifies that suggestions include estimated calorie savings and covers three specific modification types (ingredient swaps, cooking techniques, and portions). This disclosure exceeds what the structured annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently front-loaded with a clear action statement, followed by an em-dash list of specific capabilities. The TRIGGERS section is structured as a clean bulleted list that earns its place by providing concrete invocation patterns without unnecessary verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only two parameters with complete schema coverage and no output schema, the description adequately explains the conceptual return value (AI suggestions with calorie savings). It appropriately leverages the annotations for safety context while detailing the suggestion types, making it complete for its complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already fully documents both parameters (recipeSlug with examples and goal with complete enum values). The description does not add parameter-specific syntax or semantics beyond what the schema provides, meeting the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Get[s] AI-powered suggestions to make a recipe healthier' with specific details about ingredient swaps, cooking techniques, and portion adjustments. It effectively distinguishes from siblings like cooking_get_substitutions (general substitutions) by emphasizing health optimization and calorie savings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The TRIGGERS section provides explicit phrases indicating when to use the tool (e.g., 'make X healthier', 'reduce calories in X'). However, it lacks explicit guidance on when NOT to use it or mentions of alternatives like cooking_get_substitutions for non-health-related substitutions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nutrition_ingredient_info (Read-only, Idempotent)
Get nutrition facts, health benefits, and culinary uses for any cooking ingredient. Comprehensive reference for common cooking ingredients. TRIGGERS:
'nutrition in chicken breast', 'is avocado healthy'
'calories in rice', 'protein in salmon'
'health benefits of turmeric', 'facts about olive oil'
'what is nutritional yeast', 'info about quinoa'
| Name | Required | Description | Default |
|---|---|---|---|
| ingredient | Yes | Ingredient name to look up (e.g. 'chicken breast', 'avocado', 'quinoa', 'salmon', 'olive oil') |
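As the assessments below note, the descriptions leave the single-ingredient vs whole-recipe boundary implicit. An agent could apply a rough routing heuristic; the one sketched here (hyphenated slugs go to nutrition_analyze, plain names here) is purely illustrative and not server-defined behavior:

```python
def route_nutrition_query(subject):
    # Illustrative heuristic only: recipe slugs in this catalog are
    # hyphenated ('chicken-pho'), while ingredient names are plain
    # phrases ('chicken breast'). Real routing should rely on
    # clearer signals from the user's request.
    if "-" in subject:
        return ("nutrition_analyze", {"recipeSlug": subject})
    return ("nutrition_ingredient_info", {"ingredient": subject})

tool, args = route_nutrition_query("chicken breast")
```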
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations establish the operation is read-only and idempotent, the description adds valuable behavioral context by specifying the three categories of returned data: 'nutrition facts, health benefits, and culinary uses'. This disclosure about the comprehensiveness of the reference (beyond just nutritional data) helps set appropriate expectations for output richness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately front-loaded with the core purpose in the first sentence, followed by scope clarification ('Comprehensive reference'). The TRIGGERS section, while lengthy with eight examples, efficiently communicates usage patterns through concrete query samples. No sentences appear wasted, though the list format consumes more tokens than strictly necessary.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter lookup tool with robust annotations, the description adequately compensates for the missing output schema by enumerating the three information categories returned (nutrition, health benefits, culinary uses). Given the read-only nature and closed-world hint, the description provides sufficient context without needing to detail error states or pagination.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input parameter is already well-documented with clear examples ('chicken breast', 'avocado', etc.). The description does not add significant semantic meaning beyond what the schema provides, serving primarily to confirm the parameter represents a 'cooking ingredient' rather than adding format constraints or validation rules.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'nutrition facts, health benefits, and culinary uses' for cooking ingredients, specifying both the action and resource. However, it lacks explicit differentiation from sibling tools like 'nutrition_analyze' or 'nutrition_compare', leaving potential ambiguity about whether to use this for single ingredients versus meals or comparisons.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The TRIGGERS section provides eight specific example queries (e.g., 'nutrition in chicken breast', 'is avocado healthy') which help imply usage patterns through concrete examples. However, it lacks explicit guidance on when NOT to use this tool versus alternatives like 'nutrition_analyze' or 'recipes_get_nutrition', and omits any prerequisites or constraints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nutrition_meal_score (Read-only, Idempotent)
Rate the nutritional quality of a meal plan or set of recipes on a 0-100 health score. Evaluates balance of protein, fiber, calorie density, and variety. TRIGGERS:
'rate my meal plan', 'how healthy is my meal selection'
'score these recipes nutritionally', 'nutrition grade for these meals'
'is this a balanced meal plan', 'health check my recipes'
| Name | Required | Description | Default |
|---|---|---|---|
| recipeSlugs | Yes | Array of recipe slugs to score as a meal or plan (e.g. ['oatmeal-bowl', 'chicken-salad', 'salmon-dinner']) |
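A call sketch for the single array parameter, using the slugs from the schema's example. Note that no output schema is published, so any field name for the returned 0-100 score would be an assumption:

```python
def score_request(recipe_slugs):
    # nutrition_meal_score takes one required array parameter and
    # returns a 0-100 health score; the response structure is
    # undocumented, so only the request side is sketched here.
    return {
        "name": "nutrition_meal_score",
        "arguments": {"recipeSlugs": list(recipe_slugs)},
    }

req = score_request(["oatmeal-bowl", "chicken-salad", "salmon-dinner"])
```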
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint and idempotentHint, so the description appropriately focuses on adding scoring methodology details: the '0-100' scale and specific evaluation criteria ('protein, fiber, calorie density, and variety'). This adds valuable behavioral context beyond the safety annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with purpose first, evaluation criteria second, and trigger patterns third. Every sentence conveys distinct information (scoring action, specific metrics, invocation triggers) with no redundant text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the single parameter and simple read-only operation, the description is appropriately complete. It compensates for the missing output schema by specifying the '0-100' return format. Minor gap: doesn't specify behavior for invalid recipe slugs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage including examples (['oatmeal-bowl', 'chicken-salad']), the schema fully documents the recipeSlugs parameter. The description references 'meal plan or set of recipes' which aligns with the parameter but doesn't add semantic meaning beyond what's in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool 'Rate[s] the nutritional quality' (specific verb) of 'a meal plan or set of recipes' (resource) on a '0-100 health score' (distinguishing output format). This clearly differentiates it from siblings like nutrition_analyze or nutrition_compare by specifying the singular scoring output.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The TRIGGERS section provides explicit phrases indicating when to use the tool (e.g., 'rate my meal plan', 'health check my recipes'). While it doesn't explicitly name alternative tools to use instead, the trigger patterns provide clear contextual guidance for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
platform_get_info (Read-only, Idempotent)
Get comprehensive information about the SAVOR Dish platform. Returns features, pricing tiers, supported platforms, and Instacart integration details. TRIGGERS:
'what is SAVOR Dish', 'tell me about this app', 'platform info'
'what features do you have', 'pricing plans', 'how much does it cost'
'what platforms are supported', 'do you have a mobile app'
| Name | Required | Description | Default |
|---|---|---|---|
| section | No | Which section to return: 'all', 'features', 'pricing', 'platforms', or 'integrations' | all |
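Since section is a small closed enum defaulting to 'all', a caller only needs to pass a narrower value when the user asks about one topic. A sketch (helper name is hypothetical):

```python
SECTIONS = ("all", "features", "pricing", "platforms", "integrations")

def platform_info_args(section="all"):
    # 'all' (the schema default) returns every section; pass a
    # narrower value such as 'pricing' when the user asks about
    # a single topic, to keep the response small.
    if section not in SECTIONS:
        raise ValueError(f"section must be one of {SECTIONS}")
    return {"section": section}

args = platform_info_args("pricing")
```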
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnlyHint, destructiveHint, idempotentHint), so the description's burden is lighter. It adds value by disclosing what content is returned (features, pricing, etc.) but doesn't elaborate on operational traits like caching, authentication requirements, or rate limits that aren't covered by annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence. The structured TRIGGERS section, while slightly unconventional format-wise, efficiently communicates invocation patterns without excessive verbosity. Every line serves a distinct purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the single optional parameter and rich annotations, the description adequately covers the tool's scope. It explains what information is returned despite lacking an output schema, though it could briefly mention the response format (structured data vs prose).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description enhances this by mapping the abstract 'section' parameter to concrete content domains (pricing tiers, supported platforms, Instacart integration), providing semantic context for why a user might choose specific enum values over 'all'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool 'Get[s] comprehensive information about the SAVOR Dish platform' and lists specific resource domains covered (features, pricing tiers, supported platforms, Instacart integration). It clearly distinguishes from cooking/recipe/nutrition siblings by focusing on platform metadata rather than food content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The TRIGGERS section provides explicit example queries ('what is SAVOR Dish', 'pricing plans', 'what platforms are supported') that clearly indicate when to invoke the tool. While it doesn't explicitly state when NOT to use it or name alternatives, the examples effectively bound the scope against sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recipes_collections (Read-only, Idempotent)
Browse curated recipe collections by theme — date night, comfort food, quick weeknight, party appetizers, and more. TRIGGERS:
'date night recipes', 'comfort food ideas', 'party appetizers'
'meal prep recipes', 'one pot meals', 'kid-friendly recipes'
'romantic dinner', 'game day food', 'brunch ideas'
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of recipes to return | |
| theme | Yes | Collection theme to browse | |
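As a concrete sketch of what an invocation looks like on the wire, the snippet below builds the JSON-RPC 2.0 `tools/call` payload an MCP client would send over the Streamable HTTP transport. The envelope shape follows the MCP specification; the argument values are illustrative, and `build_tool_call` is a hypothetical helper, not part of any SDK.

```python
# Hypothetical helper: wraps a tool name and arguments in the
# JSON-RPC 2.0 envelope that MCP's tools/call method expects.
def build_tool_call(name, arguments, request_id=1):
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# 'theme' is required; 'limit' is optional per the table above.
payload = build_tool_call(
    "recipes_collections",
    {"theme": "date night", "limit": 5},
)
```

An MCP client library normally builds this envelope for you; the sketch only makes the mapping from the parameter table to the request body explicit.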
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and destructiveHint=false, covering safety profile. The description adds that collections are 'curated' (implying editorial selection vs algorithmic), but does not disclose return format, pagination behavior beyond the limit param, or whether recipes include full details or just summaries.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is front-loaded with purpose statement followed by structured TRIGGERS list. Every element earns its place—no redundant filler, appropriately sized for the tool's complexity, and uses formatting (dashes, line breaks) effectively.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple 2-parameter schema with 100% coverage and strong annotations, the description adequately covers tool purpose and selection criteria. Minor gap: lacks brief mention of return structure (recipe objects vs IDs) since no output schema exists, though this is partially mitigated by sibling tool patterns.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While schema coverage is 100%, the description enriches the 'theme' parameter by mapping natural language examples ('date night', 'comfort food', 'party appetizers') to the enum values, helping the agent understand the semantic intent behind each theme option.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Browse' with resource 'curated recipe collections' and clearly distinguishes from siblings by emphasizing 'by theme' (date night, comfort food, etc.), differentiating it from recipes_search (general), recipes_find_by_ingredient (ingredient-based), and cuisines_explore (cuisine-based).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The TRIGGERS section provides explicit when-to-use guidance with 9 specific natural language phrases ('date night recipes', 'comfort food ideas', etc.) that should invoke this tool, making it crystal clear when to select this over alternative recipe tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recipes_compare (Read-only, Idempotent)
Compare two or more recipes side-by-side. Shows prep time, cook time, servings, difficulty, and ingredients for easy comparison. TRIGGERS:
'compare X and Y recipes', 'which is easier X or Y'
'difference between X and Y', 'X vs Y recipe'
'compare these recipes', 'which recipe is faster'
| Name | Required | Description | Default |
|---|---|---|---|
| recipeSlugs | Yes | Array of 2-5 recipe slugs to compare (e.g. ['chicken-pho', 'beef-pho']) | |
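Since the schema bounds `recipeSlugs` to 2-5 items, a client can enforce the constraint before calling the tool. A minimal sketch, with an invented helper name:

```python
def validate_recipe_slugs(slugs):
    # recipes_compare's schema requires an array of 2-5 slugs;
    # fail fast locally instead of waiting for a server error.
    if not 2 <= len(slugs) <= 5:
        raise ValueError(f"recipes_compare expects 2-5 slugs, got {len(slugs)}")
    return {"recipeSlugs": list(slugs)}

args = validate_recipe_slugs(["chicken-pho", "beef-pho"])
```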
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent/destructive hints, so the description appropriately focuses on adding output context: it specifies exactly which fields are compared (prep time, cook time, servings, difficulty, ingredients). This disclosure of comparison dimensions adds meaningful behavioral detail beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with a single clear functional sentence followed by a well-organized TRIGGERS bullet list. Every sentence serves a distinct purpose (capability statement vs. invocation patterns) with no redundancy or wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter read tool without output schema, the description adequately covers what the agent needs to know: the comparison dimensions and trigger contexts. It appropriately omits redundant schema constraints (2-5 items) that are already well-documented in the input schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for recipeSlugs (including the 2-5 item constraint and example format), the schema carries the full semantic load. The description adds no parameter-specific details, which aligns with the baseline score of 3 for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Compare two or more recipes side-by-side' with specific comparison dimensions (prep time, cook time, servings, difficulty, ingredients). This distinguishes it from siblings like nutrition_compare (nutritional focus) and recipes_get (single recipe retrieval).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The TRIGGERS section provides explicit example phrases ('compare X and Y recipes', 'which is easier X or Y') that help the agent recognize invocation contexts. However, it lacks explicit guidance on when to prefer nutrition_compare for nutritional comparisons over this tool's recipe-metadata comparison.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recipes_find_by_ingredient (Read-only, Idempotent)
Find recipes that use specific ingredients you have on hand. Great for reducing food waste and using up what's in your fridge or pantry. TRIGGERS:
'what can I make with X and Y', 'recipes with chicken and rice'
'I have X, what can I cook', 'use up my X'
'fridge clean out recipes', 'what to make with leftovers'
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results (1-50, default 10) | |
| matchAll | No | If true, only return recipes containing ALL listed ingredients. If false (default), return recipes matching ANY ingredient. | |
| ingredients | Yes | Array of ingredients to search for (e.g. ['chicken', 'rice', 'garlic']) | |
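The `matchAll` flag switches between ANY and ALL semantics. Below is a toy reimplementation of the documented behavior, useful for reasoning about which flag to pass; the server's actual matching may differ, e.g. by fuzzy-matching ingredient names.

```python
def matches(recipe_ingredients, query, match_all=False):
    # matchAll=True: every queried ingredient must be present.
    # matchAll=False (default): any single queried ingredient suffices.
    wanted = {q.lower() for q in query}
    have = {i.lower() for i in recipe_ingredients}
    return wanted <= have if match_all else bool(wanted & have)

stir_fry = ["chicken", "rice", "garlic", "soy sauce"]
```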
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, covering the safety profile. The description adds valuable use-case context ('reducing food waste', 'use up what's in your fridge') but does not disclose pagination behavior, result format, or rate limiting.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly structured: purpose statement, value proposition, then explicit trigger patterns. Every sentence earns its place. The TRIGGERS formatting as a list aids LLM pattern matching without verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the rich annotations and complete schema coverage, the description provides sufficient context for tool selection. It appropriately emphasizes the food-waste reduction value proposition. A minor gap is the lack of return value description, though this is less critical for a straightforward search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already fully documents the ingredients array (with examples), limit constraints, and matchAll logic. The description implies the ingredients parameter through 'specific ingredients' but adds no additional semantic detail beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Find') + resource ('recipes') + mechanism ('use specific ingredients you have on hand'). It distinguishes from sibling tools like recipes_search by emphasizing the 'fridge/pantry' and 'leftovers' use case, making the scope distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The TRIGGERS section provides excellent query-pattern examples ('what can I make with X', 'fridge clean out') that implicitly signal when to use this tool. However, it lacks explicit exclusions like 'when you don't know the specific ingredients, use recipes_search instead'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recipes_get (Read-only, Idempotent)
Get full details of a specific recipe including ingredients, step-by-step instructions, nutrition, tips, and photo. TRIGGERS:
'show me the recipe for X', 'get recipe X', 'recipe details for X'
'how do I make X', 'ingredients for X', 'instructions for X'
'what's in X', 'nutrition info for X'
| Name | Required | Description | Default |
|---|---|---|---|
| slugOrId | Yes | Recipe URL slug (e.g. 'easy-mediterranean-chickpea-salad') or UUID identifier | |
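Because `slugOrId` accepts either a URL slug or a UUID, a client that stores both kinds of identifiers may want to know which it is passing. A hypothetical classifier follows; the server accepts both forms either way, so this is purely for local bookkeeping.

```python
import re

# Canonical 8-4-4-4-12 hex UUID shape; anything else is treated as a slug.
UUID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$",
    re.IGNORECASE,
)

def classify_recipe_ref(value):
    return "uuid" if UUID_RE.match(value) else "slug"
```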
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent/destructive hints, so the description appropriately focuses on return value disclosure, listing exactly what data fields are included (ingredients, instructions, nutrition, tips, photo). No contradictions with annotations. Could mention error handling for invalid slugs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
First sentence is perfectly front-loaded with value. The TRIGGERS section is structured and useful but slightly verbose; the hyphenated list format is readable. No tautology or wasted words in the core definition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description compensates well by enumerating the specific data returned (ingredients through photos). For a single-parameter read-only operation with strong annotations, this provides sufficient context for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage for the slugOrId parameter, including format examples (URL slug vs UUID). With schema carrying full semantic load, the description correctly focuses on behavior rather than parameter documentation. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity with verb 'Get' + resource 'recipe' + comprehensive scope listing 'ingredients, step-by-step instructions, nutrition, tips, and photo'. The phrase 'full details' effectively distinguishes this from siblings like recipes_get_quick and recipes_get_nutrition.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The TRIGGERS section provides clear natural language patterns for invocation (e.g., 'show me the recipe for X'), but lacks explicit guidance on when NOT to use this versus alternatives like recipes_search (when ID unknown) or recipes_get_quick (when summary suffices).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recipes_get_dietary (Read-only, Idempotent)
Find recipes matching specific dietary requirements. Filters by tags, categories, and recipe metadata for dietary needs. TRIGGERS:
'vegan recipes', 'gluten-free meals', 'keto options'
'vegetarian dinner', 'dairy-free recipes', 'paleo meals'
'low-carb options', 'whole30 recipes', 'nut-free dishes'
DIETARY OPTIONS: vegan, vegetarian, gluten-free, dairy-free, keto, paleo, low-carb, whole30, nut-free, soy-free, egg-free, pescatarian, halal, kosher
| Name | Required | Description | Default |
|---|---|---|---|
| diet | Yes | Dietary requirement to filter by (e.g. 'vegan', 'gluten-free', 'keto', 'dairy-free', 'paleo', 'low-carb', 'vegetarian') | |
| limit | No | Maximum number of results (1-50, default 10) | |
| cuisine | No | Optional cuisine filter to combine with dietary requirement (e.g. 'italian', 'thai') | |
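Because `diet` is a plain string with no enum constraint in the schema, a client can normalize user phrasing against the DIETARY OPTIONS list above before invoking the tool. A sketch with an invented helper name; the server's own handling of unlisted values is undocumented.

```python
DIETARY_OPTIONS = {
    "vegan", "vegetarian", "gluten-free", "dairy-free", "keto", "paleo",
    "low-carb", "whole30", "nut-free", "soy-free", "egg-free",
    "pescatarian", "halal", "kosher",
}

def normalize_diet(value):
    # Map casual phrasing ('Gluten Free', 'low_carb') onto the
    # hyphenated forms the description enumerates.
    candidate = value.strip().lower().replace(" ", "-").replace("_", "-")
    if candidate not in DIETARY_OPTIONS:
        raise ValueError(f"unsupported diet: {value!r}")
    return candidate
```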
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare this as read-only and safe (readOnlyHint=true, destructiveHint=false). The description adds valuable behavioral context by explaining the filtering mechanism ('Filters by tags, categories, and recipe metadata'), which helps the agent understand how the dietary matching works beyond just the parameter name.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with the main purpose front-loaded, followed by specific TRIGGERS and DIETARY OPTIONS sections. While the bulleted lists are somewhat verbose, they serve as scannable reference material that improves usability for the LLM without redundant prose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only 3 parameters with 100% schema coverage and no output schema, the description is appropriately complete. It provides the comprehensive list of supported dietary values and usage triggers that fully prepare the agent to invoke the tool correctly, though it could mention behavior when no matches are found.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage, the description adds crucial value by enumerating valid dietary options (vegan, vegetarian, gluten-free, etc.) that the schema only describes as generic strings without enum constraints. This compensates for the lack of enum validation in the schema and clarifies expected values for the 'diet' parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Find recipes matching specific dietary requirements' with specific verb (Find), resource (recipes), and scope (dietary requirements). It effectively distinguishes from siblings like recipes_search and recipes_find_by_ingredient by emphasizing dietary filtering via tags, categories, and metadata specifically for dietary needs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The TRIGGERS section provides excellent concrete examples of when to invoke this tool (e.g., 'vegan recipes', 'gluten-free meals'). While it lacks explicit 'when not to use' language or named alternatives, the specific dietary focus provides clear implicit guidance that this should be used for dietary restrictions rather than general recipe searches.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recipes_get_nutrition (Read-only, Idempotent)
Get detailed nutrition information for a specific recipe. Returns calories, macros, and dietary labels. TRIGGERS:
'nutrition for X', 'calories in X', 'how healthy is X'
'macros for X', 'carbs in X recipe', 'protein in X'
'is X healthy', 'dietary info for X', 'nutrition facts'
| Name | Required | Description | Default |
|---|---|---|---|
| recipeSlug | Yes | Recipe slug to get nutrition info for (e.g. 'chicken-pho', 'caesar-salad') | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond the annotations by specifying the return payload structure (calories, macros, and dietary labels). This compensates for the missing output schema. The TRIGGERS section also adds invocation pattern transparency. It does not contradict the readOnly/idempotent annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with the core purpose front-loaded in the first sentence, followed by return value disclosure. The TRIGGERS section, while extensive, provides actionable invocation patterns. No sentences appear redundant or wasteful.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter read operation with rich annotations, the description is sufficiently complete. It compensates for the lack of an output schema by listing the specific data fields returned. It could improve by mentioning error behavior (e.g., invalid recipe slug), but this is not critical for the complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents the recipeSlug parameter including format examples ('chicken-pho'). The description references 'a specific recipe' which aligns with the parameter semantics but does not add additional syntactic guidance or validation rules beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Get[s] detailed nutrition information for a specific recipe' with specific outputs (calories, macros, dietary labels). The phrase 'for a specific recipe' effectively distinguishes it from general nutrition analysis tools (like nutrition_analyze) and basic recipe retrieval (recipes_get).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The TRIGGERS section provides implicit usage guidance through example queries ('nutrition for X', 'calories in X'), helping identify invocation patterns. However, it lacks explicit when-not-to-use guidance or comparisons to siblings like nutrition_analyze or recipes_get_dietary, which could be confused with this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recipes_get_quick (Read-only, Idempotent)
Find quick recipes that can be made within a specified time limit. Perfect for busy weeknights or quick meals. TRIGGERS:
'quick recipes', 'fast meals', '15 minute recipes'
'easy weeknight dinner', 'recipes under 30 minutes'
'fast lunch ideas', 'quick and easy meals'
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results (1-20, default 10) | |
| mealType | No | Optional meal type filter: breakfast, lunch, dinner, snack, dessert | |
| maxMinutes | No | Maximum total cooking time in minutes (5-120, default 30) | |
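All three parameters are optional with documented ranges, so a client can clamp values to those ranges rather than risk a server-side validation error. A sketch with an invented helper name:

```python
def quick_recipe_args(max_minutes=30, limit=10, meal_type=None):
    # Clamp to the documented ranges: maxMinutes 5-120, limit 1-20.
    args = {
        "maxMinutes": max(5, min(120, max_minutes)),
        "limit": max(1, min(20, limit)),
    }
    # mealType is omitted entirely when not requested, matching
    # its optional status in the schema.
    if meal_type is not None:
        args["mealType"] = meal_type
    return args
```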
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, covering the safety profile. The description adds valuable behavioral context about the time-based filtering logic ('specified time limit'), but does not disclose rate limits, caching behavior, or what happens when no recipes match the criteria.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with a clear functional statement front-loaded, followed by a structured TRIGGERS section. Every element serves a purpose — no redundant or tautological text. The formatting with clear bullet points enhances scannability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a straightforward read-only search tool with complete input schema coverage and annotations, the description provides sufficient context for invocation. However, without an output schema, it could briefly mention that it returns a list of recipes to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents all three parameters (limit, mealType, maxMinutes). The description mentions 'time limit' which conceptually maps to maxMinutes but does not add semantic details beyond what the schema already provides, meriting the baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Find[s] quick recipes that can be made within a specified time limit' — a specific verb (find), resource (recipes), and scope (time-constrained). It effectively distinguishes itself from siblings like recipes_search or recipes_get by emphasizing the quick/fast temporal aspect.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The TRIGGERS section explicitly lists natural language patterns ('quick recipes', '15 minute recipes', 'easy weeknight dinner') that indicate when to use this tool, providing excellent contextual guidance. However, it lacks explicit mentions of alternatives (e.g., 'use recipes_search for non-time-bound queries') or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recipes_get_random (Read-only, Idempotent)
Get a random recipe suggestion. Perfect for when you can't decide what to cook. Optionally filter by cuisine or meal type. TRIGGERS:
'surprise me', 'random recipe', 'I can't decide what to cook'
'pick a recipe for me', 'what should I cook tonight'
'random dinner idea', 'suggest something to make'
| Name | Required | Description | Default |
|---|---|---|---|
| cuisine | No | Optional cuisine filter (e.g. 'italian', 'vietnamese') | |
| mealType | No | Optional meal type filter: breakfast, lunch, dinner, snack, dessert | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent/destructive hints, so the description focuses on adding the critical 'random' behavioral trait and trigger conditions. It does not disclose what happens when filters match no recipes (an empty result? an error?) or any rate limiting, leaving behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with purpose front-loaded in first sentence, usage context second, parameters third. TRIGGERS section is verbose but provides valuable signal for LLM routing. Minor deduction for repetitive trigger phrasing that could be condensed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a tool with two optional parameters and good annotations, but the description lacks any mention of return-value structure (no output schema exists) or error conditions. It should indicate whether it returns a full recipe object or just an ID/title.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with complete descriptions for both optional parameters. The description confirms optionality ('Optionally filter by...') but adds no additional semantic context (e.g., format details, interaction between filters) beyond what the schema already provides. The baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with specific verb-resource pair ('Get a random recipe suggestion') and distinguishes clearly from siblings via the 'random' qualifier, contrasting with recipes_search, recipes_get, and recipes_find_by_ingredient which imply targeted retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear usage context ('Perfect for when you can't decide what to cook') and extensive trigger phrase examples under TRIGGERS section. Lacks explicit 'when not to use' or named alternatives, but the randomness emphasis effectively signals when to prefer this over specific query tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recipes_get_trending (Read-only, Idempotent)
Get the most popular and recently published recipes on SAVOR Dish. Returns trending recipes with photos, ordered by newest first. TRIGGERS:
'what's trending', 'popular recipes', 'top recipes'
'what's new', 'latest recipes', 'most popular dishes'
'show me trending food', 'best recipes right now'
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of trending recipes to return (1-20, default 10) | |
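None of these tools publish an output schema, so per the MCP specification results arrive as a list of content blocks rather than typed JSON. Below is a sketch of pulling the text parts out of a `tools/call` result; the sample payload is invented for illustration.

```python
def extract_text(result):
    # Concatenate every text-type content block in the result;
    # non-text blocks (e.g. recipe photo images) are skipped.
    return "\n".join(
        block["text"]
        for block in result.get("content", [])
        if block.get("type") == "text"
    )

# Invented sample shaped like an MCP tools/call result.
sample = {"content": [{"type": "text", "text": "1. Pho Ga\n2. Bun Cha"}]}
```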
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true. The description adds valuable behavioral context not in annotations: it specifies results include 'photos' and are 'ordered by newest first.' However, it omits other behavioral details like pagination behavior, rate limits, or cache implications, so it meets but does not exceed expectations for annotated tools.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately front-loaded with the core action and scope in the first sentence. The TRIGGERS section, while formatted informally (markdown-style list), efficiently communicates invocation patterns without excessive verbosity. No sentences appear redundant.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no output schema, the description adequately compensates by stating it returns 'trending recipes with photos, ordered by newest first.' Combined with the single well-documented parameter and comprehensive safety annotations (read-only, non-destructive), the description provides sufficient context for an agent to invoke and handle results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline score is 3. The description does not mention the 'limit' parameter or provide syntax examples beyond what the schema already documents ('Maximum number of trending recipes to return'), so it neither adds nor detracts from schema clarity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Get[s] the most popular and recently published recipes on SAVOR Dish' with specific verbs and resources. It distinguishes from sibling tools like recipes_search or recipes_get by emphasizing 'trending,' 'popular,' and 'newest first' ordering, clarifying this is a discovery feed rather than a lookup or search tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The TRIGGERS section provides implied usage context by listing user utterances that should invoke this tool ('what's trending', 'latest recipes'), but lacks explicit guidance on when NOT to use it or which sibling tools (e.g., recipes_search vs. recipes_get_trending) are appropriate alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recipes_list_by_cuisine · A · Read-only · Idempotent
Browse recipes filtered by world cuisine type. Returns public recipes from a specific culinary tradition. TRIGGERS:
'show me Vietnamese recipes', 'Italian food', 'Mexican dishes'
'what Thai recipes do you have', 'Japanese cooking'
'explore Indian cuisine', 'Korean food options'
SUPPORTED CUISINES: Vietnamese, Italian, Mexican, Thai, Indian, Japanese, Korean, French, Chinese, Mediterranean, American, Greek, Spanish, Middle Eastern, Ethiopian, Caribbean, and more.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return (1-50, default 10) | |
| cuisine | Yes | Cuisine type to filter by (e.g. 'vietnamese', 'italian', 'mexican', 'thai', 'indian', 'japanese', 'korean', 'french', 'chinese', 'mediterranean') | |
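The SUPPORTED CUISINES list uses display casing ('Vietnamese', 'Middle Eastern') while the schema examples are lowercase ('vietnamese'). A minimal normalization sketch, assuming lowercase values and hyphenation for multi-word cuisines (the hyphenation is an assumption; the schema only shows single-word examples):

```python
def normalize_cuisine(name: str) -> str:
    """Normalize a display-style cuisine name to the lowercase form
    shown in the schema examples (e.g. 'Vietnamese' -> 'vietnamese').

    Hyphenating multi-word cuisines ('Middle Eastern' ->
    'middle-eastern') is an assumption, not confirmed by the schema.
    """
    return name.strip().lower().replace(" ", "-")

print(normalize_cuisine("Vietnamese"))      # vietnamese
print(normalize_cuisine("Middle Eastern"))  # middle-eastern
```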
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish read-only, non-destructive, idempotent behavior. The description adds valuable context that recipes are 'public' (not user-specific) and provides usage patterns via the TRIGGERS section, enriching the agent's understanding beyond the structured hints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear section headers (TRIGGERS, SUPPORTED CUISINES) and front-loaded purpose statement. Slightly verbose due to extensive example lists, but every section serves a distinct purpose for agent comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a simple 2-parameter tool with rich annotations. Covers data scope ('public recipes'), usage patterns, and supported values. Lacks only minor details like error handling for unsupported cuisines or pagination behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds value by listing 17+ supported cuisines in the SUPPORTED CUISINES section, extending beyond the schema's examples and reinforcing valid inputs for the 'cuisine' parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb-resource combination ('Browse recipes filtered by world cuisine type') and scope ('public recipes from a specific culinary tradition'). Effectively distinguishes from siblings like 'recipes_search' by emphasizing cuisine-based filtering, though it doesn't explicitly name alternative tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The TRIGGERS section provides clear contextual examples of when to invoke ('show me Vietnamese recipes', 'Italian food'), helping the agent recognize user intent. However, it lacks explicit guidance on when NOT to use this tool versus siblings like 'cuisines_explore' or 'recipes_search'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recipes_scale · A · Read-only · Idempotent
Scale a recipe for different serving sizes. Returns the original ingredients with a scaling multiplier to adjust quantities. TRIGGERS:
'scale recipe X for 8 people', 'double the recipe for X'
'halve the recipe', 'adjust servings for X'
'how much for X servings', 'recipe for a crowd'
| Name | Required | Description | Default |
|---|---|---|---|
| recipeSlug | Yes | Recipe slug to scale (e.g. 'chicken-pho') | |
| targetServings | Yes | Desired number of servings (1-100) | |
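Since the tool returns the original ingredients plus a scaling multiplier rather than pre-scaled quantities, the client applies the multiplier itself. A sketch of that arithmetic, assuming the multiplier is simply targetServings divided by the recipe's original serving count:

```python
def scaling_multiplier(original_servings: int, target_servings: int) -> float:
    """Multiplier the client applies to each ingredient quantity.

    Assumes the multiplier returned by recipes_scale is
    target / original; the 1-100 bound mirrors the schema.
    """
    if not 1 <= target_servings <= 100:
        raise ValueError("targetServings must be between 1 and 100")
    return target_servings / original_servings

# e.g. a 4-serving recipe scaled for 8 people
multiplier = scaling_multiplier(4, 8)
scaled_quantity = 250 * multiplier  # 250 g of an ingredient becomes 500 g
```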
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare readOnlyHint=true and idempotentHint=true, the description adds valuable behavioral context: 'Returns the original ingredients with a scaling multiplier to adjust quantities.' This clarifies the return format since no output schema is provided, without contradicting the safety annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately front-loaded with the core purpose ('Scale a recipe...'), followed by return value details, then the TRIGGERS section. Every sentence earns its place; the trigger examples are efficient and actionable without excessive verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 parameters, no nested objects) and rich annotations, the description is sufficiently complete. It compensates for the missing output schema by describing the return value ('original ingredients with a scaling multiplier'), providing enough context for an agent to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (both recipeSlug and targetServings have complete descriptions with examples and ranges). The description does not add semantic details beyond what the schema already provides, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb+resource combination ('Scale a recipe') that clearly distinguishes it from siblings like 'recipes_get' or 'recipes_search'. It further specifies the scope ('for different serving sizes') and return behavior, making the exact functionality unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The 'TRIGGERS:' section provides explicit example utterances ('scale recipe X for 8 people', 'halve the recipe', etc.) that clearly signal when to invoke this tool. However, it lacks explicit 'when-not-to-use' guidance or named alternatives (e.g., distinguishing from simply retrieving a recipe without scaling).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recipes_search · A · Read-only · Idempotent
Search and filter recipes with advanced options. Handles keyword, ingredient, cuisine, meal type, and dietary filters in ONE call. TRIGGERS:
SEARCH: 'find recipes with X', 'search for X', 'recipes with X'
INGREDIENT: 'what can I make with X', 'recipes using X'
DIETARY: 'find vegan X', 'gluten-free recipes', 'keto meals'
MEAL TYPE: 'breakfast ideas', 'dinner recipes', 'lunch options'
QUICK: 'easy recipes', 'quick meals', '30 minute recipes'
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return (1-50, default 10) | |
| query | Yes | Search query — keyword, ingredient, dish name, cuisine, or dietary preference (e.g. 'chicken pho', 'vegan tacos', 'gluten-free dessert', '30 minute dinner') | |
| mealType | No | Filter by meal type: breakfast, lunch, dinner, snack, dessert, appetizer, side, or drink | |
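A combined invocation of these parameters can be sketched as a standard MCP tools/call request. The tool and argument names come from the table above; the JSON-RPC envelope is the usual MCP shape, and the argument values are illustrative:

```python
import json

# Hypothetical tools/call request for recipes_search: a single call
# covers keyword, dietary, and meal-type filtering via the query
# string plus the optional mealType and limit parameters.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "recipes_search",
        "arguments": {
            "query": "vegan tacos",
            "mealType": "dinner",
            "limit": 5,
        },
    },
}
print(json.dumps(request, indent=2))
```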
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description is consistent with annotations (read-only, non-destructive) and adds the context that this tool handles multiple filter types 'in ONE call.' However, it does not disclose additional behavioral traits like result ordering, pagination behavior beyond the limit parameter, or what constitutes a match (exact vs. fuzzy).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with a clear purpose statement followed by a categorized TRIGGERS section. It is appropriately front-loaded, though the extensive trigger examples (5 bullet points) are slightly verbose relative to the information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description adequately covers input capabilities but fails to describe what data is returned (e.g., recipe objects with titles, IDs, images). Additionally, in a crowded tool ecosystem with 5+ specialized recipe siblings, the lack of explicit selection guidance leaves a notable gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% description coverage, the description adds valuable semantic context that the `query` parameter handles multiple semantic categories (keyword, ingredient, cuisine, dietary preference) within a single string, which helps the agent understand how to construct queries despite the lack of separate filter parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches and filters recipes using keywords, ingredients, cuisines, meal types, and dietary preferences. It specifies the comprehensive scope ('in ONE call'), but lacks explicit differentiation from specialized siblings like `recipes_find_by_ingredient` or `recipes_get_quick` despite significant functional overlap.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The TRIGGERS section provides explicit invocation patterns (SEARCH, INGREDIENT, DIETARY, etc.) that clearly signal when to use this tool based on user intent. However, it omits guidance on when to prefer specialized alternatives (e.g., `recipes_find_by_ingredient` for pure ingredient searches) or when this general tool is preferable.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recipes_seasonal · C · Read-only · Idempotent
Find recipes perfect for a specific season or month. Returns seasonal dishes with fresh, in-season ingredients. TRIGGERS:
'what's good to cook in summer', 'fall recipes', 'winter comfort food'
'seasonal dishes for March', 'spring recipes'
'holiday recipes', 'Thanksgiving ideas', 'Christmas dinner'
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of recipes to return | |
| season | Yes | Season to find recipes for | |
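Because the season parameter accepts only four enum values, month- or holiday-style queries ('March', 'Thanksgiving') must be mapped to a season client-side. A minimal sketch of that mapping, assuming meteorological northern-hemisphere seasons:

```python
# Month-to-season lookup (assumption: meteorological seasons,
# northern hemisphere). The agent maps the user's month or holiday
# to one of the four values the schema accepts.
SEASON_BY_MONTH = {
    12: "winter", 1: "winter", 2: "winter",
    3: "spring", 4: "spring", 5: "spring",
    6: "summer", 7: "summer", 8: "summer",
    9: "fall", 10: "fall", 11: "fall",
}

def season_for_month(month: int) -> str:
    return SEASON_BY_MONTH[month]

print(season_for_month(3))   # spring
print(season_for_month(11))  # fall, e.g. for a Thanksgiving query
```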
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare the operation as read-only, non-destructive, and idempotent. The description adds that it 'Returns seasonal dishes with fresh, in-season ingredients,' which provides context about the result set not found in annotations. It doesn't disclose rate limits or caching behavior, but this is acceptable given the safety annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately front-loaded with the core purpose, followed by behavioral context, and structured TRIGGERS examples. While the triggers section is slightly verbose, each sentence serves a purpose in illustrating usage patterns.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 2-parameter tool with full schema coverage, the description should accurately reflect parameter constraints. It fails by implying broader temporal granularity (months/holidays) than the schema supports, leaving agents potentially confused about how to map user queries to the strict seasonal enum.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema description coverage (baseline 3), the description loses points for implying the 'season' parameter accepts month values ('specific season or month'), when the schema strictly limits it to four seasonal enums. This creates semantic confusion about what values are valid for the required parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool finds recipes for a 'specific season or month,' but the input schema only accepts four enum values (spring, summer, fall, winter). The mention of 'month' creates a scope mismatch, as the tool cannot actually handle month-specific queries like 'March' mentioned in the triggers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the TRIGGERS section provides example user queries, it misleadingly suggests capabilities the tool doesn't support (month-specific and holiday-specific queries like 'March' or 'Thanksgiving'). It fails to clarify that users must map months/holidays to seasons manually, and doesn't mention alternatives like `recipes_search` for non-seasonal queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shopping_generate_grocery_list · A · Read-only · Idempotent
Generate a grocery shopping list from recipe ingredients. Collects ingredients across multiple recipes and provides an Instacart delivery link for same-day ordering. TRIGGERS:
'make a grocery list for X', 'shopping list for X recipe'
'what do I need to buy for X', 'ingredients I need to buy'
'generate shopping list', 'create a grocery list from these recipes'
CATEGORIES: produce, dairy, meat, seafood, deli, bakery, frozen, pantry, snacks, beverages, household, other
| Name | Required | Description | Default |
|---|---|---|---|
| recipeSlugs | Yes | Array of recipe slugs to generate the grocery list from (e.g. ['chicken-pho', 'banh-mi-sandwich']) | |
| servingsMultiplier | No | Multiply all ingredient quantities by this factor (e.g. 2 for double portions, 0.5 for half) | |
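The aggregation the tool describes ('collects ingredients across multiple recipes', then applies servingsMultiplier) can be sketched as follows. The ingredient names, units, and the order of operations (sum first, then multiply) are assumptions for illustration:

```python
from collections import defaultdict

def combine_ingredients(recipes: dict[str, dict[str, float]],
                        servings_multiplier: float = 1.0) -> dict[str, float]:
    """Sketch of server-side aggregation: sum each ingredient's
    quantity across recipes, then scale by servingsMultiplier."""
    totals: dict[str, float] = defaultdict(float)
    for ingredients in recipes.values():
        for name, qty in ingredients.items():
            totals[name] += qty
    return {name: qty * servings_multiplier for name, qty in totals.items()}

# Two recipe slugs, as in the recipeSlugs example; shared ingredients
# (scallions) combine across both recipes before doubling.
groceries = combine_ingredients(
    {
        "chicken-pho": {"rice noodles (g)": 200, "scallions (bunch)": 1},
        "banh-mi-sandwich": {"baguette (pc)": 1, "scallions (bunch)": 0.5},
    },
    servings_multiplier=2,
)
```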
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond the annotations (readOnly, idempotent, non-destructive), the description adds valuable behavioral context: it discloses the aggregation logic ('Collects ingredients across multiple recipes'), external integration ('provides an Instacart delivery link'), temporal characteristic ('same-day ordering'), and output organization (CATEGORIES list).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with the core purpose front-loaded in the first sentence. The TRIGGERS and CATEGORIES sections, while adding length, are structured metadata that earn their place by aiding invocation recognition and output comprehension. No wasted sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description adequately covers output characteristics by listing the CATEGORIES (organization structure) and mentioning the Instacart delivery link. Combined with annotations covering safety/idempotency, the description provides sufficient context for a 2-parameter tool with moderate complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents both recipeSlugs and servingsMultiplier including examples. The description implies multi-recipe support ('across multiple recipes') but doesn't add syntax or format details beyond the schema. Baseline 3 is appropriate given the schema carries the semantic burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a specific verb ('Generate'), resource ('grocery shopping list'), and source ('from recipe ingredients'). It clearly distinguishes from siblings like 'shopping_instacart' by specifying this tool handles the ingredient aggregation and list creation, while also mentioning the Instacart link as an output feature.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The TRIGGERS section provides explicit natural language patterns indicating when to invoke the tool ('make a grocery list for X', 'shopping list for X recipe'). While it lacks explicit 'when not to use' exclusions or named alternatives, the trigger examples effectively establish the usage context for recipe-to-shopping workflows.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shopping_instacart · A · Read-only · Idempotent
Get Instacart delivery information for recipe ingredients. Returns a direct link to order groceries from a recipe through Instacart for same-day delivery. TRIGGERS:
'order ingredients for X', 'buy groceries for X recipe'
'Instacart delivery for X', 'shop for X on Instacart'
'deliver ingredients', 'get groceries delivered', 'order from Instacart'
| Name | Required | Description | Default |
|---|---|---|---|
| postalCode | No | Postal/ZIP code for delivery area and store availability (e.g. '98101', '10001') | |
| recipeSlug | Yes | Recipe slug to shop ingredients for (e.g. 'chicken-pho', 'pad-thai') | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description complements the readOnlyHint=true annotation by clarifying that the tool returns a 'direct link' rather than actually placing an order or processing payment. It adds valuable context about the 'same-day delivery' service characteristic and confirms the output is a URL/link, not a confirmation of purchase, which aligns with the non-destructive, idempotent annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by the return value specification. The TRIGGERS section is structurally distinct and useful, though the formatting with bullet dashes inside the description string is slightly unconventional. No sentences are wasted; each provides distinct value about function, output, or activation patterns.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with only two parameters (both well-documented in schema) and no output schema, the description adequately explains what the tool returns (a direct link), the service context (Instacart same-day delivery), and when to use it (TRIGGERS). It does not need to explain return values in detail since the description covers the link nature sufficiently.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the parameters are already well-documented in the schema (recipeSlug and postalCode). The description mentions 'recipe ingredients' which implicitly maps to the recipeSlug parameter, but does not add significant semantic detail, syntax guidance, or examples beyond what the structured schema already provides, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Get', 'Returns') and clearly identifies the resource as 'Instacart delivery information' and 'direct link to order groceries'. It distinguishes itself from sibling tools like 'shopping_generate_grocery_list' and recipe tools by explicitly mentioning the external shopping service, same-day delivery, and the transactional nature of the link generated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The TRIGGERS section provides explicit example phrases ('order ingredients for X', 'buy groceries for X recipe') that indicate when to invoke this tool. While it does not explicitly name alternatives (e.g., 'use shopping_generate_grocery_list instead for printable lists'), the triggers clearly delineate the specific user intents around purchasing and delivery that should activate this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
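Before publishing, the file can be sanity-checked locally. A minimal sketch that verifies the shape shown above; any further validation Glama performs against the schema is not documented here:

```python
import json

def check_glama_json(raw: str) -> bool:
    """Check that a glama.json payload parses and that every
    maintainer entry carries an email field, matching the example
    structure above."""
    doc = json.loads(raw)
    maintainers = doc.get("maintainers", [])
    return bool(maintainers) and all("email" in m for m in maintainers)

sample = ('{"$schema": "https://glama.ai/mcp/schemas/connector.json", '
          '"maintainers": [{"email": "your-email@example.com"}]}')
print(check_glama_json(sample))  # True
```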
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.