
Server Details

Recipes MCP — wraps TheMealDB API (free tier, no auth)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-recipes
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.2/5 across 4 of 4 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: get_meal retrieves a specific recipe by ID, meals_by_ingredient finds meals containing an ingredient, random_meal fetches a random recipe, and search_meals searches by meal name. There is no overlap or ambiguity in functionality.

Naming Consistency: 4/5

Tool names follow a consistent verb_noun pattern (get_meal, random_meal, search_meals) with one minor deviation: meals_by_ingredient uses a prepositional phrase instead of a verb, but it remains readable and understandable.

Tool Count: 5/5

Four tools are well-scoped for a recipe server, covering key operations like retrieval, search, and discovery without being too sparse or overwhelming. Each tool serves a clear and necessary function.

Completeness: 3/5

The toolset covers basic retrieval and search operations well, but there are notable gaps: no create, update, or delete tools for managing recipes, which limits the server to read-only functionality. This may be intentional for a public API, but it restricts agent workflows.

Available Tools

4 tools
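The four tools below can be invoked through MCP's standard tools/call method. As a sketch (the request shape follows the MCP JSON-RPC convention; the argument value is the example ID from get_meal's own parameter documentation):

```python
import json

# Hypothetical MCP tools/call request for the get_meal tool.
# The envelope follows MCP's JSON-RPC 2.0 convention; "52772" is
# the example meal ID given in the tool's parameter docs.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_meal",
        "arguments": {"id": "52772"},
    },
}

# Serialize for transport over Streamable HTTP.
payload = json.dumps(request)
print(payload)
```

The same shape applies to the other three tools, with `name` and `arguments` swapped accordingly.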
get_meal: A

Get the full recipe for a meal by its TheMealDB ID, including ingredients and instructions.

Parameters (JSON Schema)
id (required): TheMealDB meal ID (e.g., "52772")
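Since the server wraps TheMealDB's free tier, get_meal presumably maps onto the public lookup endpoint. A minimal sketch of the underlying request URL, assuming TheMealDB's published test key "1" (the actual server may use a different key or base URL):

```python
from urllib.parse import urlencode

# Assumed upstream endpoint for get_meal: TheMealDB's lookup API.
# "1" is TheMealDB's documented test API key.
BASE = "https://www.themealdb.com/api/json/v1/1"

def lookup_url(meal_id: str) -> str:
    """Build the lookup URL for a meal ID such as '52772'."""
    return f"{BASE}/lookup.php?{urlencode({'i': meal_id})}"

print(lookup_url("52772"))
# https://www.themealdb.com/api/json/v1/1/lookup.php?i=52772
```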
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states it's a read operation ('Get') but doesn't disclose behavioral traits like error handling (e.g., invalid ID), rate limits, authentication needs, or response format. For a tool with zero annotation coverage, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('Get the full recipe') and includes essential details (resource, ID, included data). Every word earns its place with zero waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no nested objects) and high schema coverage, the description is adequate but incomplete. It lacks output details (no output schema) and behavioral context (no annotations), leaving gaps in understanding how to interpret results or handle errors.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single parameter 'id' with its type and example. The description adds no additional parameter semantics beyond what the schema provides, such as format constraints or validation rules. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get the full recipe'), target resource ('a meal by its TheMealDB ID'), and scope ('including ingredients and instructions'). It distinguishes from siblings like 'meals_by_ingredient' (filtering), 'random_meal' (no ID), and 'search_meals' (query-based).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when you have a specific meal ID and need complete recipe details, distinguishing it from siblings that don't require an ID or provide full recipes. However, it doesn't explicitly state when NOT to use it or name alternatives like 'search_meals' for when you don't have an ID.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

meals_by_ingredient: C

Find meals that use a specific ingredient (e.g., "chicken", "garlic", "pasta").

Parameters (JSON Schema)
ingredient (required): Ingredient name to filter by
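By the same assumption, meals_by_ingredient presumably maps onto TheMealDB's filter-by-ingredient endpoint. A sketch of the URL construction (key "1" is the public test key; the wrapper's actual behavior may differ):

```python
from urllib.parse import urlencode

# Assumed upstream endpoint for meals_by_ingredient: TheMealDB's
# filter API. Note the review's point that the output shape is
# undocumented; TheMealDB's filter endpoint returns abbreviated
# meal records rather than full recipes.
BASE = "https://www.themealdb.com/api/json/v1/1"

def filter_by_ingredient_url(ingredient: str) -> str:
    """Build the filter URL for an ingredient such as 'chicken'."""
    return f"{BASE}/filter.php?{urlencode({'i': ingredient})}"

print(filter_by_ingredient_url("chicken"))
# https://www.themealdb.com/api/json/v1/1/filter.php?i=chicken
```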
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool finds meals by ingredient but doesn't describe what the output looks like (e.g., list format, pagination), whether it's a read-only operation, performance characteristics, or error conditions. This leaves significant gaps for an agent to understand how to use it effectively.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's function with helpful examples. There's no wasted verbiage, and it's appropriately front-loaded with the core purpose. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete. It doesn't explain what the tool returns (e.g., meal names, IDs, full details), how results are structured, or any limitations. For a search/filter tool with no structured output documentation, this leaves too much undefined for reliable agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single 'ingredient' parameter. The description adds minimal value beyond the schema by providing examples of ingredient names, but doesn't explain format constraints, case sensitivity, or how partial matches work. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Find meals that use a specific ingredient.' It includes a specific verb ('find') and resource ('meals'), and provides helpful examples ('chicken', 'garlic', 'pasta'). However, it doesn't explicitly differentiate from sibling tools like 'search_meals' or 'get_meal', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'search_meals' or 'get_meal'. It mentions the filtering capability but doesn't explain if this is the primary way to find meals by ingredient or if other tools might serve similar purposes. No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

random_meal: B

Get a random meal recipe.

Parameters (JSON Schema)
No parameters
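random_meal presumably wraps TheMealDB's random endpoint, which takes no query parameters, matching the empty input schema. Like the other TheMealDB endpoints, it wraps its result in a {"meals": [...]} envelope; this sketch shows extracting the single recipe from a sample payload (the payload is illustrative, not a live response):

```python
import json

# Sample of the assumed response envelope from TheMealDB's
# random endpoint (.../random.php). The values here are a
# hand-written example, not fetched data.
sample_response = json.loads(
    '{"meals": [{"idMeal": "52772", "strMeal": "Teriyaki Chicken Casserole"}]}'
)

# The envelope holds a one-element list; the recipe is meals[0].
meal = sample_response["meals"][0]
print(meal["strMeal"])
```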

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool 'gets' a random meal recipe, implying a read-only operation, but doesn't cover aspects like rate limits, error conditions, or what 'random' entails (e.g., selection criteria). This leaves significant gaps in understanding how the tool behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without any wasted words. It's front-loaded and appropriately sized for a simple tool with no parameters.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema), the description is minimally adequate but lacks details on output format or behavioral traits. Without annotations, it should ideally mention what the return value includes (e.g., recipe details) to be more complete for agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description adds no parameter information, which is appropriate here, though it could briefly note the lack of inputs for clarity. The baseline is 4 for tools with 0 parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('random meal recipe'), making it easy to understand what it does. However, it doesn't explicitly differentiate from sibling tools like 'get_meal' or 'search_meals', which might also retrieve meal recipes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_meal' or 'search_meals'. It lacks context about scenarios where a random meal is preferred over a specific or filtered search, leaving the agent to infer usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_meals: C

Search for recipes by meal name. Returns a list of matching meals.

Parameters (JSON Schema)
query (required): Meal name or partial name to search for
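search_meals presumably maps onto TheMealDB's search-by-name endpoint, which accepts partial names, consistent with the 'query' parameter's description. A sketch of the URL construction under that assumption:

```python
from urllib.parse import urlencode

# Assumed upstream endpoint for search_meals: TheMealDB's
# search-by-name API. The 's' query parameter takes a full or
# partial meal name.
BASE = "https://www.themealdb.com/api/json/v1/1"

def search_url(query: str) -> str:
    """Build the search URL for a meal name such as 'Arrabiata'."""
    return f"{BASE}/search.php?{urlencode({'s': query})}"

print(search_url("Arrabiata"))
# https://www.themealdb.com/api/json/v1/1/search.php?s=Arrabiata
```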
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool 'Returns a list of matching meals', which describes the output format, but lacks critical details: whether results are paginated, sorted, or limited; if authentication is required; error conditions; or performance characteristics like rate limits. For a search tool with zero annotation coverage, this leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two clear sentences that directly state the tool's function and output. No wasted words or unnecessary elaboration. However, it could be slightly more front-loaded by combining purpose and output in a single sentence for even better structure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given a simple search tool with 1 parameter, 100% schema coverage, no output schema, and no annotations, the description is minimally adequate. It covers basic purpose and output format but lacks important context about result limitations, sorting, authentication, or error handling. Without output schema, the description should ideally provide more detail about the return structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (the single 'query' parameter is fully described in the schema as 'Meal name or partial name to search for'). The description adds no additional parameter information beyond what the schema provides. According to guidelines, when schema coverage is high (>80%), the baseline is 3 even with no param info in description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Search for recipes by meal name' specifies the verb (search) and the resource (recipes/meals). It is implicitly distinguishable from siblings like 'get_meal' (retrieve a specific meal by ID) and 'meals_by_ingredient' (filter by ingredient), though the distinction is never stated outright, and it doesn't fully differentiate from 'random_meal', which also returns meals.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to prefer 'search_meals' over 'meals_by_ingredient' for ingredient-based searches, or when to use 'get_meal' for known meal IDs versus searching. No explicit when/when-not statements or alternative tool references are included.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
