recipes
Server Details
Recipes MCP — wraps TheMealDB API (free tier, no auth)
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | pipeworx-io/mcp-recipes |
| GitHub Stars | 0 |
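The transport is Streamable HTTP, which carries MCP JSON-RPC 2.0 messages. As a rough sketch, an agent invoking one of this server's tools would send a `tools/call` request like the one built below (the listing does not show the server URL, so the endpoint here is a placeholder):

```python
import json

# Placeholder endpoint: the listing does not expose the actual server URL.
SERVER_URL = "https://example.com/mcp"

def tools_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build an MCP tools/call request as a JSON-RPC 2.0 payload."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Example: request the recipe with TheMealDB ID "52772" via get_meal.
payload = tools_call("get_meal", {"id": "52772"})
print(json.dumps(payload, indent=2))
```

The same helper works for any of the four tools by swapping the `name` and `arguments` values.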
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score: 3.2/5, with all 4 of 4 tools scored.
Each tool has a clearly distinct purpose: get_meal retrieves a specific recipe by ID, meals_by_ingredient finds meals containing an ingredient, random_meal fetches a random recipe, and search_meals searches by meal name. There is no overlap or ambiguity in functionality.
Tool names follow a consistent verb_noun pattern (get_meal, random_meal, search_meals) with one minor deviation: meals_by_ingredient uses a prepositional phrase instead of a verb, but it remains readable and understandable.
Four tools are well-scoped for a recipe server, covering key operations like retrieval, search, and discovery without being too sparse or overwhelming. Each tool serves a clear and necessary function.
The toolset covers basic retrieval and search operations well, but there are notable gaps: no create, update, or delete tools for managing recipes, which limits the server to read-only functionality. This may be intentional for a public API, but it restricts agent workflows.
Available Tools
4 tools

get_meal (Grade: A)
Get the full recipe for a meal by its TheMealDB ID, including ingredients and instructions.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | TheMealDB meal ID (e.g., "52772") | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states it's a read operation ('Get') but doesn't disclose behavioral traits like error handling (e.g., invalid ID), rate limits, authentication needs, or response format. For a tool with zero annotation coverage, this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Get the full recipe') and includes essential details (resource, ID, included data). Every word earns its place with zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, no nested objects) and high schema coverage, the description is adequate but incomplete. It lacks output details (no output schema) and behavioral context (no annotations), leaving gaps in understanding how to interpret results or handle errors.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the single parameter 'id' with its type and example. The description adds no additional parameter semantics beyond what the schema provides, such as format constraints or validation rules. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get the full recipe'), target resource ('a meal by its TheMealDB ID'), and scope ('including ingredients and instructions'). It distinguishes from siblings like 'meals_by_ingredient' (filtering), 'random_meal' (no ID), and 'search_meals' (query-based).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you have a specific meal ID and need complete recipe details, distinguishing it from siblings that don't require an ID or provide full recipes. However, it doesn't explicitly state when NOT to use it or name alternatives like 'search_meals' for when you don't have an ID.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
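As the review notes, the tool publishes no output schema. Assuming the server passes through TheMealDB's record format, meal objects flatten their ingredients into numbered `strIngredient1`–`strIngredient20` and `strMeasure1`–`strMeasure20` fields, with empty strings or nulls in unused slots. A sketch of collecting them, using a trimmed, hypothetical record in that shape:

```python
# Hypothetical, trimmed meal record in TheMealDB's flattened shape.
sample_meal = {
    "idMeal": "52772",
    "strMeal": "Teriyaki Chicken Casserole",
    "strIngredient1": "soy sauce",
    "strMeasure1": "3/4 cup",
    "strIngredient2": "water",
    "strMeasure2": "1/2 cup",
    "strIngredient3": "",  # unused slots are empty strings or nulls
    "strMeasure3": "",
}

def ingredients(meal: dict) -> list:
    """Collect (ingredient, measure) pairs, skipping empty/null slots."""
    pairs = []
    for i in range(1, 21):
        ing = (meal.get(f"strIngredient{i}") or "").strip()
        if ing:
            pairs.append((ing, (meal.get(f"strMeasure{i}") or "").strip()))
    return pairs

print(ingredients(sample_meal))  # [('soy sauce', '3/4 cup'), ('water', '1/2 cup')]
```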
meals_by_ingredient (Grade: C)
Find meals that use a specific ingredient (e.g., "chicken", "garlic", "pasta").
| Name | Required | Description | Default |
|---|---|---|---|
| ingredient | Yes | Ingredient name to filter by | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool finds meals by ingredient but doesn't describe what the output looks like (e.g., list format, pagination), whether it's a read-only operation, performance characteristics, or error conditions. This leaves significant gaps for an agent to understand how to use it effectively.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's function with helpful examples. There's no wasted verbiage, and it's appropriately front-loaded with the core purpose. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description is incomplete. It doesn't explain what the tool returns (e.g., meal names, IDs, full details), how results are structured, or any limitations. For a search/filter tool with no structured output documentation, this leaves too much undefined for reliable agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the single 'ingredient' parameter. The description adds minimal value beyond the schema by providing examples of ingredient names, but doesn't explain format constraints, case sensitivity, or how partial matches work. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Find meals that use a specific ingredient.' It includes a specific verb ('find') and resource ('meals'), and provides helpful examples ('chicken', 'garlic', 'pasta'). However, it doesn't explicitly differentiate from sibling tools like 'search_meals' or 'get_meal', which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'search_meals' or 'get_meal'. It mentions the filtering capability but doesn't explain if this is the primary way to find meals by ingredient or if other tools might serve similar purposes. No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
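The undocumented output is a real gap here. Assuming the tool mirrors TheMealDB's filter endpoint, each match carries only an ID, a name, and a thumbnail, not the full recipe, so agents would typically chain this tool with get_meal. A sketch under that assumption, using a hypothetical response:

```python
# Hypothetical response in TheMealDB's filter shape: each match carries
# only idMeal, strMeal, and strMealThumb -- no ingredients or
# instructions -- so a follow-up get_meal call is needed per recipe.
response = {
    "meals": [
        {"strMeal": "Brown Stew Chicken", "strMealThumb": "https://...", "idMeal": "52940"},
        {"strMeal": "Chicken Handi", "strMealThumb": "https://...", "idMeal": "52795"},
    ]
}

# Queue full-recipe lookups for each slim match.
ids_to_fetch = [m["idMeal"] for m in (response.get("meals") or [])]
print(ids_to_fetch)  # ['52940', '52795']
```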
random_meal (Grade: B)
Get a random meal recipe.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool 'gets' a random meal recipe, implying a read-only operation, but doesn't cover aspects like rate limits, error conditions, or what 'random' entails (e.g., selection criteria). This leaves significant gaps in understanding how the tool behaves.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without any wasted words. It's front-loaded and appropriately sized for a simple tool with no parameters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema), the description is minimally adequate but lacks details on output format or behavioral traits. Without annotations, it should ideally mention what the return value includes (e.g., recipe details) to be more complete for agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description doesn't add param info, which is appropriate here, but it could briefly note the lack of inputs for clarity. Baseline is 4 for 0 parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get') and resource ('random meal recipe'), making it easy to understand what it does. However, it doesn't explicitly differentiate from sibling tools like 'get_meal' or 'search_meals', which might also retrieve meal recipes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get_meal' or 'search_meals'. It lacks context about scenarios where a random meal is preferred over a specific or filtered search, leaving the agent to infer usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_meals (Grade: C)
Search for recipes by meal name. Returns a list of matching meals.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Meal name or partial name to search for | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It states the tool 'Returns a list of matching meals' which describes output format, but lacks critical details: whether results are paginated, sorted, or limited; if authentication is required; error conditions; or performance characteristics like rate limits. For a search tool with zero annotation coverage, this leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with two clear sentences that directly state the tool's function and output. No wasted words or unnecessary elaboration. However, it could be slightly more front-loaded by combining purpose and output in a single sentence for even better structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a simple search tool with 1 parameter, 100% schema coverage, no output schema, and no annotations, the description is minimally adequate. It covers basic purpose and output format but lacks important context about result limitations, sorting, authentication, or error handling. Without output schema, the description should ideally provide more detail about the return structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (the single 'query' parameter is fully described in the schema as 'Meal name or partial name to search for'). The description adds no additional parameter information beyond what the schema provides. According to guidelines, when schema coverage is high (>80%), the baseline is 3 even with no param info in description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search for recipes by meal name' specifies the verb (search) and resource (recipes/meals). It distinguishes from siblings like 'get_meal' (retrieve specific meal) and 'meals_by_ingredient' (search by ingredient), though not explicitly. However, it doesn't fully differentiate from 'random_meal' which also returns meals.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to prefer 'search_meals' over 'meals_by_ingredient' for ingredient-based searches, or when to use 'get_meal' for known meal IDs versus searching. No explicit when/when-not statements or alternative tool references are included.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
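One behavioral detail the description omits, assuming the tool passes through TheMealDB's response format: a search with no matches returns `{"meals": null}` rather than an empty list, so result handling should guard against a null value before iterating:

```python
import json

def parse_search(raw: str) -> list:
    """Normalize a TheMealDB-style search response: the API returns
    {"meals": null} (not an empty list) when nothing matches."""
    return json.loads(raw).get("meals") or []

print(len(parse_search('{"meals": null}')))                   # 0
print(len(parse_search('{"meals": [{"idMeal": "52771"}]}')))  # 1
```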
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
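Before publishing, a quick local sanity check of the file's structure can catch typos. This only mirrors the shape shown above; Glama's own verification may check more:

```python
import json

# The claim document from above, with the placeholder email kept as-is.
doc = json.loads("""
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
""")

# Minimal pre-publish checks: schema URL, non-empty maintainer list,
# and a plausible email in every maintainer entry.
assert doc["$schema"] == "https://glama.ai/mcp/schemas/connector.json"
assert isinstance(doc["maintainers"], list) and doc["maintainers"]
assert all("@" in m.get("email", "") for m in doc["maintainers"])
print("glama.json structure looks valid")
```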
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!