Glama

Server Quality Checklist

Profile completion: 67%

A complete profile improves this server's visibility in search results.
  • Disambiguation 5/5

    Each tool targets a distinct phase of nutrition management: calculating metabolic needs (calculate_tdee), retrieving food data (lookup_nutrition), creating eating plans (generate_meal_plan), addressing specific health issues (fix_deficiency), and evaluating dietary quality (nutrition_score). No functional overlap exists between tools.

    Naming Consistency 4/5

    Four tools follow a clear verb_noun pattern (calculate_tdee, fix_deficiency, generate_meal_plan, lookup_nutrition) using snake_case. However, nutrition_score breaks the pattern by placing the noun first, creating a minor inconsistency in an otherwise uniform convention.

    Tool Count 5/5

    Five tools is an ideal scope for a focused nutrition server, covering the essential workflows (calculation, lookup, planning, deficiency correction, and evaluation) without bloat or unnecessary fragmentation.

    Completeness 4/5

    The surface covers the core nutrition lifecycle well: determining needs, sourcing foods, planning meals, correcting deficiencies, and scoring intake. A minor gap exists in exploratory discovery (e.g., searching for foods by nutrient criteria rather than by name), though fix_deficiency partially addresses this for specific nutrients.

  • Average 4.2/5 across 5 of 5 tools scored.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • Add a LICENSE file by following GitHub's guide.

    MCP servers without a LICENSE cannot be installed.

  • Latest release: v1.0.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 5 tools.
  • No known security issues or vulnerabilities reported.

    Report a security issue

  • This server has been verified by its author.

  • Add related servers to improve discoverability.

Tool Scores

  • calculate_tdee

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations already declare readOnlyHint=true, destructiveHint=false, and idempotentHint=true. The description adds context that calculations are 'personalised' based on inputs, but does not describe error handling for invalid parameter combinations, rate limits, or the specific macro calculation methodology (e.g., percentage-based vs fixed protein).

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences with zero waste. First sentence defines function and scope; second sentence provides usage triggers. Front-loaded with the most critical information (what it calculates) and avoids redundancy with schema details.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Despite lacking an output schema, the description explicitly lists the three output categories (TDEE, BMR, macro targets) and their components (protein, carbs, fat), giving the agent clear expectations of return value structure. Deducted one point as it could clarify whether results include per-meal or daily targets.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, with all 6 parameters fully documented (weight_kg, height_cm, age, gender, activity_level, goal). The description references 'user's stats and goal' generally but does not add semantic details, validation logic, or input guidance beyond the schema definitions. Baseline 3 is appropriate when schema carries full load.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Description specifies exact calculations performed (TDEE, BMR, macro targets) and the inputs required (user's stats and goal). It clearly distinguishes from siblings like 'lookup_nutrition' (food data) or 'generate_meal_plan' (meal construction) by focusing on metabolic calculations.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit when-to-use triggers ('when someone asks how many calories they should eat', 'maintenance calories', 'how to set up their macros'). Lacks explicit when-NOT-to-use or named sibling alternatives (e.g., don't use for specific food nutrition lookup).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • generate_meal_plan

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations already establish readOnly, idempotent, and non-destructive traits. The description adds value by specifying the output structure (four specific meal types), but does not disclose potential limitations, caching behavior, or detailed return format beyond the meal structure.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two efficiently structured sentences with zero waste. Front-loaded with the core action and output, followed immediately by usage conditions. Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    With 100% schema coverage and comprehensive annotations, the description adequately covers the tool's purpose. It hints at return structure via the four meal types, though it could explicitly state that results include specific food suggestions or macro breakdowns given the lack of output schema.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, with clear enums and constraints documented. The description mentions the three parameter concepts but adds no semantic details (e.g., valid ranges, default behavior) beyond what the schema already provides, meeting the baseline for high-coverage schemas.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific action ('Generate'), resource ('full day meal plan'), and detailed scope ('breakfast, lunch, snack, dinner'). Clearly distinguishes from siblings like calculate_tdee (calculation) and lookup_nutrition (lookup) by focusing on comprehensive meal generation.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit when-to-use guidance ('Use this when someone asks for a meal plan...'). However, lacks explicit differentiation from sibling calculate_tdee, which also deals with calorie goals but performs calculations rather than meal structuring.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • fix_deficiency

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations declare readOnly/idempotent status, so the description appropriately focuses on return value structure rather than safety. It adds valuable context about what the action plan contains (foods with serving sizes, supplement advice, deficiency symptoms) that is not indicated in annotations or schema.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences with zero waste: first sentence front-loads the purpose and return value details, second sentence provides usage triggers. Every clause earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the moderate complexity (3 parameters, simple types) and lack of output schema, the description adequately compensates by detailing the expected action plan components. Could mention error handling for unsupported nutrients, but the schema enum mitigates this need.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage (nutrient, gender, age all well-documented), the baseline is 3. The description implies the nutrient parameter through 'specific nutritional deficiency' but does not add syntax details or semantic clarifications beyond what the schema already provides.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb-resource combination ('Get a detailed action plan to fix a specific nutritional deficiency') and clearly distinguishes this from siblings like lookup_nutrition (general lookups) and generate_meal_plan (general meal planning) by focusing specifically on deficiency correction.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit 'Use this when' guidance with three specific query patterns (increasing a nutrient, what to eat for deficiency, causes of low levels). Lacks explicit 'when not to use' or named sibling alternatives, but the positive guidance is clear and actionable.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • lookup_nutrition

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations cover safety (readOnlyHint, destructiveHint) and idempotency, so the description appropriately focuses on content-specific behavior. It discloses what data is returned by listing specific nutrients (calories, protein, carbs, fat, fibre, key micronutrients), adding valuable context beyond the annotations. No contradictions present.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences, zero waste. First sentence front-loads the specific nutrients included in the profile. Second sentence provides usage context. Every word earns its place; no redundant fluff or repetition of schema/annotation details.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a simple lookup tool with rich annotations and complete input schema coverage, the description is adequate. It compensates for the missing output schema by listing the specific nutrient fields returned. Minor gap: doesn't mention error handling (e.g., food not found behavior).

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema fully documents both parameters (food_name with examples, amount_grams with default). The description mentions 'by name and serving size' which maps to the parameters, but doesn't add syntax details beyond the schema. Baseline 3 is appropriate when schema coverage is comprehensive.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('Look up') and resource ('nutritional profile'), explicitly listing the nutrients returned (calories, protein, carbs, fat, fibre, key micronutrients). It clearly distinguishes from siblings like calculate_tdee or generate_meal_plan by focusing on specific food lookup rather than calculations or planning.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The second sentence provides clear positive guidance: 'Use this when someone asks about the nutrition or macros in a specific food.' However, it lacks explicit negative constraints or named alternatives (e.g., it doesn't direct users to calculate_tdee for energy expenditure calculations).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • nutrition_score

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations declare read-only, non-destructive, idempotent properties. The description adds valuable output context ('breakdown by category, a letter grade, and actionable recommendations') since no output schema exists. Does not mention calculation methodology or error conditions, but covers the critical gap of return value structure.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three sentences, zero waste. Front-loaded with core functionality (calculation and range), followed by output description, then usage conditions. Every clause provides distinct information (scope, inputs, outputs, use-cases).

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a 9-parameter tool with no output schema, the description adequately compensates by outlining the three components of the return value. Good annotations cover safety profile. Minor gap: does not specify the structure/format of the 'breakdown' object or recommendation list.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage (baseline 3), the description adds semantic grouping by categorizing inputs as 'macros' (calories, protein, carbs, fat) and 'optional micronutrient data' (fiber, vegetables, water), helping agents understand parameter relationships beyond the schema's flat list.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Excellent specificity: states the exact calculation (nutrition quality score 0–100), inputs (macros, optional micronutrient data), and deliverables (breakdown, letter grade, recommendations). Clearly distinguishes from siblings like generate_meal_plan (planning) and lookup_nutrition (food lookup) by focusing on evaluating existing intake.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit positive guidance ('Use this when someone wants to rate their diet, check if they're eating well, or get feedback on a day's meals') covering three distinct use cases. Lacks explicit negative constraints ('do not use for...') or named sibling alternatives, preventing a 5.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

nutribalance-mcp MCP server

Copy to your README.md:

Score Badge

nutribalance-mcp MCP server

Copy to your README.md:

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.
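Before committing, it can help to confirm locally that your glama.json parses and lists at least one GitHub username under "maintainers" — the only fields the claim flow above relies on. The helper below is a hypothetical convenience script, not part of Glama's tooling:

```python
import json

def check_glama_json(path="glama.json"):
    """Sanity-check a glama.json before committing it (hypothetical helper)."""
    with open(path) as f:
        data = json.load(f)  # raises json.JSONDecodeError on malformed JSON
    maintainers = data.get("maintainers", [])
    assert isinstance(maintainers, list) and maintainers, "maintainers must be a non-empty list"
    assert all(isinstance(m, str) for m in maintainers), "maintainers must be GitHub usernames"
    return maintainers
```

Running it against the example file above would simply return ["your-github-username"].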

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). A tier of B or above is considered passing.
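Read literally, the formula above is a pair of weighted averages. The sketch below reproduces it using this server's per-dimension scores from the sections above; it assumes the weights are applied as a plain weighted sum with no normalization or rounding, which the actual scorer may do differently:

```python
# Dimension weights from the Tool Definition Quality description above.
DIM_WEIGHTS = {"purpose": 0.25, "usage": 0.20, "behavior": 0.20,
               "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10}

# Per-tool scores taken from the Tool Scores section of this page.
TOOLS = {
    "calculate_tdee":     {"purpose": 5, "usage": 4, "behavior": 3, "parameters": 3, "conciseness": 5, "completeness": 4},
    "generate_meal_plan": {"purpose": 5, "usage": 4, "behavior": 3, "parameters": 3, "conciseness": 5, "completeness": 4},
    "fix_deficiency":     {"purpose": 5, "usage": 4, "behavior": 4, "parameters": 3, "conciseness": 5, "completeness": 4},
    "lookup_nutrition":   {"purpose": 5, "usage": 4, "behavior": 4, "parameters": 3, "conciseness": 5, "completeness": 4},
    "nutrition_score":    {"purpose": 5, "usage": 4, "behavior": 4, "parameters": 4, "conciseness": 5, "completeness": 4},
}

# Server Coherence dimensions, weighted equally.
COHERENCE = [5, 4, 5, 4]  # disambiguation, naming, tool count, completeness

def tdqs(scores):
    """Tool Definition Quality Score: weighted average across six dimensions."""
    return sum(DIM_WEIGHTS[d] * s for d, s in scores.items())

def overall_score():
    per_tool = [tdqs(s) for s in TOOLS.values()]
    definition_quality = 0.6 * (sum(per_tool) / len(per_tool)) + 0.4 * min(per_tool)
    coherence = sum(COHERENCE) / len(COHERENCE)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score):
    for cutoff, letter in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return letter
    return "F"
```

Under these assumptions the per-tool TDQS values come out to 4.00, 4.00, 4.20, 4.20, and 4.35, giving a definition quality of 4.09, a coherence of 4.5, and an overall score of about 4.21 — tier A. Note how the 40% minimum-TDQS term means one weakly documented tool would drag the whole server down.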


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/thenutritrackerapp-creator/nutribalance-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.