Glama

Healthy Aging Atlas

Server Details

Evidence-ranked supplement data: search, compare, price history, and goal-based recommendations. No API key required.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: RustamIsmail/healthyagingatlas-mcp
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4/5 across 5 of 5 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: compare_supplements for head-to-head comparisons, get_price_history for pricing trends, get_product for detailed product data, recommend_for_goal for goal-based recommendations, and search_supplements for general searching. An agent can easily distinguish between these functions without confusion.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern with clear, descriptive verbs (compare, get, recommend, search) and specific nouns (supplements, price_history, product, goal). There are no deviations in style or convention, making the set highly predictable and readable.

Tool Count: 5/5

With 5 tools, this server is well-scoped for its supplement and health goal domain. Each tool serves a distinct and necessary function, from searching and retrieving data to comparisons and recommendations, without being overly sparse or bloated.

Completeness: 5/5

The tool set provides complete coverage for the supplement domain: search_supplements and get_product for discovery and details, compare_supplements for evaluation, recommend_for_goal for personalized advice, and get_price_history for purchasing insights. There are no obvious gaps, enabling agents to handle typical workflows end-to-end.

Available Tools

5 tools
compare_supplements (Compare Supplements): A
Read-only, Idempotent

Head-to-head comparison of two supplements. Returns key differences, evidence quality, best use cases, safety notes, and a verdict with affiliate purchase links.

Parameters (JSON Schema):
- supplement_a (required): First supplement slug (e.g. "magnesium-glycinate", "nmn", "fish-oil")
- supplement_b (required): Second supplement slug (e.g. "magnesium-citrate", "nr", "krill-oil")
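As a sketch of how an agent might invoke this tool, the following builds a minimal MCP `tools/call` request body, assuming the standard JSON-RPC 2.0 framing that MCP uses; the two slug values are illustrative, taken from the parameter examples above.

```python
import json

# Minimal MCP "tools/call" request for compare_supplements (a sketch).
# The slug values are illustrative examples, not guaranteed database entries.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "compare_supplements",
        "arguments": {
            "supplement_a": "magnesium-glycinate",
            "supplement_b": "magnesium-citrate",
        },
    },
}

print(json.dumps(request, indent=2))
```

Both parameters are required, so a conforming client should reject a call with only one slug before it reaches the server.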
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, non-destructive, idempotent, and closed-world behavior. The description adds valuable context beyond this: it discloses that the tool returns 'affiliate purchase links' (commercial aspect), 'safety notes' (risk information), and 'evidence quality' (methodological transparency), which aren't covered by annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first clause, followed by a concise list of return components. Every sentence earns its place by specifying output details without redundancy. It's appropriately sized for a tool with clear functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (comparison with multiple output aspects), no output schema, and rich annotations, the description is mostly complete. It details the return components (differences, evidence, use cases, safety, verdict, links), but could benefit from mentioning format or limitations (e.g., supplement database scope).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, fully documenting both required parameters with examples. The description doesn't add any parameter-specific information beyond what the schema provides (e.g., no additional constraints or usage tips for the slugs). Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('head-to-head comparison') and resource ('two supplements'), distinguishing it from siblings like get_product (single product info) or search_supplements (multi-product search). It specifies the comparative nature with 'key differences' and 'verdict' outputs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by stating it's for 'head-to-head comparison' of two supplements, suggesting when to use it (for direct comparison). However, it doesn't explicitly state when not to use it or name alternatives like get_product for single supplement details or recommend_for_goal for goal-based recommendations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_price_history (Get Price History): A
Read-only, Idempotent

Retrieve price history and freshness data for a supplement product. Shows current price, historical observations, and price trend.

Parameters (JSON Schema):
- product_id (required): Product ID or partial product name to look up price history for
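Since product_id also accepts a partial product name, a call can be sketched with a loose query string. This is a hypothetical request under the standard MCP JSON-RPC framing; "omega-3" is an illustrative partial name, not a known ID.

```python
import json

# Sketch of a tools/call request for get_price_history.
# product_id accepts an exact product ID or a partial product name;
# "omega-3" below is an illustrative partial match.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_price_history",
        "arguments": {"product_id": "omega-3"},
    },
}

print(json.dumps(request, indent=2))
```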
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare this as a read-only, non-destructive, idempotent, and closed-world operation. The description adds valuable context beyond annotations by specifying what data is retrieved ('current price, historical observations, and price trend') and the resource type ('supplement product'), which helps the agent understand the scope and output structure. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence and efficiently lists key data points in the second. Every sentence adds value without redundancy, making it appropriately sized and easy to parse for an AI agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter), rich annotations covering safety and behavior, and no output schema, the description is reasonably complete. It specifies the data returned and resource type, though it could benefit from mentioning limitations (e.g., date ranges for history) or output format details to achieve a perfect score.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'product_id' fully documented in the schema. The description doesn't add any parameter-specific details beyond what the schema provides (e.g., no examples of product IDs or clarification on 'partial product name'). Baseline score of 3 is appropriate since the schema carries the full burden.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Retrieve', 'Shows') and resources ('price history and freshness data for a supplement product'). It distinguishes from siblings by focusing on price history rather than comparison, product details, recommendations, or search. However, it doesn't explicitly differentiate from 'get_product' which might also include price data, making it a 4 rather than a 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when price history is needed for a supplement product, but provides no explicit guidance on when to use this tool versus alternatives like 'get_product' (which might include current price) or 'compare_supplements' (which could involve price comparisons). There's no mention of prerequisites or exclusions, leaving usage context somewhat vague.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_product (Get Product Details): A
Read-only, Idempotent

Retrieve detailed data for a specific supplement product: full ingredient profile, trust score, third-party certifications, pricing, and evidence summary.

Parameters (JSON Schema):
- product_id (required): Product ID (brand-name slug) or partial product name
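A call to this tool follows the same shape. The sketch below assumes the standard MCP JSON-RPC framing; "life-extension-magnesium" is a hypothetical brand-name slug used only to illustrate the expected format.

```python
import json

# Sketch of a tools/call request for get_product.
# The brand-name slug is hypothetical, shown only to illustrate the format.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "get_product",
        "arguments": {"product_id": "life-extension-magnesium"},
    },
}

print(json.dumps(request, indent=2))
```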
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide key behavioral hints (read-only, non-destructive, idempotent, closed-world), so the bar is lower. The description adds valuable context by specifying the types of data returned (e.g., trust score, certifications, evidence summary), which helps the agent understand the output structure and richness beyond what annotations cover, though it doesn't detail rate limits or auth needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently lists all key data fields without redundancy. It is front-loaded with the main action and resource, and every element (ingredient profile, trust score, etc.) adds specific value, making it highly concise and zero-waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema), rich annotations, and high schema coverage, the description is largely complete. It provides clear purpose and output details, though it could slightly improve by mentioning when to use versus siblings. The absence of an output schema is mitigated by the detailed description of return data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'product_id' fully documented in the schema. The description does not add any additional meaning or syntax details beyond what the schema provides (e.g., it doesn't clarify 'brand-name slug' further). Baseline 3 is appropriate as the schema carries the full burden of parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Retrieve detailed data') and resource ('specific supplement product'), listing concrete data fields like ingredient profile, trust score, certifications, pricing, and evidence summary. It distinguishes from siblings like 'search_supplements' (which likely returns multiple products) and 'compare_supplements' (which involves multiple products), making the purpose highly specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving detailed data on a single product, but does not explicitly state when to use this tool versus alternatives like 'search_supplements' (for broader searches) or 'get_price_history' (for historical data). It provides context (e.g., 'specific supplement product') but lacks explicit guidance on exclusions or named alternatives, leaving some ambiguity for the agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recommend_for_goal (Recommend for Goal): A
Read-only, Idempotent

Get evidence-ranked supplement recommendations for a specific health goal, with optional budget and demographic filters. Returns the top products with a "why" explanation.

Parameters (JSON Schema):
- goal (required): Health goal slug (e.g. "sleep", "heart-health", "brain-health", "anti-aging", "muscle-recovery")
- limit (optional): Number of recommendations (1–10)
- budget_usd (optional): Maximum monthly cost in USD; filters out higher-priced products
- demographic (optional): Score context (default: general)
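This tool takes one required and three optional parameters, so a typical call mixes a goal slug with filters. The sketch below assumes the standard MCP JSON-RPC framing; the goal slug comes from the examples above, while the limit and budget values are illustrative.

```python
import json

# Sketch of a tools/call request for recommend_for_goal.
# goal is required; limit (1-10) and budget_usd are optional filters.
# Omitting "demographic" leaves it at its default of "general".
request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {
        "name": "recommend_for_goal",
        "arguments": {
            "goal": "sleep",
            "limit": 3,         # ask for the top 3 recommendations
            "budget_usd": 30,   # exclude products above $30/month
        },
    },
}

print(json.dumps(request, indent=2))
```

The budget filter excludes products whose monthly cost exceeds the cap, so widening or dropping budget_usd is the natural retry if the result set comes back empty.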
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a read-only, non-destructive, idempotent operation with a closed-world scope, covering safety and reliability. The description adds valuable context by specifying that recommendations are 'evidence-ranked' and include a 'why' explanation, which clarifies the ranking methodology and output format beyond what annotations provide, though it doesn't detail rate limits or authentication needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that front-loads the core purpose ('Get evidence-ranked supplement recommendations for a specific health goal') and efficiently adds key details about filters and output. Every word earns its place with no redundancy or waste, making it highly concise and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, 1 required), rich annotations (covering read-only, non-destructive, etc.), and 100% schema coverage, the description is largely complete. It adds context on evidence-ranking and output explanation, but without an output schema, it could benefit from more detail on the return structure (e.g., product fields). However, it adequately supports agent usage for the core functionality.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters (goal, limit, budget_usd, demographic). The description adds minimal semantics by mentioning 'optional budget and demographic filters' and the output format, but it doesn't provide additional meaning beyond the schema's detailed descriptions (e.g., goal slug examples, limit range, demographic enum values). This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get evidence-ranked supplement recommendations') and resource ('for a specific health goal'), distinguishing it from siblings like compare_supplements or search_supplements by focusing on goal-based ranking rather than comparison or general search. It explicitly mentions the output format ('top products with a "why" explanation'), which further clarifies its unique purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by specifying 'for a specific health goal' and mentions optional filters, but it does not explicitly state when to use this tool versus alternatives like compare_supplements or search_supplements. No guidance is provided on prerequisites, exclusions, or comparative contexts, leaving the agent to infer usage from the purpose alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_supplements (Search Supplements): A
Read-only, Idempotent

Search for supplements by name, ingredient, or health benefit. Returns ranked products with trust scores, prices, and affiliate-tagged purchase links.

Parameters (JSON Schema):
- goal (optional): Filter by health goal slug (e.g. "sleep", "heart-health", "anti-aging")
- limit (optional): Maximum results (1–20)
- query (required): Search term: supplement name, brand, or ingredient (e.g. "magnesium", "NMN", "Life Extension")
- demographic (optional): Score context (default: general)
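A search can combine the required query with the optional goal filter to narrow results. The sketch below assumes the standard MCP JSON-RPC framing; the query and goal values are illustrative, taken from the parameter examples above.

```python
import json

# Sketch of a tools/call request for search_supplements.
# query is required; goal and limit narrow the ranked result set.
request = {
    "jsonrpc": "2.0",
    "id": 5,
    "method": "tools/call",
    "params": {
        "name": "search_supplements",
        "arguments": {
            "query": "magnesium",   # ingredient search term
            "goal": "sleep",        # restrict to sleep-related products
            "limit": 5,             # cap results (schema allows 1-20)
        },
    },
}

print(json.dumps(request, indent=2))
```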
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare this as read-only, non-destructive, idempotent, and closed-world, which the description doesn't contradict. The description adds valuable behavioral context beyond annotations by specifying that results are 'ranked' with 'trust scores, prices, and affiliate-tagged purchase links,' which helps the agent understand the return format and commercial aspects not covered by annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently communicates purpose, search dimensions, and return format. Every element earns its place with no wasted words, making it easy to parse and front-loaded with key information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with rich annotations and full schema coverage but no output schema, the description provides good context on purpose, usage, and return format. However, it could be more complete by explicitly mentioning the ranking algorithm or result limitations, though the annotations and schema cover most safety and parameter aspects adequately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents all 4 parameters. The description mentions search criteria ('name, ingredient, or health benefit') which aligns with the 'query' parameter but doesn't add significant meaning beyond what the schema provides. The baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Search for supplements') and resource ('supplements'), specifies search criteria ('by name, ingredient, or health benefit'), and distinguishes from siblings by focusing on broad search rather than comparison, price history, single product retrieval, or goal-based recommendation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for general supplement discovery with multiple search dimensions, but doesn't explicitly state when to choose this over alternatives like 'recommend_for_goal' for targeted recommendations or 'get_product' for specific product details. The context is clear but lacks explicit exclusions or named alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

