Healthy Aging Atlas
Server Details
Evidence-ranked supplement data: search, compare, price history, and goal-based recommendations. No API key required.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: RustamIsmail/healthyagingatlas-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score of 4/5 across all 5 tools.
Each tool has a clearly distinct purpose with no overlap: compare_supplements for head-to-head comparisons, get_price_history for pricing trends, get_product for detailed product data, recommend_for_goal for goal-based recommendations, and search_supplements for general searching. An agent can easily distinguish between these functions without confusion.
All tool names follow a consistent verb_noun pattern with clear, descriptive verbs (compare, get, recommend, search) and specific nouns (supplements, price_history, product, goal). There are no deviations in style or convention, making the set highly predictable and readable.
With 5 tools, this server is well-scoped for its supplement and health goal domain. Each tool serves a distinct and necessary function, from searching and retrieving data to comparisons and recommendations, without being overly sparse or bloated.
The tool set provides complete coverage for the supplement domain: search_supplements and get_product for discovery and details, compare_supplements for evaluation, recommend_for_goal for personalized advice, and get_price_history for purchasing insights. There are no obvious gaps, enabling agents to handle typical workflows end-to-end.
Available Tools
5 tools

compare_supplements — Compare Supplements (Read-only, Idempotent)
Head-to-head comparison of two supplements. Returns key differences, evidence quality, best use cases, safety notes, and a verdict with affiliate purchase links.
| Name | Required | Description | Default |
|---|---|---|---|
| supplement_a | Yes | First supplement slug (e.g. "magnesium-glycinate", "nmn", "fish-oil") | |
| supplement_b | Yes | Second supplement slug (e.g. "magnesium-citrate", "nr", "krill-oil") | |
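A sketch of how an agent might invoke this tool via MCP's JSON-RPC `tools/call` method (the slug values are taken from the parameter examples above; the request envelope follows the MCP specification, and an MCP client library normally constructs it for you):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "compare_supplements",
    "arguments": {
      "supplement_a": "magnesium-glycinate",
      "supplement_b": "magnesium-citrate"
    }
  }
}
```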
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, idempotent, and closed-world behavior. The description adds valuable context beyond this: it discloses that the tool returns 'affiliate purchase links' (commercial aspect), 'safety notes' (risk information), and 'evidence quality' (methodological transparency), which aren't covered by annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first clause, followed by a concise list of return components. Every sentence earns its place by specifying output details without redundancy. It's appropriately sized for a tool with clear functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (comparison with multiple output aspects), no output schema, and rich annotations, the description is mostly complete. It details the return components (differences, evidence, use cases, safety, verdict, links), but could benefit from mentioning format or limitations (e.g., supplement database scope).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, fully documenting both required parameters with examples. The description doesn't add any parameter-specific information beyond what the schema provides (e.g. no additional constraints or usage tips for the slugs). Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('head-to-head comparison') and resource ('two supplements'), distinguishing it from siblings like get_product (single product info) or search_supplements (multi-product search). It specifies the comparative nature with 'key differences' and 'verdict' outputs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by stating it's for 'head-to-head comparison' of two supplements, suggesting when to use it (for direct comparison). However, it doesn't explicitly state when not to use it or name alternatives like get_product for single supplement details or recommend_for_goal for goal-based recommendations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_price_history — Get Price History (Read-only, Idempotent)
Retrieve price history and freshness data for a supplement product. Shows current price, historical observations, and price trend.
| Name | Required | Description | Default |
|---|---|---|---|
| product_id | Yes | Product ID or partial product name to look up price history for | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare this as a read-only, non-destructive, idempotent, and closed-world operation. The description adds valuable context beyond annotations by specifying what data is retrieved ('current price, historical observations, and price trend') and the resource type ('supplement product'), which helps the agent understand the scope and output structure. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence and efficiently lists key data points in the second. Every sentence adds value without redundancy, making it appropriately sized and easy to parse for an AI agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter), rich annotations covering safety and behavior, and no output schema, the description is reasonably complete. It specifies the data returned and resource type, though it could benefit from mentioning limitations (e.g., date ranges for history) or output format details to achieve a perfect score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'product_id' fully documented in the schema. The description doesn't add any parameter-specific details beyond what the schema provides (e.g., no examples of product IDs or clarification on 'partial product name'). Baseline score of 3 is appropriate since the schema carries the full burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Retrieve', 'Shows') and resources ('price history and freshness data for a supplement product'). It distinguishes from siblings by focusing on price history rather than comparison, product details, recommendations, or search. However, it doesn't explicitly differentiate from 'get_product' which might also include price data, making it a 4 rather than a 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when price history is needed for a supplement product, but provides no explicit guidance on when to use this tool versus alternatives like 'get_product' (which might include current price) or 'compare_supplements' (which could involve price comparisons). There's no mention of prerequisites or exclusions, leaving usage context somewhat vague.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_product — Get Product Details (Read-only, Idempotent)
Retrieve detailed data for a specific supplement product: full ingredient profile, trust score, third-party certifications, pricing, and evidence summary.
| Name | Required | Description | Default |
|---|---|---|---|
| product_id | Yes | Product ID (brand-name slug) or partial product name | |
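Because `product_id` accepts either an exact brand-name slug or a partial name, a lookup can be as loose as the following sketch (the partial name "magnesium" is a hypothetical illustration, not a confirmed product in this database):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_product",
    "arguments": { "product_id": "magnesium" }
  }
}
```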
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide key behavioral hints (read-only, non-destructive, idempotent, closed-world), so the bar is lower. The description adds valuable context by specifying the types of data returned (e.g., trust score, certifications, evidence summary), which helps the agent understand the output structure and richness beyond what annotations cover, though it doesn't detail rate limits or auth needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently lists all key data fields without redundancy. It is front-loaded with the main action and resource, and every element (ingredient profile, trust score, etc.) adds specific value, making it highly concise and zero-waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, no output schema), rich annotations, and high schema coverage, the description is largely complete. It provides clear purpose and output details, though it could slightly improve by mentioning when to use versus siblings. The absence of an output schema is mitigated by the detailed description of return data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'product_id' fully documented in the schema. The description does not add any additional meaning or syntax details beyond what the schema provides (e.g., it doesn't clarify 'brand-name slug' further). Baseline 3 is appropriate as the schema carries the full burden of parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Retrieve detailed data') and resource ('specific supplement product'), listing concrete data fields like ingredient profile, trust score, certifications, pricing, and evidence summary. It distinguishes from siblings like 'search_supplements' (which likely returns multiple products) and 'compare_supplements' (which involves multiple products), making the purpose highly specific and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving detailed data on a single product, but does not explicitly state when to use this tool versus alternatives like 'search_supplements' (for broader searches) or 'get_price_history' (for historical data). It provides context (e.g., 'specific supplement product') but lacks explicit guidance on exclusions or named alternatives, leaving some ambiguity for the agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recommend_for_goal — Recommend for Goal (Read-only, Idempotent)
Get evidence-ranked supplement recommendations for a specific health goal, with optional budget and demographic filters. Returns the top products with a "why" explanation.
| Name | Required | Description | Default |
|---|---|---|---|
| goal | Yes | Health goal slug (e.g. "sleep", "heart-health", "brain-health", "anti-aging", "muscle-recovery") | |
| limit | No | Number of recommendations (1–10) | |
| budget_usd | No | Maximum monthly cost in USD — filters out higher-priced products | |
| demographic | No | Score context | general |
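Combining the required goal with the optional filters, a call might look like this sketch (the `budget_usd` value of 30 is a hypothetical illustration; `goal` uses a slug from the schema examples, and `limit` stays within the documented 1–10 range):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "recommend_for_goal",
    "arguments": {
      "goal": "sleep",
      "limit": 5,
      "budget_usd": 30,
      "demographic": "general"
    }
  }
}
```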
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a read-only, non-destructive, idempotent operation with a closed-world scope, covering safety and reliability. The description adds valuable context by specifying that recommendations are 'evidence-ranked' and include a 'why' explanation, which clarifies the ranking methodology and output format beyond what annotations provide, though it doesn't detail rate limits or authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that front-loads the core purpose ('Get evidence-ranked supplement recommendations for a specific health goal') and efficiently adds key details about filters and output. Every word earns its place with no redundancy or waste, making it highly concise and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (4 parameters, 1 required), rich annotations (covering read-only, non-destructive, etc.), and 100% schema coverage, the description is largely complete. It adds context on evidence-ranking and output explanation, but without an output schema, it could benefit from more detail on the return structure (e.g., product fields). However, it adequately supports agent usage for the core functionality.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters (goal, limit, budget_usd, demographic). The description adds minimal semantics by mentioning 'optional budget and demographic filters' and the output format, but it doesn't provide additional meaning beyond the schema's detailed descriptions (e.g., goal slug examples, limit range, demographic enum values). This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get evidence-ranked supplement recommendations') and resource ('for a specific health goal'), distinguishing it from siblings like compare_supplements or search_supplements by focusing on goal-based ranking rather than comparison or general search. It explicitly mentions the output format ('top products with a "why" explanation'), which further clarifies its unique purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying 'for a specific health goal' and mentions optional filters, but it does not explicitly state when to use this tool versus alternatives like compare_supplements or search_supplements. No guidance is provided on prerequisites, exclusions, or comparative contexts, leaving the agent to infer usage from the purpose alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_supplements — Search Supplements (Read-only, Idempotent)
Search for supplements by name, ingredient, or health benefit. Returns ranked products with trust scores, prices, and affiliate-tagged purchase links.
| Name | Required | Description | Default |
|---|---|---|---|
| goal | No | Filter by health goal slug (e.g. "sleep", "heart-health", "anti-aging") | |
| limit | No | Maximum results (1–20) | |
| query | Yes | Search term — supplement name, brand, or ingredient (e.g. "magnesium", "NMN", "Life Extension") | |
| demographic | No | Score context | general |
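A sketch of a filtered search combining the required query with optional parameters (the query and goal values are taken from the schema examples above; `limit` must stay within the documented 1–20 range):

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "search_supplements",
    "arguments": {
      "query": "magnesium",
      "goal": "sleep",
      "limit": 10
    }
  }
}
```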
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare this as read-only, non-destructive, idempotent, and closed-world, which the description doesn't contradict. The description adds valuable behavioral context beyond annotations by specifying that results are 'ranked' with 'trust scores, prices, and affiliate-tagged purchase links,' which helps the agent understand the return format and commercial aspects not covered by annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently communicates purpose, search dimensions, and return format. Every element earns its place with no wasted words, making it easy to parse and front-loaded with key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a search tool with rich annotations and full schema coverage but no output schema, the description provides good context on purpose, usage, and return format. However, it could be more complete by explicitly mentioning the ranking algorithm or result limitations, though the annotations and schema cover most safety and parameter aspects adequately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents all 4 parameters. The description mentions search criteria ('name, ingredient, or health benefit') which aligns with the 'query' parameter but doesn't add significant meaning beyond what the schema provides. The baseline of 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Search for supplements') and resource ('supplements'), specifies search criteria ('by name, ingredient, or health benefit'), and distinguishes from siblings by focusing on broad search rather than comparison, price history, single product retrieval, or goal-based recommendation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for general supplement discovery with multiple search dimensions, but doesn't explicitly state when to choose this over alternatives like 'recommend_for_goal' for targeted recommendations or 'get_product' for specific product details. The context is clear but lacks explicit exclusions or named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a `/.well-known/glama.json` file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is marked unhealthy when Glama is unable to connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.