Glama

Server Details

Marathon fueling, pace, hydration, heat, carb-loading, and gel-comparison calculators.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4/5 across 7 of 7 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: carb-loading focuses on pre-race nutrition, fueling-plan on in-race fueling, gel-comparison on product data, heat-adjustment on temperature effects, hydration on fluid needs, pace-calculator on pace/time conversions, and race-time-predictor on performance projections. The descriptions make these distinctions explicit, eliminating any ambiguity.

Naming Consistency: 4/5

The tools follow a consistent kebab-case naming pattern with descriptive compound names (e.g., 'carb-loading', 'fueling-plan', 'gel-comparison'), making names predictable and readable. The only stylistic quibble is that the names are noun phrases rather than verb-led actions.

Tool Count: 5/5

With 7 tools, the server is well-scoped for its endurance sports fueling and planning domain. Each tool serves a specific, non-redundant function, covering key aspects like nutrition, hydration, pacing, and environmental adjustments, making the count appropriate and efficient.

Completeness: 5/5

The tool set provides comprehensive coverage for endurance sports planning, including pre-race preparation (carb-loading, race-time-predictor), in-race execution (fueling-plan, hydration, heat-adjustment), and supporting utilities (gel-comparison, pace-calculator). There are no obvious gaps, and agents can handle full workflows from planning to adjustment.

Available Tools

7 tools
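Each tool below is invoked through a standard MCP tools/call request. A sketch of the JSON-RPC payload an MCP client would send over Streamable HTTP; the argument values are illustrative, not taken from the server's schema enums:

```python
import json

# Sketch of an MCP tools/call request for the carb-loading tool.
# Argument values are illustrative; consult each tool's JSON Schema
# for the actual enums and required fields.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "carb-loading",
        "arguments": {
            "raceType": "marathon",        # assumed enum value
            "experience": "intermediate",  # assumed enum value
            "bodyWeightLbs": 160,
            "finishTimeHours": 3,
            "finishTimeMinutes": 45,
        },
    },
}
body = json.dumps(request)  # sent as the POST body to the server URL
```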
carb-loading (Carb Loading Calculator): A
Read-only · Idempotent

Calculate pre-race carb loading targets by race type, body weight, finish time, and experience level. Returns daily carb target, protocol length, total carbs, and pre-race meal size.

Parameters (JSON Schema):
- raceType (required): Race distance category.
- experience (required): Runner experience level.
- hasGIIssues (optional): Whether the runner has a sensitive stomach. Default false.
- bodyWeightLbs (required): Body weight in pounds.
- finishTimeHours (required): Expected finish time, hours portion.
- finishTimeMinutes (required): Expected finish time, minutes portion.
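The server's exact formula is not published, but standard carb-loading guidance scales daily intake with body mass (roughly 8-12 g of carbohydrate per kg per day). A minimal sketch under that assumption; the g/kg value and protocol length below are illustrative, not this tool's actual method:

```python
def carb_loading_target(body_weight_lbs: float, grams_per_kg: float = 10.0,
                        protocol_days: int = 2) -> dict:
    """Illustrative carb-loading math: daily grams scale with body mass."""
    weight_kg = body_weight_lbs / 2.20462          # pounds -> kilograms
    daily_carbs_g = round(weight_kg * grams_per_kg)
    return {
        "dailyCarbsG": daily_carbs_g,
        "protocolDays": protocol_days,
        "totalCarbsG": daily_carbs_g * protocol_days,
    }

plan = carb_loading_target(160)  # a 160 lb runner is roughly 72.6 kg
```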
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare this as read-only, non-destructive, and idempotent. The description adds valuable context about what the tool returns (daily carb target, protocol length, total carbs, pre-race meal size) which isn't covered by annotations. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. The first sentence clearly states purpose and inputs, the second specifies outputs. Every word earns its place with no redundancy or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a calculation tool with good annotations and complete schema coverage, the description provides adequate context. The lack of output schema is partially compensated by the description specifying return values. However, more detail about calculation methodology or assumptions would enhance completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all parameters thoroughly. The description mentions the parameters generally but doesn't add specific meaning beyond what's in the schema. Baseline 3 is appropriate when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Calculate pre-race carb loading targets') and the resources involved (race type, body weight, finish time, experience level). It distinguishes from sibling tools by focusing specifically on carb loading calculation rather than general fueling, hydration, pace, or race prediction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context ('pre-race carb loading targets') but doesn't explicitly state when to use this tool versus alternatives like 'fueling-plan' or 'hydration'. No specific exclusions or prerequisites are mentioned, leaving some ambiguity about tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fueling-plan (Fueling Plan): A
Read-only · Idempotent

Build a minute-by-minute race or long-run fueling plan. Returns target carbs/hour, gel count, gel timing schedule, and total carbs. This is the core FuelCenter tool.

Parameters (JSON Schema):
- hours (required): Race or run duration, hours portion.
- buffer (optional): Minutes of buffer at start and end where no gel is taken. Default 5.
- gelKey (required): Gel product key. See /api/tools/gel-comparison for full catalog.
- minutes (required): Race or run duration, minutes portion.
- drinkMix (optional): Whether the runner uses a carbohydrate drink mix. Default false.
- intensity (required): Effort level.
- targetCPH (optional): Target carbs per hour. If omitted, default is derived from guideline.
- experience (optional): Runner experience with fueling. Default "standard".
- drinkCarbsTotal (optional): Total grams of carbs from drink mix across the run. Default 0.
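The scheduling logic behind such a plan can be sketched as follows. This is a simplified illustration, not the server's implementation: it converts target carbs/hour into a whole-gel count and spaces the gels evenly inside the buffered window (drink-mix carbs and experience adjustments are omitted):

```python
def gel_schedule(duration_min: int, target_cph: float, gel_carbs_g: float,
                 buffer_min: int = 5) -> list[int]:
    """Illustrative scheduling: space gels evenly inside the buffered window."""
    total_carbs = target_cph * duration_min / 60       # grams needed overall
    n_gels = max(1, round(total_carbs / gel_carbs_g))  # whole gels to carry
    window = duration_min - 2 * buffer_min             # usable minutes
    step = window / n_gels
    return [round(buffer_min + step * (i + 1)) for i in range(n_gels)]

# 3h30 at 70 g/h with 25 g gels -> ten gels, one every 20 minutes
times = gel_schedule(210, 70, 25)
```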
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds minimal behavioral context beyond this, noting it returns specific data types but not detailing rate limits, authentication needs, or computational intensity. No contradiction with annotations exists.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence and efficiently lists outputs in the second, with no wasted words. Every sentence earns its place by clearly conveying the tool's function and scope, making it highly concise and well-structured.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (9 parameters, no output schema) and rich annotations, the description is mostly complete. It clearly states what the tool does and returns, but lacks details on output format or error handling. However, with annotations covering key behavioral traits, it provides sufficient context for basic use.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 9 parameters. The description adds no additional parameter semantics beyond implying the tool uses inputs like duration and intensity to generate outputs. With high schema coverage, the baseline score of 3 is appropriate as the description doesn't enhance parameter understanding beyond the schema.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Build a minute-by-minute race or long-run fueling plan') and resources (target carbs/hour, gel count, gel timing schedule, total carbs). It distinguishes itself from siblings by being 'the core FuelCenter tool' for creating comprehensive fueling plans, unlike comparison or adjustment tools.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for race or long-run fueling planning but provides no explicit guidance on when to use this tool versus alternatives like 'gel-comparison' or 'hydration'. It mentions being the 'core' tool, which suggests primacy, but lacks specific when/when-not instructions or named alternatives for different scenarios.

gel-comparison (Gel Comparison): A
Read-only · Idempotent

Return the FuelCenter gel catalog (SiS Beta Fuel, Maurten, Precision, GU, Tailwind, Spring, Carbs Fuel, neversecond, Huma) with carbs, price, caffeine, weight, and carbs-per-dollar. Optionally sort.

Parameters (JSON Schema):
- sortKey (optional): Column to sort by. Default "carbsPerDollar".
- sortDirection (optional): Sort direction. Default "desc".
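The carbs-per-dollar metric and the default sort can be sketched with a hypothetical catalog. Product names and numbers below are made up; only the field names and the documented defaults (sortKey "carbsPerDollar", direction "desc") come from this page:

```python
# Hypothetical catalog rows; fields mirror the description
# (carbs, price, carbs-per-dollar), values are invented.
catalog = [
    {"name": "Gel A", "carbsG": 40, "priceUSD": 3.95},
    {"name": "Gel B", "carbsG": 22, "priceUSD": 1.50},
    {"name": "Gel C", "carbsG": 25, "priceUSD": 2.60},
]
for gel in catalog:
    gel["carbsPerDollar"] = round(gel["carbsG"] / gel["priceUSD"], 2)

# Default sort: carbsPerDollar, descending (matches the documented defaults).
ranked = sorted(catalog, key=lambda g: g["carbsPerDollar"], reverse=True)
```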
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds value by specifying the catalog scope (brands and fields) and optional sorting behavior, though it doesn't mention rate limits or authentication needs.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, efficiently lists brands and data fields, and ends with the optional feature. It uses two concise sentences with zero wasted words, making it easy to parse quickly.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (2 optional parameters), rich annotations, and 100% schema coverage, the description is largely complete. It specifies the catalog content and sorting option. However, without an output schema, it could benefit from hinting at the return format (e.g., tabular data).

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters (sortKey and sortDirection) fully documented in the schema. The description adds minimal value beyond the schema by mentioning 'Optionally sort', but doesn't explain parameter interactions or default behaviors beyond what the schema already states.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Return the FuelCenter gel catalog' with specific brands listed and data fields enumerated (carbs, price, caffeine, weight, carbs-per-dollar). It distinguishes itself from siblings like 'carb-loading' and 'fueling-plan' by focusing on catalog comparison rather than planning or calculations.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through 'Optionally sort' and the data fields, suggesting this tool is for comparing gel products. However, it lacks explicit guidance on when to use this versus alternatives like 'fueling-plan' or 'carb-loading', nor does it specify prerequisites or exclusions.

heat-adjustment (Heat Adjustment): A
Read-only · Idempotent

Estimate how temperature and humidity will slow your race pace and change your fluid needs. Returns adjusted pace, slowdown percent, dew point, risk level, and fueling note.

Parameters (JSON Schema):
- tempF (required): Air temperature at race time in degrees Fahrenheit.
- humidity (required): Relative humidity as a percentage, 0-100.
- distanceMiles (required): Race distance in miles.
- paceSecondsPerMile (required): Your goal pace in seconds per mile (cool-weather pace).
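Dew point, one of the listed outputs, is conventionally derived from temperature and humidity with the Magnus approximation. A sketch of that step (the server's actual slowdown-percent and risk-level heuristics are not published here):

```python
import math

def dew_point_f(temp_f: float, humidity_pct: float) -> float:
    """Magnus approximation for dew point; inputs mirror tempF/humidity."""
    temp_c = (temp_f - 32) * 5 / 9
    a, b = 17.62, 243.12                 # Magnus coefficients
    gamma = math.log(humidity_pct / 100) + a * temp_c / (b + temp_c)
    dew_c = b * gamma / (a - gamma)
    return dew_c * 9 / 5 + 32

# 80F at 70% humidity lands near a 69F dew point, a notably risky race day
dp = dew_point_f(80.0, 70.0)
```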
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, non-destructive, and idempotent behavior. The description adds valuable context about what the tool returns (adjusted pace, slowdown percent, dew point, risk level, fueling note), which helps the agent understand output expectations beyond the annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that front-loads the core purpose and efficiently lists return values. Every word contributes to understanding without redundancy or unnecessary elaboration.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, 100% schema coverage, and rich annotations, the description is mostly complete. However, without an output schema, it could benefit from more detail on return value formats or units, though it does list key outputs.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are fully documented in the schema. The description does not add additional parameter meaning beyond what's in the schema, but it implicitly reinforces that inputs relate to race conditions and pace adjustment.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('estimate', 'returns') and resources ('race pace', 'fluid needs'), distinguishing it from siblings like 'pace-calculator' or 'hydration' by focusing on heat/humidity impact rather than general calculations or hydration planning.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context for race planning in hot/humid conditions, but does not explicitly state when to use this tool versus alternatives like 'pace-calculator' or 'hydration'. It provides clear purpose but lacks explicit comparison or exclusion guidance.

hydration (Hydration Calculator): A
Read-only · Idempotent

Estimate fluid and sodium needs for a race or long run based on body weight, duration, intensity, temperature, and humidity.

Parameters (JSON Schema):
- tempF (required): Temperature in Fahrenheit.
- humidity (required): Relative humidity, 0-100.
- intensity (required): Effort level.
- bodyWeightLbs (required): Body weight in pounds.
- durationMinutes (required): Run or race duration in minutes.
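As an illustration of how such an estimate might combine these inputs, consider the sketch below. The coefficients are invented for the example and are not this tool's formula:

```python
def hydration_estimate(body_weight_lbs: float, duration_min: int,
                       temp_f: float, intensity_factor: float = 1.0) -> dict:
    """Illustrative heuristic only: sweat rate rises with body mass,
    temperature above ~60F, and effort. All coefficients are invented."""
    base_l_per_hr = 0.5 + body_weight_lbs / 400      # rough mass-scaled baseline
    heat_bump = max(0.0, (temp_f - 60) * 0.01)       # +0.01 L/hr per degF over 60F
    sweat_l_per_hr = (base_l_per_hr + heat_bump) * intensity_factor
    fluid_l = sweat_l_per_hr * duration_min / 60
    sodium_mg = round(fluid_l * 700)                 # ~700 mg sodium per liter
    return {"fluidLiters": round(fluid_l, 2), "sodiumMg": sodium_mg}

est = hydration_estimate(160, 120, 75)  # 2-hour run at 75F
```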
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds context about what is estimated (fluid and sodium needs) but does not disclose additional behavioral traits like rate limits, error conditions, or output format. With annotations providing core behavioral info, this is adequate but not rich.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the purpose and lists key parameters without redundancy. Every word earns its place, making it easy to parse and understand quickly, with no wasted text.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (5 parameters, no output schema), annotations cover safety and idempotency, but the description lacks details on output format or how estimates are derived. It is complete enough for basic use but could benefit from more context on results or limitations, especially without an output schema.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with each parameter well-documented in the schema (e.g., 'Temperature in Fahrenheit' for tempF). The description lists the parameters (body weight, duration, etc.) but adds no meaning beyond what the schema provides, such as how they interact or typical ranges. Baseline 3 is appropriate given high schema coverage.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Estimate fluid and sodium needs') and the resource/context ('for a race or long run'), distinguishing it from sibling tools like 'carb-loading' or 'pace-calculator' that focus on different aspects of athletic preparation. It precisely defines the tool's scope without being vague or tautological.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'heat-adjustment' or 'fueling-plan', nor does it mention prerequisites or exclusions. It implies usage for hydration estimation but lacks explicit context for tool selection among siblings.

pace-calculator (Pace Calculator): A
Read-only · Idempotent

Given any two of distance, total time, or pace-per-mile, calculate the third. Returns pace per mile, per km, splits, and speed.

Parameters (JSON Schema):
- mode (required): Which field to solve for.
- totalSeconds (optional): Required for mode "pace" or "distance".
- distanceMiles (optional): Required for mode "pace" or "time".
- paceSecondsPerMile (optional): Required for mode "time" or "distance".
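The underlying identity is time = distance × pace, so any one quantity follows from the other two. A sketch mirroring the mode parameter above (the splits and per-km outputs are omitted):

```python
def solve_pace(mode, total_seconds=None, distance_miles=None,
               pace_seconds_per_mile=None):
    """time = distance * pace; given any two values, the third follows."""
    if mode == "pace":
        return total_seconds / distance_miles
    if mode == "time":
        return distance_miles * pace_seconds_per_mile
    if mode == "distance":
        return total_seconds / pace_seconds_per_mile
    raise ValueError(f"unknown mode: {mode}")

# a 3:30 marathon (12600 s over 26.219 mi) is about 481 s/mi, just over 8:00/mile
pace = solve_pace("pace", total_seconds=12600, distance_miles=26.219)
```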
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, non-destructive, and idempotent behavior, which the description does not contradict. The description adds valuable context by specifying the return values ('pace per mile, per km, splits, and speed'), which is not covered by annotations, enhancing transparency about output behavior.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence and efficiently adds return details in the second. Every sentence earns its place without redundancy, making it appropriately sized and well-structured.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, rich annotations, and full schema coverage, the description is mostly complete. It lacks an output schema, but the description partially compensates by listing return values. However, it could be more detailed about error handling or edge cases for a calculation tool.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters. The description adds no additional parameter semantics beyond implying the calculation logic, but it does not compensate for any gaps since there are none, meeting the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('calculate the third') and resources ('distance, total time, or pace-per-mile'), and it distinguishes itself from siblings by focusing on calculation rather than nutrition, hydration, or prediction tools like carb-loading or race-time-predictor.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly provides usage context by specifying 'Given any two of distance, total time, or pace-per-mile, calculate the third,' which guides when to use it. However, it does not explicitly state when not to use it or name alternatives among siblings, such as race-time-predictor for related but different calculations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

race-time-predictor (Race Time Predictor): A
Read-only · Idempotent

Predict your finish time at common distances (5K through 100 miles) from a recent race result using the Riegel formula.

Parameters (JSON Schema):
- exponent (optional): Riegel exponent. Default 1.06.
- knownTimeSeconds (required): Finish time for that race, in total seconds.
- knownDistanceMiles (required): Distance of your recent race, in miles.
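The Riegel formula itself is T2 = T1 * (D2 / D1) ** exponent, with 1.06 as the customary exponent. A sketch (the race distances used below are illustrative; the tool's supported distance list is not shown on this page):

```python
def riegel_predict(known_distance_miles: float, known_time_seconds: float,
                   target_distance_miles: float, exponent: float = 1.06) -> float:
    """Riegel formula: T2 = T1 * (D2 / D1) ** exponent, in seconds."""
    ratio = target_distance_miles / known_distance_miles
    return known_time_seconds * ratio ** exponent

# A 45:00 10K (6.214 mi) projects to roughly a 3:27 marathon (26.219 mi).
marathon_s = riegel_predict(6.214, 45 * 60, 26.219)
```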
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds useful context by specifying the prediction method (Riegel formula) and the range of distances (5K through 100 miles), which are not covered by annotations. No contradictions with annotations are present.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose, scope, and method without unnecessary words. It is front-loaded with the main action and includes all essential information.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, rich annotations (covering safety and idempotency), and full schema coverage, the description is mostly complete. It lacks output details (no output schema provided) and explicit usage guidelines, but it adequately explains the tool's function and context for an AI agent.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters (knownDistanceMiles, knownTimeSeconds, exponent). The description does not add any parameter-specific details beyond what the schema provides, such as examples or constraints, but it implies the purpose of these inputs for prediction.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Predict your finish time'), the resource ('at common distances (5K through 100 miles)'), and the method ('using the Riegel formula'). It distinguishes from siblings by focusing on race time prediction rather than nutrition, pacing, or environmental adjustments.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context ('from a recent race result') but does not explicitly state when to use this tool versus alternatives like pace-calculator or other siblings. No exclusions or clear alternatives are mentioned, leaving some ambiguity about tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
