fuelcenter
Server Details
Marathon fueling, pace, hydration, heat, carb-loading, and gel-comparison calculators.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score of 4/5 across all 7 tools (7 of 7 scored).
Each tool has a clearly distinct purpose with no overlap: carb-loading focuses on pre-race nutrition, fueling-plan on in-race fueling, gel-comparison on product data, heat-adjustment on temperature effects, hydration on fluid needs, pace-calculator on pace/time conversions, and race-time-predictor on performance projections. The descriptions make these distinctions explicit, eliminating any ambiguity.
The tools follow a consistent kebab-case naming pattern with descriptive compound names (e.g., 'carb-loading', 'fueling-plan', 'gel-comparison'). All seven names use the same hyphenated style, so the naming is predictable and readable with no inconsistencies across the set.
With 7 tools, the server is well-scoped for its endurance sports fueling and planning domain. Each tool serves a specific, non-redundant function, covering key aspects like nutrition, hydration, pacing, and environmental adjustments, making the count appropriate and efficient.
The tool set provides comprehensive coverage for endurance sports planning, including pre-race preparation (carb-loading, race-time-predictor), in-race execution (fueling-plan, hydration, heat-adjustment), and supporting utilities (gel-comparison, pace-calculator). There are no obvious gaps, and agents can handle full workflows from planning to adjustment.
Available Tools
7 tools

carb-loading (Carb Loading Calculator): A, Read-only, Idempotent
Calculate pre-race carb loading targets by race type, body weight, finish time, and experience level. Returns daily carb target, protocol length, total carbs, and pre-race meal size.
| Name | Required | Description | Default |
|---|---|---|---|
| raceType | Yes | Race distance category. | |
| experience | Yes | Runner experience level. | |
| hasGIIssues | No | Whether the runner has a sensitive stomach. Default false. | |
| bodyWeightLbs | Yes | Body weight in pounds. | |
| finishTimeHours | Yes | Expected finish time, hours portion. | |
| finishTimeMinutes | Yes | Expected finish time, minutes portion. | |
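The server does not publish its formula, but the shape of the calculation can be sketched from the stated inputs and outputs. The 8-12 g/kg/day targets, the protocol-length thresholds, and the GI adjustment below are illustrative assumptions drawn from common sports-nutrition guidance, not FuelCenter's actual numbers:

```python
# Illustrative sketch of a carb-loading calculation; coefficients are
# assumptions, not the server's actual formula.

LBS_PER_KG = 2.20462

def carb_loading_plan(body_weight_lbs: float, finish_time_hours: int,
                      finish_time_minutes: int, has_gi_issues: bool = False) -> dict:
    weight_kg = body_weight_lbs / LBS_PER_KG
    finish_hours = finish_time_hours + finish_time_minutes / 60
    # Longer races justify a higher daily target and a longer protocol.
    grams_per_kg = 8 if finish_hours < 3 else 10
    if has_gi_issues:
        grams_per_kg -= 1  # ease off for sensitive stomachs
    protocol_days = 1 if finish_hours < 2 else 2 if finish_hours < 4 else 3
    daily_target = round(weight_kg * grams_per_kg)
    return {
        "dailyCarbTargetG": daily_target,
        "protocolDays": protocol_days,
        "totalCarbsG": daily_target * protocol_days,
    }
```

For a 154 lb runner targeting a 3:30 finish, this sketch yields a two-day protocol at roughly 700 g of carbs per day.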
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare this as read-only, non-destructive, and idempotent. The description adds valuable context about what the tool returns (daily carb target, protocol length, total carbs, pre-race meal size) which isn't covered by annotations. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. The first sentence clearly states purpose and inputs, the second specifies outputs. Every word earns its place with no redundancy or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a calculation tool with good annotations and complete schema coverage, the description provides adequate context. The lack of output schema is partially compensated by the description specifying return values. However, more detail about calculation methodology or assumptions would enhance completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all parameters thoroughly. The description mentions the parameters generally but doesn't add specific meaning beyond what's in the schema. Baseline 3 is appropriate when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Calculate pre-race carb loading targets') and the resources involved (race type, body weight, finish time, experience level). It distinguishes from sibling tools by focusing specifically on carb loading calculation rather than general fueling, hydration, pace, or race prediction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context ('pre-race carb loading targets') but doesn't explicitly state when to use this tool versus alternatives like 'fueling-plan' or 'hydration'. No specific exclusions or prerequisites are mentioned, leaving some ambiguity about tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fueling-plan (Fueling Plan): A, Read-only, Idempotent
Build a minute-by-minute race or long-run fueling plan. Returns target carbs/hour, gel count, gel timing schedule, and total carbs. This is the core FuelCenter tool.
| Name | Required | Description | Default |
|---|---|---|---|
| hours | Yes | Race or run duration, hours portion. | |
| buffer | No | Minutes of buffer at start and end where no gel is taken. Default 5. | |
| gelKey | Yes | Gel product key. See /api/tools/gel-comparison for full catalog. | |
| minutes | Yes | Race or run duration, minutes portion. | |
| drinkMix | No | Whether the runner uses a carbohydrate drink mix. Default false. | |
| intensity | Yes | Effort level. | |
| targetCPH | No | Target carbs per hour. If omitted, default is derived from guideline. | |
| experience | No | Runner experience with fueling. Default "standard". | |
| drinkCarbsTotal | No | Total grams of carbs from drink mix across the run. Default 0. | |
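How a gel timing schedule can fall out of these parameters is worth sketching. The 60 g/hour default target and the 25 g-per-gel assumption below are placeholders (the schema says the real default is derived from a guideline and depends on the chosen `gelKey`); only the buffer behavior follows the schema:

```python
# Hypothetical sketch of deriving a gel schedule from duration, target
# carbs/hour, and buffer. Carb figures are assumptions, not FuelCenter's.

def gel_schedule(hours: int, minutes: int, target_cph: float = 60,
                 gel_carbs_g: float = 25, drink_carbs_total: float = 0,
                 buffer: int = 5) -> dict:
    duration_min = hours * 60 + minutes
    total_carbs = target_cph * duration_min / 60
    # Carbs supplied by drink mix reduce what the gels must cover.
    from_gels = max(total_carbs - drink_carbs_total, 0)
    gel_count = round(from_gels / gel_carbs_g)
    window = duration_min - 2 * buffer  # no gels inside the start/end buffers
    if gel_count > 0:
        interval = window / gel_count
        timings = [round(buffer + interval * (i + 1)) for i in range(gel_count)]
    else:
        timings = []
    return {"gelCount": gel_count, "gelTimingsMin": timings,
            "totalCarbsG": round(total_carbs)}
```

A 3:00 run at 60 g/hour works out to 180 g total and seven gels spaced roughly every 24 minutes.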
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds minimal behavioral context beyond this, noting it returns specific data types but not detailing rate limits, authentication needs, or computational intensity. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence and efficiently lists outputs in the second, with no wasted words. Every sentence earns its place by clearly conveying the tool's function and scope, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (9 parameters, no output schema) and rich annotations, the description is mostly complete. It clearly states what the tool does and returns, but lacks details on output format or error handling. However, with annotations covering key behavioral traits, it provides sufficient context for basic use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all 9 parameters. The description adds no additional parameter semantics beyond implying the tool uses inputs like duration and intensity to generate outputs. With high schema coverage, the baseline score of 3 is appropriate as the description doesn't enhance parameter understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Build a minute-by-minute race or long-run fueling plan') and resources (target carbs/hour, gel count, gel timing schedule, total carbs). It distinguishes itself from siblings by being 'the core FuelCenter tool' for creating comprehensive fueling plans, unlike comparison or adjustment tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for race or long-run fueling planning but provides no explicit guidance on when to use this tool versus alternatives like 'gel-comparison' or 'hydration'. It mentions being the 'core' tool, which suggests primacy, but lacks specific when/when-not instructions or named alternatives for different scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
gel-comparison (Gel Comparison): A, Read-only, Idempotent
Return the FuelCenter gel catalog (SiS Beta Fuel, Maurten, Precision, GU, Tailwind, Spring, Carbs Fuel, neversecond, Huma) with carbs, price, caffeine, weight, and carbs-per-dollar. Optionally sort.
| Name | Required | Description | Default |
|---|---|---|---|
| sortKey | No | Column to sort by. Default "carbsPerDollar". | |
| sortDirection | No | Sort direction. Default "desc". | |
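The carbs-per-dollar sort the defaults describe is straightforward to illustrate. The product entries below are placeholders, not real figures from the FuelCenter catalog:

```python
# Sketch of the catalog sort implied by the schema defaults
# (sortKey="carbsPerDollar", sortDirection="desc"). Data is illustrative.

gels = [
    {"name": "gel-a", "carbsG": 40, "priceUsd": 2.00},
    {"name": "gel-b", "carbsG": 22, "priceUsd": 1.40},
    {"name": "gel-c", "carbsG": 25, "priceUsd": 2.50},
]

def sort_catalog(items, sort_key="carbsPerDollar", direction="desc"):
    # Derive the value metric, then sort by the requested column.
    enriched = [{**g, "carbsPerDollar": round(g["carbsG"] / g["priceUsd"], 2)}
                for g in items]
    return sorted(enriched, key=lambda g: g[sort_key],
                  reverse=(direction == "desc"))
```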
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds value by specifying the catalog scope (brands and fields) and optional sorting behavior, though it doesn't mention rate limits or authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, efficiently lists brands and data fields, and ends with the optional feature. It uses two concise sentences with zero wasted words, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (2 optional parameters), rich annotations, and 100% schema coverage, the description is largely complete. It specifies the catalog content and sorting option. However, without an output schema, it could benefit from hinting at the return format (e.g., tabular data).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters (sortKey and sortDirection) fully documented in the schema. The description adds minimal value beyond the schema by mentioning 'Optionally sort', but doesn't explain parameter interactions or default behaviors beyond what the schema already states.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Return the FuelCenter gel catalog' with specific brands listed and data fields enumerated (carbs, price, caffeine, weight, carbs-per-dollar). It distinguishes itself from siblings like 'carb-loading' and 'fueling-plan' by focusing on catalog comparison rather than planning or calculations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through 'Optionally sort' and the data fields, suggesting this tool is for comparing gel products. However, it lacks explicit guidance on when to use this versus alternatives like 'fueling-plan' or 'carb-loading', nor does it specify prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
heat-adjustment (Heat Adjustment): A, Read-only, Idempotent
Estimate how temperature and humidity will slow your race pace and change your fluid needs. Returns adjusted pace, slowdown percent, dew point, risk level, and fueling note.
| Name | Required | Description | Default |
|---|---|---|---|
| tempF | Yes | Air temperature at race time in degrees Fahrenheit. | |
| humidity | Yes | Relative humidity as a percentage, 0-100. | |
| distanceMiles | Yes | Race distance in miles. | |
| paceSecondsPerMile | Yes | Your goal pace in seconds per mile (cool-weather pace). | |
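The dew point output can be computed with the standard Magnus approximation; the slowdown heuristic below (scaling off the temperature-plus-dew-point sum, a common runners' rule of thumb) is an assumption, not FuelCenter's actual model:

```python
import math

# Dew point via the Magnus approximation (a=17.62, b=243.12, Celsius form).
# The slowdown heuristic is a rough placeholder, not the server's model.

def heat_adjustment(temp_f: float, humidity: float,
                    pace_s_per_mile: float) -> dict:
    temp_c = (temp_f - 32) * 5 / 9
    a, b = 17.62, 243.12
    gamma = a * temp_c / (b + temp_c) + math.log(humidity / 100)
    dew_point_c = b * gamma / (a - gamma)
    dew_point_f = dew_point_c * 9 / 5 + 32
    # Rule of thumb: slow down as temp + dew point (degF) climbs past ~100.
    combined = temp_f + dew_point_f
    slowdown_pct = max(combined - 100, 0) * 0.15
    adjusted = pace_s_per_mile * (1 + slowdown_pct / 100)
    return {"dewPointF": round(dew_point_f, 1),
            "slowdownPct": round(slowdown_pct, 1),
            "adjustedPaceSec": round(adjusted)}
```

At 86 °F and 70% humidity the dew point lands near 75 °F, an oppressive combination that this heuristic translates into a roughly 9% slowdown.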
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, and idempotent behavior. The description adds valuable context about what the tool returns (adjusted pace, slowdown percent, dew point, risk level, fueling note), which helps the agent understand output expectations beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that front-loads the core purpose and efficiently lists return values. Every word contributes to understanding without redundancy or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, 100% schema coverage, and rich annotations, the description is mostly complete. However, without an output schema, it could benefit from more detail on return value formats or units, though it does list key outputs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are fully documented in the schema. The description does not add additional parameter meaning beyond what's in the schema, but it implicitly reinforces that inputs relate to race conditions and pace adjustment.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('estimate', 'returns') and resources ('race pace', 'fluid needs'), distinguishing it from siblings like 'pace-calculator' or 'hydration' by focusing on heat/humidity impact rather than general calculations or hydration planning.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context for race planning in hot/humid conditions, but does not explicitly state when to use this tool versus alternatives like 'pace-calculator' or 'hydration'. It provides clear purpose but lacks explicit comparison or exclusion guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hydration (Hydration Calculator): A, Read-only, Idempotent
Estimate fluid and sodium needs for a race or long run based on body weight, duration, intensity, temperature, and humidity.
| Name | Required | Description | Default |
|---|---|---|---|
| tempF | Yes | Temperature in Fahrenheit. | |
| humidity | Yes | Relative humidity, 0-100. | |
| intensity | Yes | Effort level. | |
| bodyWeightLbs | Yes | Body weight in pounds. | |
| durationMinutes | Yes | Run or race duration in minutes. | |
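A rough model of how these five inputs could combine is sketched below. Every coefficient (baseline sweat rate, heat and humidity scaling, effort multipliers, sodium concentration) is an illustrative assumption, not the server's formula:

```python
# Illustrative fluid/sodium estimate; all coefficients are assumptions.

def hydration_needs(body_weight_lbs: float, duration_minutes: float,
                    temp_f: float, humidity: float,
                    intensity: str = "moderate") -> dict:
    # Baseline sweat rate scales with body mass; heat and humidity add on top.
    base_l_per_hr = 0.5 + (body_weight_lbs - 150) * 0.002
    heat_factor = 1 + max(temp_f - 60, 0) * 0.01 + max(humidity - 50, 0) * 0.003
    effort = {"easy": 0.85, "moderate": 1.0, "hard": 1.15}[intensity]
    fluid_l = base_l_per_hr * heat_factor * effort * duration_minutes / 60
    sodium_mg = fluid_l * 600  # assume ~600 mg sodium per litre replaced
    return {"fluidL": round(fluid_l, 2), "sodiumMg": round(sodium_mg)}
```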
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds context about what is estimated (fluid and sodium needs) but does not disclose additional behavioral traits like rate limits, error conditions, or output format. With annotations providing core behavioral info, this is adequate but not rich.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the purpose and lists key parameters without redundancy. Every word earns its place, making it easy to parse and understand quickly, with no wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (5 parameters, no output schema), annotations cover safety and idempotency, but the description lacks details on output format or how estimates are derived. It is complete enough for basic use but could benefit from more context on results or limitations, especially without an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with each parameter well-documented in the schema (e.g., 'Temperature in Fahrenheit' for tempF). The description lists the parameters (body weight, duration, etc.) but adds no meaning beyond what the schema provides, such as how they interact or typical ranges. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Estimate fluid and sodium needs') and the resource/context ('for a race or long run'), distinguishing it from sibling tools like 'carb-loading' or 'pace-calculator' that focus on different aspects of athletic preparation. It precisely defines the tool's scope without being vague or tautological.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'heat-adjustment' or 'fueling-plan', nor does it mention prerequisites or exclusions. It implies usage for hydration estimation but lacks explicit context for tool selection among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pace-calculator (Pace Calculator): A, Read-only, Idempotent
Given any two of distance, total time, or pace-per-mile, calculate the third. Returns pace per mile, per km, splits, and speed.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | Yes | Which field to solve for. | |
| totalSeconds | No | Required for mode "pace" or "distance". | |
| distanceMiles | No | Required for mode "pace" or "time". | |
| paceSecondsPerMile | No | Required for mode "time" or "distance". | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, and idempotent behavior, which the description does not contradict. The description adds valuable context by specifying the return values ('pace per mile, per km, splits, and speed'), which is not covered by annotations, enhancing transparency about output behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence and efficiently adds return details in the second. Every sentence earns its place without redundancy, making it appropriately sized and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations, and full schema coverage, the description is mostly complete. It lacks an output schema, but the description partially compensates by listing return values. However, it could be more detailed about error handling or edge cases for a calculation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters. The description adds no additional parameter semantics beyond implying the calculation logic, but it does not compensate for any gaps since there are none, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('calculate the third') and resources ('distance, total time, or pace-per-mile'), and it distinguishes itself from siblings by focusing on calculation rather than nutrition, hydration, or prediction tools like carb-loading or race-time-predictor.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly provides usage context by specifying 'Given any two of distance, total time, or pace-per-mile, calculate the third,' which guides when to use it. However, it does not explicitly state when not to use it or name alternatives among siblings, such as race-time-predictor for related but different calculations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
race-time-predictor (Race Time Predictor): A, Read-only, Idempotent
Predict your finish time at common distances (5K through 100 miles) from a recent race result using the Riegel formula.
| Name | Required | Description | Default |
|---|---|---|---|
| exponent | No | Riegel exponent. Default 1.06. | |
| knownTimeSeconds | Yes | Finish time for that race, in total seconds. | |
| knownDistanceMiles | Yes | Distance of your recent race, in miles. | |
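The Riegel formula the description cites is T2 = T1 × (D2 / D1)^exponent, with 1.06 as the customary exponent (matching the schema default). A direct translation:

```python
# Riegel's endurance prediction formula: T2 = T1 * (D2 / D1) ** exponent.
# target_distance_miles is a hypothetical extra input for illustration;
# the actual tool iterates over its fixed list of common distances.

def predict(known_time_s: float, known_distance_miles: float,
            target_distance_miles: float, exponent: float = 1.06) -> float:
    return known_time_s * (target_distance_miles / known_distance_miles) ** exponent
```

Projecting a 20:00 5K (3.107 mi) to the marathon (26.219 mi) gives roughly 11,500 s, about 3:12.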
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds useful context by specifying the prediction method (Riegel formula) and the range of distances (5K through 100 miles), which are not covered by annotations. No contradictions with annotations are present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose, scope, and method without unnecessary words. It is front-loaded with the main action and includes all essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations (covering safety and idempotency), and full schema coverage, the description is mostly complete. It lacks output details (no output schema provided) and explicit usage guidelines, but it adequately explains the tool's function and context for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters (knownDistanceMiles, knownTimeSeconds, exponent). The description does not add any parameter-specific details beyond what the schema provides, such as examples or constraints, but it implies the purpose of these inputs for prediction.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Predict your finish time'), the resource ('at common distances (5K through 100 miles)'), and the method ('using the Riegel formula'). It distinguishes from siblings by focusing on race time prediction rather than nutrition, pacing, or environmental adjustments.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context ('from a recent race result') but does not explicitly state when to use this tool versus alternatives like pace-calculator or other siblings. No exclusions or clear alternatives are mentioned, leaving some ambiguity about tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.