Server Details
Etapa is an AI cycling coach for beginners. The MCP server exposes four tools: `ask_cycling_coach` (open-ended Q&A), `generate_training_plan` (2-4 week plans powered by the Etapa API), `review_cycling_plan` (critiques of existing plans), and `cycling_beginner_guide` (advice on bikes, gear, safety). Free, no account required.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 4 of 4 tools scored.
Each tool has a clearly distinct purpose with no overlap: ask_cycling_coach handles open-ended Q&A, cycling_beginner_guide provides static beginner content, generate_training_plan creates new plans, and review_cycling_plan critiques existing plans. The descriptions explicitly differentiate their use cases, eliminating any ambiguity.
All tool names follow a consistent verb_noun pattern (e.g., ask_cycling_coach, generate_training_plan) with clear, descriptive terms. There are no deviations in style or convention, making the set predictable and easy to understand at a glance.
With 4 tools, this server is well-scoped for its purpose of cycling coaching and training guidance. Each tool serves a unique function in the domain, from advice to plan generation and review, without being too sparse or overwhelming, making the count ideal for the scope.
The tool set covers key aspects of cycling coaching: Q&A, beginner guidance, plan creation, and plan review. Minor gaps exist, such as no direct tools for tracking progress or adjusting plans over time, but agents can work around this using the existing tools for core workflows.
Available Tools
4 tools

ask_cycling_coach: Ask the Etapa cycling coach
Ask the Etapa cycling coach any question about cycling, training, plan adjustments, recovery, nutrition, gear, or technique. Answers are in plain English — no jargon, beginner-friendly, and grounded in established training science. Use this for open-ended questions, plan adaptations ("I missed a ride, what now?"), or when the rider wants an opinion. Powered by the Etapa API.
| Name | Required | Description | Default |
|---|---|---|---|
| context | No | Optional background about the rider (fitness level, goal, schedule, recent riding). Helps the coach tailor the answer. | |
| planText | No | Optional — the rider's current training plan, pasted as text. Use this if the question is about a specific plan. | |
| question | Yes | The rider's question. Can be about anything cycling-related. | |
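As a sketch of how an agent would invoke this tool, the `tools/call` request below uses the standard JSON-RPC envelope from the MCP specification; the argument values are illustrative, not taken from the server:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_cycling_coach",
    "arguments": {
      "question": "I missed two rides this week. Should I repeat the week or move on?",
      "context": "Beginner, riding 3 days/week, targeting a 50km sportive in 8 weeks"
    }
  }
}
```

Only `question` is required; `context` and `planText` can be omitted.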
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: answers are in plain English, beginner-friendly, grounded in training science, and powered by the Etapa API. However, it doesn't mention rate limits, authentication needs, or response format details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by usage guidelines and behavioral details. Every sentence adds value without redundancy, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description does a good job covering purpose, usage, and behavioral context. However, it could better explain the response format or limitations, as the output is unspecified. It's mostly complete for a conversational tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters. The description adds no parameter semantics beyond the schema, such as examples of how to phrase questions or how to use context effectively. A baseline score of 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: ask a cycling coach questions about cycling topics, with specific examples like training, nutrition, and plan adjustments. It distinguishes from siblings by emphasizing open-ended questions and opinion-seeking rather than generating plans or guides.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: for open-ended questions, plan adaptations, or when the rider wants an opinion. It distinguishes from siblings by not being for generating plans or guides, providing clear alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cycling_beginner_guide: Cycling beginner guide
Get generic beginner-friendly cycling guidance on topics like choosing your first bike, essential gear, your first ride, nutrition, safety on the road, bike fit, and building a habit. Call without a topic to see the full index. Content is curated — no API call is made.
| Name | Required | Description | Default |
|---|---|---|---|
| topic | No | The topic slug. Omit to see the full index of available topics. Valid topics: getting_started, first_bike, essential_gear, first_ride, nutrition_and_hydration, safety, building_a_habit, bike_fit, common_mistakes. | |
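A minimal `tools/call` sketch using one of the enumerated slugs (standard MCP JSON-RPC envelope; values illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "cycling_beginner_guide",
    "arguments": { "topic": "first_bike" }
  }
}
```

Omitting `topic` (or sending an empty `arguments` object) returns the full index of available topics instead.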
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it's a read-only operation (provides 'guidance'), the content is 'curated' (not dynamically generated), and 'no API call is made' (suggesting local/static content). It doesn't mention rate limits, authentication needs, or destructive actions, which is appropriate for this type of tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states the purpose and scope, the second provides important behavioral context. Every phrase adds value with zero wasted words, and key information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only tool with one optional parameter and 100% schema coverage, the description is reasonably complete. It explains what the tool does, how to use it (with/without topic), and key behavioral characteristics. The main gap is lack of output format information (no output schema exists), but for this type of guidance tool, the description provides adequate context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing a comprehensive enum list and description. The description adds minimal value beyond the schema by mentioning 'Call without a topic to see the full index', which is already implied by the schema's 'Omit to see the full index' text. No additional parameter semantics are provided beyond what the schema already documents.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get generic beginner-friendly cycling guidance on topics like...' It specifies the action (get guidance) and resource (cycling topics). However, it doesn't explicitly differentiate from the sibling tool 'generate_training_plan' - both could be related to cycling advice but serve different functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage context: 'Call without a topic to see the full index' and mentions the tool provides 'curated' content with 'no API call made.' However, it doesn't explicitly state when to use this tool versus the sibling 'generate_training_plan' tool, nor does it provide clear exclusions or alternatives for specific scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_training_plan: Generate a cycling training plan
Generate a 2-4 week cycling training plan using the Etapa API (getetapa.com). The plan is tailored to the rider's goal, fitness level, and available days. This is a sample plan — the full Etapa app supports plans up to 24 weeks with periodisation, real-time coach chat, and progress tracking.
| Name | Required | Description | Default |
|---|---|---|---|
| notes | No | Any extra context — injuries, preferences, schedule constraints. | |
| weeks | No | Length of the sample plan in weeks. Capped at 4 — use the Etapa app for longer plans. | |
| goalType | No | What the rider wants to achieve. Examples: "complete a 50km sportive", "get fitter on two wheels", "ride to work twice a week", "first 100 miles". Free text. | |
| daysPerWeek | No | Days per week the rider can train. Defaults to 3. | |
| fitnessLevel | No | Rider fitness level. Defaults to beginner. | |
| indoorTrainer | No | Whether the rider has an indoor trainer. | |
| targetDistanceKm | No | Target distance in km if the goal is distance-based (e.g. an event). | |
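All seven parameters are optional, so a `tools/call` request can range from an empty `arguments` object (falling back to the documented defaults of 3 days per week and beginner fitness) to a fully specified one. A sketch with illustrative values, using the standard MCP JSON-RPC envelope:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "generate_training_plan",
    "arguments": {
      "weeks": 4,
      "goalType": "complete a 50km sportive",
      "daysPerWeek": 3,
      "fitnessLevel": "beginner",
      "indoorTrainer": true,
      "targetDistanceKm": 50
    }
  }
}
```

Note that `weeks` is capped at 4; the schema does not publish the accepted `fitnessLevel` values, so the string above is an assumption.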
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses that this generates a 'sample plan' and mentions the full app's extended features, but lacks details on behavioral traits like rate limits, authentication needs, or what the output looks like. It doesn't contradict annotations (none provided).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two sentences: the first states the purpose and key parameters, the second adds context about sample vs. full plans. It's front-loaded with essential information and avoids unnecessary details, though the second sentence could be slightly more concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 7 parameters with full schema coverage but no annotations or output schema, the description is adequate but has gaps. It covers the tool's purpose and sample nature, but doesn't explain what the generated plan includes (e.g., structure, format) or address potential limitations, which is important for a generative tool with no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 7 parameters thoroughly. The description adds minimal value beyond the schema by implying parameters like goal, fitness level, and days are used for tailoring, but doesn't provide additional semantic context or examples not in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates a cycling training plan using the Etapa API, specifying it's tailored to rider attributes like goal, fitness level, and available days. It distinguishes from the sibling 'cycling_beginner_guide' by focusing on plan generation rather than guidance, though it doesn't explicitly contrast them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: for generating 2-4 week sample plans based on rider inputs. It mentions the Etapa app as an alternative for longer plans, but doesn't specify when not to use it or compare directly with the sibling tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
review_cycling_plan: Review a cycling training plan
Give the Etapa cycling coach any training plan (from another app, a book, a YouTube video, a coach, or anywhere else) and get an honest critique in four sections: what's working, what's missing or risky, what to change, and a bottom-line verdict. Use this when the rider wants a second opinion on a plan they already have. Powered by the Etapa API.
| Name | Required | Description | Default |
|---|---|---|---|
| goal | No | What the rider is training for. E.g. "first 100 km sportive", "commute twice a week", "get fitter". | |
| plan | Yes | The training plan as text — sessions, weeks, distances, whatever the rider has. Paste it in as-is. | |
| fitnessLevel | No | Rider's current fitness level, if known. | |
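A `tools/call` sketch for this tool, with an abbreviated plan pasted as free text (MCP JSON-RPC envelope; values illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "review_cycling_plan",
    "arguments": {
      "plan": "Week 1: Tue 45min easy spin, Thu 4x4min intervals, Sat 90min endurance ride",
      "goal": "first 100 km sportive",
      "fitnessLevel": "beginner"
    }
  }
}
```

Only `plan` is required; `goal` and `fitnessLevel` help the coach calibrate the critique.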
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses the tool's behavior: it outputs a critique in four specific sections and is 'powered by the Etapa API,' which hints at external processing. However, it lacks details on rate limits, authentication needs, error handling, or response format, leaving gaps for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with the core purpose, followed by usage context and technical details. Every sentence earns its place: the first defines the tool, the second specifies when to use, and the third notes the API source. No wasted words, appropriately sized for the complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is fairly complete for a tool with 3 parameters and 100% schema coverage. It covers purpose, usage, and high-level behavior, but lacks details on output format (e.g., structure of the critique sections) and operational constraints like rate limits, which would enhance completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters well. The description adds minimal value beyond the schema by mentioning 'training plan as text' and examples of sources, but doesn't provide additional syntax or format details. A baseline score of 3 is appropriate, as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to provide a structured critique of a cycling training plan in four specific sections (what's working, missing/risky, what to change, verdict). It distinguishes from siblings by focusing on reviewing existing plans rather than generating new ones (generate_training_plan), asking general questions (ask_cycling_coach), or providing beginner guidance (cycling_beginner_guide).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'when the rider wants a second opinion on a plan they already have.' It also implies when not to use (e.g., for creating new plans, which is covered by generate_training_plan) and lists alternative sources of plans (other apps, books, etc.), providing clear context for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!