
Etapa — Cycling Coach MCP

Server Details

AI cycling coach — training plans and beginner guidance via the Etapa API.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: rhoneybul/etapa
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.1/5 across 4 of 4 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: ask_cycling_coach handles open-ended Q&A, cycling_beginner_guide provides static educational content, generate_training_plan creates new plans, and review_cycling_plan critiques existing plans. There is no overlap in functionality, making tool selection straightforward for an agent.

Naming Consistency: 4/5

The tools follow a consistent snake_case naming convention with descriptive verb_noun patterns (e.g., ask_cycling_coach, generate_training_plan). However, cycling_beginner_guide uses an adjective_noun_noun structure, which is a minor deviation from the otherwise uniform style.

Tool Count: 5/5

With 4 tools, the server is well-scoped for its purpose as a cycling coach assistant. Each tool addresses a core aspect of the domain—guidance, plan generation, plan review, and Q&A—without being overly sparse or bloated, making the set manageable and purposeful.

Completeness: 4/5

The toolset covers key cycling coaching needs: education (beginner_guide), plan creation (generate_training_plan), plan evaluation (review_cycling_plan), and adaptive support (ask_cycling_coach). A minor gap is the lack of tools for tracking progress or modifying existing plans, but agents can work around this using the Q&A tool for adjustments.

Available Tools

4 tools
ask_cycling_coach: Ask the Etapa cycling coach (grade A)

Ask the Etapa cycling coach any question about cycling, training, plan adjustments, recovery, nutrition, gear, or technique. Answers are in plain English — no jargon, beginner-friendly, and grounded in established training science. Use this for open-ended questions, plan adaptations ("I missed a ride, what now?"), or when the rider wants an opinion. Powered by the Etapa API.

Parameters (JSON Schema)

- context (optional): Background about the rider (fitness level, goal, schedule, recent riding). Helps the coach tailor the answer.
- planText (optional): The rider's current training plan, pasted as text. Use this if the question is about a specific plan.
- question (required): The rider's question. Can be about anything cycling-related.
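As a sketch, an MCP client invokes this tool with a `tools/call` request. The payload below follows the JSON-RPC shape used by the MCP protocol; the request id and the argument values are illustrative, not taken from the listing.

```python
import json

# Illustrative MCP "tools/call" request for ask_cycling_coach.
# Only "question" is required; "context" and "planText" are optional.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_cycling_coach",
        "arguments": {
            "question": "I missed a long ride this week. What should I do?",
            "context": "Beginner, 3 rides/week, training for a 50 km sportive",
        },
    },
}

print(json.dumps(request, indent=2))
```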
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses key behavioral traits: answers are in plain English (no jargon), beginner-friendly, grounded in training science, and powered by the Etapa API. However, it doesn't mention rate limits, authentication needs, or response format details, leaving some gaps for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured and front-loaded: first sentence states the core purpose, second adds key behavioral traits, third provides usage guidelines. Every sentence adds value with zero waste, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description does well by covering purpose, usage, and behavioral context. However, it doesn't describe the response format or potential limitations (e.g., answer length, confidence levels), which would be helpful for a Q&A tool with no structured output documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds no additional parameter semantics beyond what's in the schema, maintaining the baseline score of 3 for adequate but not enhanced coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: ask questions about cycling topics (training, nutrition, gear, etc.) to get plain English, beginner-friendly answers grounded in training science. It specifies the resource (Etapa cycling coach) and distinguishes from siblings by focusing on open-ended Q&A rather than plan generation or review.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: for open-ended questions, plan adaptations (e.g., 'I missed a ride'), or when the rider wants an opinion. It distinguishes from siblings by positioning this as the tool for Q&A rather than plan creation (generate_training_plan) or review (review_cycling_plan).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cycling_beginner_guide: Cycling beginner guide (grade A)

Get generic beginner-friendly cycling guidance on topics like choosing your first bike, essential gear, your first ride, nutrition, safety on the road, bike fit, and building a habit. Call without a topic to see the full index. Content is curated — no API call is made.

Parameters (JSON Schema)

- topic (optional): The topic slug. Omit to see the full index of available topics. Valid topics: getting_started, first_bike, essential_gear, first_ride, nutrition_and_hydration, safety, building_a_habit, bike_fit, common_mistakes.
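Since the topic slugs form a closed set, a client can validate them before calling. The helper below is hypothetical (not part of the server); it builds the tool-call arguments and rejects unknown slugs early, using the enum copied from the schema above.

```python
# Valid topic slugs copied from the cycling_beginner_guide schema.
VALID_TOPICS = {
    "getting_started", "first_bike", "essential_gear", "first_ride",
    "nutrition_and_hydration", "safety", "building_a_habit", "bike_fit",
    "common_mistakes",
}

def beginner_guide_args(topic=None):
    """Return tool-call arguments; omit topic to request the full index."""
    if topic is None:
        return {}  # no topic -> server returns the index of topics
    if topic not in VALID_TOPICS:
        raise ValueError(f"unknown topic slug: {topic!r}")
    return {"topic": topic}

print(beginner_guide_args())            # {} -> full index
print(beginner_guide_args("bike_fit"))  # {'topic': 'bike_fit'}
```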
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and adds valuable behavioral context: it discloses that 'Content is curated — no API call is made', indicating this is a static resource without external data fetching. This clarifies the tool's operational nature beyond basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose and key topics, then adding operational details. Every sentence earns its place by providing essential information without redundancy, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one optional parameter, no output schema), the description is complete enough for a guidance tool. It covers purpose, topics, and behavioral traits like curated content. However, it could slightly improve by hinting at output format or more explicit sibling differentiation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'topic' fully documented in the schema including enum values and behavior when omitted. The description adds minimal value beyond the schema by mentioning 'Call without a topic to see the full index', which is already implied in the schema. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides 'generic beginner-friendly cycling guidance' on specific topics, with a verb ('Get') and resource ('guidance'). It distinguishes from sibling tools like 'ask_cycling_coach' by emphasizing curated content rather than interactive coaching, though it could be more explicit about the distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by mentioning 'Call without a topic to see the full index' and listing topics, but it doesn't explicitly state when to use this tool versus alternatives like 'ask_cycling_coach' or 'generate_training_plan'. It provides some context but lacks clear exclusions or comparisons.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_training_plan: Generate a cycling training plan (grade A)

Generate a 2-4 week cycling training plan using the Etapa API (getetapa.com). The plan is tailored to the rider's goal, fitness level, and available days. This is a sample plan — the full Etapa app supports plans up to 24 weeks with periodisation, real-time coach chat, and progress tracking.

Parameters (JSON Schema)

- notes (optional): Any extra context — injuries, preferences, schedule constraints.
- weeks (optional): Length of the sample plan in weeks. Capped at 4 — use the Etapa app for longer plans.
- goalType (optional): What the rider wants to achieve. Examples: "complete a 50km sportive", "get fitter on two wheels", "ride to work twice a week", "first 100 miles". Free text.
- daysPerWeek (optional): Days per week the rider can train. Defaults to 3.
- fitnessLevel (optional): Rider fitness level. Defaults to beginner.
- indoorTrainer (optional): Whether the rider has an indoor trainer.
- targetDistanceKm (optional): Target distance in km if the goal is distance-based (e.g. an event).
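A hypothetical client-side helper can apply the documented defaults (daysPerWeek of 3, fitnessLevel of beginner) and keep weeks inside the 2-4 week sample-plan range before calling. The lower bound of 2 is inferred from the "2-4 week" wording in the description; the function and its name are illustrative, not part of the Etapa API.

```python
def training_plan_args(goal_type, weeks=4, days_per_week=3,
                       fitness_level="beginner", **optional):
    """Build arguments for generate_training_plan with documented defaults."""
    args = {
        "goalType": goal_type,
        "weeks": max(2, min(weeks, 4)),  # sample plans are 2-4 weeks
        "daysPerWeek": days_per_week,
        "fitnessLevel": fitness_level,
    }
    # Pass through any remaining schema fields:
    # notes, indoorTrainer, targetDistanceKm.
    args.update(optional)
    return args

# A request for 8 weeks is clamped to the 4-week sample cap.
print(training_plan_args("complete a 50km sportive", weeks=8,
                         indoorTrainer=True))
```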
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses that this generates a 'sample plan' and mentions the full app's extended features, but lacks details on permissions, rate limits, or what the output looks like. It adds some behavioral context but leaves gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose and efficiently adds context about sample vs. full plans in two sentences. It avoids redundancy, though the second sentence could be slightly more concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 7-parameter tool with no annotations and no output schema, the description provides adequate purpose and context but lacks details on behavioral traits and output format. It's minimally viable but has clear gaps in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 7 parameters. The description doesn't add specific parameter details beyond implying tailoring to goals, fitness, and days, which is already covered in schema descriptions. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool generates a cycling training plan using the Etapa API, specifying it's tailored to rider goals, fitness level, and available days. It distinguishes from siblings by focusing on plan generation rather than coaching, guides, or reviews.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (generating 2-4 week sample plans) and mentions the Etapa app for longer plans, but doesn't explicitly contrast with sibling tools like 'ask_cycling_coach' or 'review_cycling_plan'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

review_cycling_plan: Review a cycling training plan (grade A)

Give the Etapa cycling coach any training plan (from another app, a book, a YouTube video, a coach, or anywhere else) and get an honest critique in four sections: what's working, what's missing or risky, what to change, and a bottom-line verdict. Use this when the rider wants a second opinion on a plan they already have. Powered by the Etapa API.

Parameters (JSON Schema)

- goal (optional): What the rider is training for. E.g. "first 100 km sportive", "commute twice a week", "get fitter".
- plan (required): The training plan as text — sessions, weeks, distances, whatever the rider has. Paste it in as-is.
- fitnessLevel (optional): Rider's current fitness level, if known.
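Because the plan is passed as free text, a call simply embeds the pasted plan in the arguments. The payload below follows the MCP `tools/call` JSON-RPC shape; the plan text, request id, and optional field values are made up for the example.

```python
import json

# A short plan pasted in as-is, exactly as the "plan" parameter expects.
plan_text = """Week 1: Mon rest, Wed 45min easy spin, Sat 90min endurance ride
Week 2: Mon rest, Wed 2x10min tempo, Sat 2h endurance ride"""

# Illustrative MCP "tools/call" request for review_cycling_plan.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "review_cycling_plan",
        "arguments": {
            "plan": plan_text,                  # required
            "goal": "first 100 km sportive",    # optional
            "fitnessLevel": "beginner",         # optional
        },
    },
}

print(json.dumps(request, indent=2))
```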
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses the four-section critique format and mentions it's 'powered by the Etapa API,' but doesn't specify behavioral traits like rate limits, authentication needs, response format, or whether the critique is saved/persisted. The description is adequate but lacks operational details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences: first explains the core functionality and four-section format, second provides usage context and API mention. Every element serves a purpose with no redundancy, making it efficient and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations and no output schema, the description does well by clarifying purpose, usage, and parameter handling. However, it doesn't detail the output structure (e.g., format of the four sections) or potential errors, leaving some gaps in operational context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents parameters. The description adds value by explaining the 'plan' parameter accepts text from various sources (apps, books, videos, coaches) and should be 'pasted in as-is,' providing practical context beyond the schema's technical specifications.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to critique a cycling training plan in four specific sections (what's working, missing/risky, what to change, bottom-line verdict). It distinguishes from siblings by specifying this is for reviewing existing plans from various sources, unlike 'generate_training_plan' (creation) or 'ask_cycling_coach' (general advice).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: 'when the rider wants a second opinion on a plan they already have.' This clearly differentiates from siblings like 'generate_training_plan' (for creating new plans) and 'ask_cycling_coach' (for general coaching questions).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
