sailing
Server Details
First dedicated sailing MCP — America's Cup, SailGP, Vendée Globe. Events, venues, teams, schedule.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 4 of 4 tools scored.
Each tool targets a distinct entity: single event, event calendar, teams, venues. No overlap in functionality; descriptions clearly differentiate them.
All tools follow a consistent 'whensport_sailing_get[Entity]' pattern, combining a snake_case prefix with a camelCase entity name in a clear verb-noun structure. Perfectly uniform.
4 tools is appropriate for a focused sailing event data server, covering the main query needs without excess or deficiency.
Covers core read operations (list events, get event, list teams, list venues), but lacks filtering or search tools. Minor gap for a data provider.
Available Tools
4 tools

whensport_sailing_getEvent — Get a single sailing event by slug (Read-only)
Get a single sailing event by slug (e.g. 'sailgp-perth', 'americas-cup-cagliari', 'tp52-puerto-portals'). Slugs are series-prefixed and bare (no year suffix). Result coverage rule applies: SailGP rounds backfilled, multi-class regattas may be null until prize-giving publishes.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | | |
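To make the slug format concrete, here is a minimal sketch of a JSON-RPC `tools/call` payload for this tool, following the standard MCP request envelope. The envelope shape comes from the MCP specification; the slug value is taken from the examples in the tool description above.

```python
import json

# Hypothetical MCP "tools/call" request for whensport_sailing_getEvent.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "whensport_sailing_getEvent",
        # Slugs are series-prefixed and bare (no year suffix),
        # e.g. 'sailgp-perth', 'americas-cup-cagliari'.
        "arguments": {"slug": "sailgp-perth"},
    },
}

print(json.dumps(payload, indent=2))
```

A slug like `sailgp-perth-2025` would violate the no-year-suffix rule and is unlikely to resolve.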
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and closed world. Description adds valuable details about slug format and data availability constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first states purpose with examples, second adds behavioral nuance. No redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers slug format and result coverage adequately. Lacks mention of error handling or return structure, but for a simple single-param tool with no output schema, it's sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Only parameter is slug; description explains format (series-prefixed, bare, no year suffix) with examples, highly informative. Schema coverage is 0%, so description fully compensates.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Get a single sailing event by slug' with specific examples. Distinguishes from sibling tools like getEvents (plural) and getTeams/getVenues.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides context on result coverage (backfilled, multi-class may be null). Does not explicitly state when to use vs getEvents, but slug-based retrieval is implicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
whensport_sailing_getEvents — Get the sailing/yachting event calendar (Read-only)
Get the sailing/yachting event calendar — America's Cup, SailGP, Vendée Globe, etc. Result coverage rule: SailGP rounds and headline regattas with same-day broadcast results are backfilled when complete. Multi-class regattas (Antigua Sailing Week, Cowes Week, Copa del Rey, etc.) may have null result until the prize-giving publishes to a canonical source — consult sailgp.com / sailingweek.com / cowesweek.co.uk / americascup.com directly for those.
| Name | Required | Description | Default |
|---|---|---|---|
| upcomingOnly | No | | |
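The result coverage rule above implies client-side handling: some calendar entries carry a populated result, while multi-class regattas may stay null until prize-giving. A minimal sketch, assuming each entry is a dict whose `result` field may be `None` (the field names here are illustrative, not a published schema):

```python
# Separate settled events from those still awaiting a canonical result.
events = [
    {"name": "SailGP Perth", "series": "sailgp", "result": {"winner": "AUS"}},
    {"name": "Cowes Week", "series": "multi-class", "result": None},
]

settled = [e for e in events if e["result"] is not None]
pending = [e["name"] for e in events if e["result"] is None]

# Per the description, pending entries should be checked against the
# organiser's own site (e.g. cowesweek.co.uk) rather than re-queried here.
```

Treating a null result as "not yet published" rather than "no result" avoids misreporting multi-class regattas.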
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already mark readOnlyHint=true. Description adds value by detailing result timing (backfilled after broadcast, null until prize-giving) beyond annotation disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two-sentence description that is front-loaded with core purpose and efficiently adds necessary context about result coverage without excess verbiage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, so return format is unclear. Parameter description missing. However, for a read-only calendar listing, the description provides sufficient behavioral context for typical use, albeit with gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has one parameter (upcomingOnly) with 0% description coverage. Description does not explain this parameter's meaning or usage, leaving the agent to infer from name alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool retrieves the sailing/yachting event calendar with specific examples (America's Cup, SailGP, etc.). Distinguished from siblings as the plural 'Events' vs singular 'getEvent' and other related tools (getTeams, getVenues).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides guidance on result coverage rules and when results may be null, directing users to external sources for definitive results. Implicitly advises when not to rely on this tool for certain regattas.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
whensport_sailing_getTeams — Get the sailing competitor list: SailGP teams, America's Cup syndicates, and offshore sailors (IMOCA) (Read-only)
Get the sailing competitor list — SailGP teams, America's Cup syndicates, and notable individual offshore sailors (IMOCA / Vendée Globe class). Each entry has a type field with value 'team' or 'individual' for explicit filtering.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
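The description notes that every entry carries a `type` field valued `'team'` or `'individual'` for explicit filtering. A minimal sketch of that filtering, assuming a list-of-dicts response (the entry names and any other fields are assumptions, not a documented schema):

```python
# Split the competitor list by the documented 'type' discriminator.
competitors = [
    {"name": "Emirates Team New Zealand", "type": "team"},
    {"name": "Charlie Dalin", "type": "individual"},
    {"name": "Australia SailGP Team", "type": "team"},
]

teams = [c for c in competitors if c["type"] == "team"]
individuals = [c for c in competitors if c["type"] == "individual"]
```

Filtering on the explicit `type` field is more robust than inferring team vs. individual from the name.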
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds value beyond annotations by revealing that each entry has a 'type' field for filtering, which is not indicated by the readOnlyHint or openWorldHint annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences, front-loaded with the main purpose, and no redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description is complete enough, covering the tool's purpose and a key attribute of the response.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters and 100% schema coverage, the description does not need to explain parameters, but it provides context on the output structure, which is sufficient.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The title and description clearly specify the tool retrieves a list of sailing competitors, explicitly naming SailGP teams, America's Cup syndicates, and IMOCA offshore sailors, and distinguishes it from sibling tools that focus on events or venues.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for listing competitors but provides no explicit guidance on when to use this tool versus alternatives, nor any exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
whensport_sailing_getVenues — Get the sailing venue list (Read-only)
Get the sailing venue list.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already mark the tool as read-only (readOnlyHint: true). The description adds no further behavioral context (e.g., rate limits, caching, availability). The bar is lowered due to annotations, but there is no extra value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence with no wasted words. Every word is necessary and contributes to clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description lacks details about the return format (e.g., fields, structure). For a list retrieval tool, this omission may hinder the agent's understanding of what data it will receive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist (0 params), so baseline is 4. The description does not need to add parameter semantics, and the schema coverage is 100% by default.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and the resource ('the sailing venue list'). It distinguishes itself from sibling tools like 'getEvents' and 'getTeams' by specifying venues.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context, or exclusions, leaving the agent to infer usage from the name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.