sailing

Server Details

First dedicated sailing MCP — America's Cup, SailGP, Vendée Globe. Events, venues, teams, schedule.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.1/5 across 4 of 4 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool targets a distinct entity: single event, event calendar, teams, venues. No overlap in functionality; descriptions clearly differentiate them.

Naming Consistency: 5/5

All tools follow a consistent 'whensport_sailing_get[Entity]' pattern: a snake_case server prefix followed by a camelCase entity, in a clear verb-noun structure. Perfectly uniform.
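The pattern described here can be checked mechanically. A minimal Python sketch; the tool list is copied from this page, while the regex is one plausible formalization of the pattern, not something the server publishes:

```python
import re

# Tool names as listed on this page.
TOOLS = [
    "whensport_sailing_getEvent",
    "whensport_sailing_getEvents",
    "whensport_sailing_getTeams",
    "whensport_sailing_getVenues",
]

# snake_case server prefix, then a camelCase "get" + Entity segment.
PATTERN = re.compile(r"^whensport_sailing_get[A-Z][A-Za-z]+$")

def entity(tool_name: str) -> str:
    """Extract the entity segment, e.g. 'Events' from '...getEvents'."""
    return tool_name.split("_get", 1)[1]

assert all(PATTERN.match(t) for t in TOOLS)
print([entity(t) for t in TOOLS])  # ['Event', 'Events', 'Teams', 'Venues']
```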

Tool Count: 5/5

4 tools is appropriate for a focused sailing event data server, covering the main query needs without excess or deficiency.

Completeness: 4/5

Covers core read operations (list events, get event, list teams, list venues), but lacks filtering or search tools. Minor gap for a data provider.

Available Tools

4 tools
whensport_sailing_getEvent (Grade: A)
Get a single sailing event by slug
Read-only

Get a single sailing event by slug (e.g. 'sailgp-perth', 'americas-cup-cagliari', 'tp52-puerto-portals'). Slugs are series-prefixed and bare (no year suffix). Result coverage rule applies: SailGP rounds backfilled, multi-class regattas may be null until prize-giving publishes.

Parameters (JSON Schema)

slug (required)
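Since the server speaks Streamable HTTP, invoking this tool is an ordinary MCP tools/call JSON-RPC request. A minimal payload sketch; the slug value comes from the examples in the description, and the request id is arbitrary:

```python
import json

def get_event_request(slug: str, request_id: int = 1) -> str:
    """Build an MCP tools/call JSON-RPC payload for whensport_sailing_getEvent."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "whensport_sailing_getEvent",
            # Slugs are series-prefixed and bare (no year suffix).
            "arguments": {"slug": slug},
        },
    }
    return json.dumps(payload)

body = get_event_request("sailgp-perth")
```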
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only and closed world. Description adds valuable details about slug format and data availability constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: first states purpose with examples, second adds behavioral nuance. No redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers slug format and result coverage adequately. Lacks mention of error handling or return structure, but for a simple single-param tool with no output schema, it's sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The only parameter is slug; the description explains its format (series-prefixed, bare, no year suffix) with examples, which is highly informative. Schema coverage is 0%, so the description fully compensates.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Get a single sailing event by slug' with specific examples. Distinguishes from sibling tools like getEvents (plural) and getTeams/getVenues.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides context on result coverage (backfilled, multi-class may be null). Does not explicitly state when to use vs getEvents, but slug-based retrieval is implicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_sailing_getEvents (Grade: A)
Get the sailing/yachting event calendar
Read-only

Get the sailing/yachting event calendar — America's Cup, SailGP, Vendée Globe, etc. Result coverage rule: SailGP rounds and headline regattas with same-day broadcast results are backfilled when complete. Multi-class regattas (Antigua Sailing Week, Cowes Week, Copa del Rey, etc.) may have null result until the prize-giving publishes to a canonical source — consult sailgp.com / sailingweek.com / cowesweek.co.uk / americascup.com directly for those.

Parameters (JSON Schema)

upcomingOnly (optional)
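Given the coverage rule in the description, a client should treat a null result as "not yet published" rather than as an error. A sketch, assuming purely for illustration that calendar entries carry 'name' and 'result' keys; the real response shape is not documented on this page:

```python
# Hypothetical calendar entries shaped the way the coverage rule describes:
# completed SailGP rounds are backfilled, multi-class regattas may be null.
events = [
    {"name": "SailGP Perth", "result": {"winner": "Australia"}},
    {"name": "Antigua Sailing Week", "result": None},
]

def published(evts):
    """Keep only events whose result has been backfilled."""
    return [e for e in evts if e.get("result") is not None]

def pending(evts):
    """Events whose results must be checked on the organiser's own site."""
    return [e["name"] for e in evts if e.get("result") is None]

assert [e["name"] for e in published(events)] == ["SailGP Perth"]
assert pending(events) == ["Antigua Sailing Week"]
```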
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already mark readOnlyHint=true. Description adds value by detailing result timing (backfilled after broadcast, null until prize-giving) beyond annotation disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two-sentence description that is front-loaded with core purpose and efficiently adds necessary context about result coverage without excess verbiage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, so return format is unclear. Parameter description missing. However, for a read-only calendar listing, the description provides sufficient behavioral context for typical use, albeit with gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has one parameter (upcomingOnly) with 0% description coverage. Description does not explain this parameter's meaning or usage, leaving the agent to infer from name alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states that the tool retrieves the sailing/yachting event calendar, with specific examples (America's Cup, SailGP, etc.). Distinguished from siblings by the plural 'getEvents' vs the singular 'getEvent', and from the other related tools (getTeams, getVenues).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides guidance on result coverage rules and when results may be null, directing users to external sources for definitive results. Implicitly advises when not to rely on this tool for certain regattas.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_sailing_getTeams (Grade: A)
Get the sailing competitor list — SailGP teams, America's Cup syndicates, and offshore sailors (IMOCA)
Read-only

Get the sailing competitor list — SailGP teams, America's Cup syndicates, and notable individual offshore sailors (IMOCA / Vendée Globe class). Each entry has a type field with value 'team' or 'individual' for explicit filtering.

Parameters (JSON Schema)

No parameters
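The 'type' field called out in the description makes filtering explicit. A sketch, assuming entries carry 'name' and 'type' keys; the competitor names here are illustrative, not taken from the server:

```python
# Hypothetical competitor entries; the 'type' field and its two values
# ('team' / 'individual') are documented by the tool description.
competitors = [
    {"name": "Emirates Team New Zealand", "type": "team"},
    {"name": "Australia SailGP Team", "type": "team"},
    {"name": "Charlie Dalin", "type": "individual"},
]

def by_type(entries, kind):
    """Split the competitor list on the documented 'team'/'individual' values."""
    return [e["name"] for e in entries if e["type"] == kind]

teams = by_type(competitors, "team")
sailors = by_type(competitors, "individual")
```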

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds value beyond annotations by revealing that each entry has a 'type' field for filtering, which is not indicated by the readOnlyHint or openWorldHint annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences, front-loaded with the main purpose, and no redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters and no output schema, the description is complete enough, covering the tool's purpose and a key attribute of the response.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero parameters and 100% schema coverage, the description does not need to explain parameters, but it provides context on the output structure, which is sufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The title and description clearly specify the tool retrieves a list of sailing competitors, explicitly naming SailGP teams, America's Cup syndicates, and IMOCA offshore sailors, and distinguishes it from sibling tools that focus on events or venues.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for listing competitors but provides no explicit guidance on when to use this tool versus alternatives, nor any exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_sailing_getVenues (Grade: A)
Get the sailing venue list
Read-only

Get the sailing venue list.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already mark the tool as read-only (readOnlyHint: true). The description adds no further behavioral context (e.g., rate limits, caching, availability). The bar is lowered due to annotations, but there is no extra value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence with no wasted words. Every word is necessary and contributes to clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description lacks details about the return format (e.g., fields, structure). For a list retrieval tool, this omission may hinder the agent's understanding of what data it will receive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist (0 params), so baseline is 4. The description does not need to add parameter semantics, and the schema coverage is 100% by default.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and the resource ('the sailing venue list'). It distinguishes itself from sibling tools like 'getEvents' and 'getTeams' by specifying venues.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context, or exclusions, leaving the agent to infer usage from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
