Server Details

10-sport calendar MCP — football, F1, tennis, cricket, rugby, golf, polo, sailing, horse racing.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.2/5 across 48 of 48 tools scored. Lowest: 2.9/5.

Server Coherence: A
Disambiguation: 5/5

Each tool is prefixed with the sport name (e.g., whensport_cricket_, whensport_f1_), making them easily distinguishable even though similar patterns appear across sports. No two tools share the same purpose.

Naming Consistency: 5/5

All tools follow the strict pattern whensport_<sport>_<action> with consistent camelCase action names (e.g., getMatch, getTournaments). No deviations or mixed conventions.

Tool Count: 4/5

48 tools is high but justified by covering 9 distinct sports, each with 3-7 focused tools. The count is not excessive given the broad scope, though a few sports could be streamlined.

Completeness: 4/5

Provides core schedule-related operations for each sport (matches, tournaments, teams/players, venues), with some sports like TDF having deeper stage tools. Minor gaps exist (e.g., tennis lacks Masters events, sailing lacks some regattas), but the stated schedule focus is well served.

Available Tools

48 tools
whensport_cricket_getMatch: Get a single cricket match by code (A)
Read-only

Get a single cricket match by match code (e.g. "t20wc-1" for T20 World Cup match 1, "ipl-2026-1" for IPL match 1). Code is in the match / matchCode field of getMatches output. Note: this MCP is schedule-focused; score/result on completed matches may be null pending ingestion — consult espncricinfo.com for confirmed scorecards.

Parameters (JSON Schema)

matchCode (required): Match identifier — value of the `match` field in getMatches output (e.g. t20wc-1).
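The two-step flow implied above (look up a code via getMatches, then fetch the single match) can be sketched with a stubbed client. `call_tool`, the argument shapes, and the sample data are illustrative assumptions, not the server's actual wire format.

```python
# Illustrative two-step lookup: take a match code from getMatches output,
# then fetch that single match. `call_tool` is a stub standing in for a
# real MCP client call; the sample data is invented.

FAKE_MATCHES = {
    "t20wc-1": {"match": "t20wc-1", "status": "completed", "score": None},
}

def call_tool(name, args):
    if name == "whensport_cricket_getMatches":
        return {"matches": list(FAKE_MATCHES.values())}
    if name == "whensport_cricket_getMatch":
        return FAKE_MATCHES[args["matchCode"]]

first = call_tool("whensport_cricket_getMatches", {"tournament": "t20wc"})["matches"][0]
match = call_tool("whensport_cricket_getMatch", {"matchCode": first["match"]})

# Schedule-focused server: a null score on a completed match means
# "pending ingestion", not "no result".
if match["status"] == "completed" and match["score"] is None:
    print("score pending ingestion; check espncricinfo.com")
```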
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already mark readOnlyHint=true; description adds that scores/result may be null pending ingestion, which is crucial behavioral context beyond annotations. No contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences: purpose with examples, source of match code, and important caveat about null scores. No superfluous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description covers purpose, input source, and a behavioral caveat. Could mention expected fields like teams/venue/date, but for a single-param read-only tool, it's fairly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with param description. Description adds practical examples and explains how to obtain the code (from getMatches output), significantly enhancing semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Get a single cricket match by match code' with concrete examples (t20wc-1, ipl-2026-1). Distinguishes from sibling getMatches by noting that match codes come from getMatches output and that it is schedule-focused.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implicitly guides usage by stating that the match code comes from getMatches output, hinting at a two-step process. Also advises consulting espncricinfo.com for confirmed scorecards when null scores are not acceptable. Does not explicitly compare to sibling whensport_getMatch.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_cricket_getMatches: Get cricket matches, optionally filtered by tournament or team (A)
Read-only

Get cricket matches. Returns date, venue, format, status. Filtering is strongly recommended — the unfiltered match set is large. Default limit is 50 (max 200); when more matches exist the response includes truncated: true, moreCount, and nextOffset. Page through results by passing the response's nextOffset back as offset on the next call.

Parameters (JSON Schema)

team (optional): Team abbreviation e.g. 'IND', 'AUS', 'ENG'.
limit (optional): Maximum matches to return. Default 50, max 200.
offset (optional): Skip the first N matches; use the response's `nextOffset` for paging.
tournament (optional): Tournament slug e.g. 't20-world-cup-2026', 'ipl-2026'. Strongly recommended.
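The limit/offset/nextOffset contract described above lends itself to a simple client-side paging loop. A minimal sketch, assuming a `call_tool` stub in place of a real MCP client and invented match data:

```python
# Paging through whensport_cricket_getMatches via the truncated/nextOffset
# contract. `call_tool` fakes the server with 120 invented matches.

def call_tool(name, args):
    matches = [{"match": f"demo-{i}"} for i in range(120)]  # stub data
    offset = args.get("offset", 0)
    limit = min(args.get("limit", 50), 200)  # default 50, max 200
    page = matches[offset:offset + limit]
    remaining = len(matches) - (offset + len(page))
    out = {"matches": page}
    if remaining > 0:  # more matches exist beyond this page
        out.update(truncated=True, moreCount=remaining,
                   nextOffset=offset + len(page))
    return out

def all_matches(**filters):
    """Accumulate every page by feeding nextOffset back as offset."""
    results, offset = [], 0
    while True:
        resp = call_tool("whensport_cricket_getMatches",
                         {**filters, "limit": 50, "offset": offset})
        results.extend(resp["matches"])
        if not resp.get("truncated"):
            return results
        offset = resp["nextOffset"]

print(len(all_matches(tournament="ipl-2026")))  # 120 with this stub
```

Note the stub ignores the filters; a real call would pass `tournament` or `team` through to the server.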
Behavior: 5/5

Beyond readOnlyHint annotation, describes return fields (date, venue, format, status) and pagination behavior including truncated flag, moreCount, and nextOffset for paging.

Conciseness: 5/5

Two sentences: first clearly states purpose, second adds essential usage guidance and pagination details. No wasted words.

Completeness: 4/5

For a tool with no output schema, describes key return fields and pagination. Lacks details on sorting or default ordering, but is otherwise complete for agent usage.

Parameters: 4/5

Schema coverage is 100%, so baseline is 3. Description adds value by recommending filtering and explaining pagination pattern, which aids in effective parameter usage.

Purpose: 5/5

The description clearly states it retrieves cricket matches with optional filtering by tournament or team, distinguishing it from siblings like whensport_cricket_getMatch which returns a single match.

Usage Guidelines: 4/5

Provides explicit recommendations for filtering due to large dataset and details pagination mechanics (limit, offset, nextOffset). Lacks explicit contrast with sibling tools beyond the name.

whensport_cricket_getPlayers: Get the cricket player roster (A)
Read-only

Get the curated marquee cricket player roster covered by whensport. Not a comprehensive roster — focused on high-profile players for triggering / disambiguation.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Annotations already indicate readOnlyHint=true. The description adds valuable context: the roster is curated and not exhaustive, which clarifies the tool's scope and limitations beyond the annotations.

Conciseness: 5/5

The description is a single sentence that front-loads the purpose and scope. Every word adds value, with no redundancy or fluff.

Completeness: 4/5

For a parameterless tool with safety annotations, the description adequately explains what the tool returns and its limitations. However, it omits details about the output format, which could be helpful given no output schema.

Parameters: 4/5

There are no parameters, so baseline is 4. The description does not need to add parameter details, and it succinctly explains the tool's purpose without relying on parameter info.

Purpose: 5/5

The description clearly uses the verb 'Get' and specifies the resource 'curated marquee cricket player roster'. It distinguishes itself from a comprehensive roster, aligning with sibling tools for other sports.

Usage Guidelines: 4/5

It states the roster is 'not comprehensive' and 'focused on high-profile players for triggering / disambiguation', giving implicit guidance on when to use it. However, it does not explicitly exclude other scenarios or mention alternatives among siblings.

whensport_cricket_getTeams (A)
Read-only

Get the cricket teams in scope, optionally scoped to a tournament. Each team has a code (e.g. "IND", "AUS", "NZ", "WI", "SA") and full name. Use this to enumerate valid team values for cricket_getMatches — codes vary by sport (cricket uses 2-3 letter ISO-style codes; rugby uses different forms).

Parameters (JSON Schema)

tournament (optional): Optional tournament slug to scope the team list (e.g. "t20-world-cup-2026", "ipl-2026"). When omitted, returns every team that appears in any tracked match.
Behavior: 4/5

Annotations already declare readOnlyHint=true. The description adds value by explaining the return structure (code and full name) and noting the code format difference from rugby, which aids the agent in understanding the output's nature.

Conciseness: 5/5

The description is three sentences, each earning its place: first sentence states main purpose, second provides example codes, third explains downstream use. No wasted words, front-loaded with key information.

Completeness: 4/5

Given the tool's complexity (simple, one optional parameter) and the presence of many sibling tools, the description sufficiently completes the picture. It explains output format and use case, though it could mention pagination or ordering if applicable.

Parameters: 4/5

Schema description coverage is 100%, but the description adds useful context: the parameter is an optional tournament slug, explains its effect when provided, and what happens when omitted. This enhances the agent's understanding beyond the schema.

Purpose: 5/5

The description clearly states the tool gets cricket teams, optionally scoped to a tournament. It distinguishes from sibling tools by specifying that team codes vary by sport (e.g., cricket uses ISO-style codes) and that this tool provides valid 'team' values for cricket_getMatches.

Usage Guidelines: 4/5

The description explains when to use: to enumerate valid team values for cricket_getMatches, and provides context that codes differ by sport. It implicitly tells the agent not to use this for other sports' tools, though it does not explicitly list alternatives.

whensport_cricket_getTournament: Get a single cricket tournament by slug (A)
Read-only

Get a single cricket tournament by slug (e.g. 't20-world-cup-2026').

Parameters (JSON Schema)

slug (required)
Behavior: 3/5

Annotations (readOnlyHint=true) indicate it's a safe read operation. The description adds a useful example slug format but no additional behavioral details such as authentication needs, rate limits, or potential errors. With annotations present, the description meets a baseline but doesn't enhance transparency.

Conciseness: 5/5

The description is a single, complete sentence with no wasted words. It is front-loaded with the verb and resource, followed by the parameter usage example. Every word serves a purpose.

Completeness: 4/5

Given the tool is a simple lookup with one parameter, read-only annotation, and no output schema, the description adequately covers the core functionality. It tells the agent what the tool does and how to call it. It omits return value details, but for a straightforward retrieval tool this is acceptable.

Parameters: 3/5

The input schema's single 'slug' parameter has 0% description coverage, but the description provides an example ('t20-world-cup-2026') that clarifies the expected format. However, it does not explain the purpose or constraints of the slug (e.g., uniqueness, required length). The example partially compensates for missing schema documentation.

Purpose: 5/5

Description clearly states the action (Get), the resource (a single cricket tournament), and the method (by slug). Distinguishes itself from sibling tools like getTournaments (plural) and getMatch (singular match) by specifying it returns one tournament by a unique slug.

Usage Guidelines: 3/5

The description implies usage when a specific tournament slug is known, but does not explicitly state when to prefer this over alternatives like getTournaments or getMatch. No guidance on when not to use it or context about prerequisites.

whensport_cricket_getTournaments: Get the cricket tournament list (A)
Read-only

Get the cricket tournament list — T20 World Cup 2026 + tour series. Optionally filter to upcoming.

Parameters (JSON Schema)

upcomingOnly (optional): Filter to tournaments not yet started.
Behavior: 3/5

Annotations already provide readOnlyHint=true; description adds the list nature and example tournaments but no additional behavioral details like auth requirements or edge cases.

Conciseness: 5/5

A short, dash-separated description. No filler; every word contributes.

Completeness: 4/5

Covers purpose, scope, and optional filter. Missing info on return format or default behavior, but adequate for a simple list tool with read-only annotation and no output schema.

Parameters: 3/5

Parameter 'upcomingOnly' is fully described in schema (100% coverage). Description adds 'upcoming' synonym but no new meaning beyond schema.

Purpose: 5/5

Clearly states verb 'Get', resource 'cricket tournament list', and provides concrete examples (T20 World Cup 2026 + tour series). Distinguishes from sibling getTournament (singular) by listing the set.

Usage Guidelines: 3/5

Implies usage for listing tournaments with optional upcoming filter, but does not explicitly compare to sibling tools like whensport_cricket_getTournament (singular) or state when not to use.

whensport_f1_getDrivers: Get the Formula 1 driver roster (A)
Read-only

Get the F1 driver list (name, team, number, country).

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Annotations already declare readOnlyHint=true. Description adds return fields but no further behavioral traits. Minimal but consistent.

Conciseness: 5/5

Single, clear sentence with essential information. No wasted words.

Completeness: 5/5

For a simple no-parameter list tool, description fully covers purpose and output. Annotations and schema provide additional context.

Parameters: 4/5

No parameters; schema coverage 100%. Baseline 4 for a zero-parameter tool.

Purpose: 5/5

Clear verb 'Get' and resource 'F1 driver list' with explicit fields (name, team, number, country). Distinguishes from sibling tools like whensport_f1_getTeams.

Usage Guidelines: 3/5

No explicit when-to-use or alternatives, but having no parameters implies simple retrieval. Context makes usage obvious, but explicit guidance is lacking.

whensport_f1_getNextRace: Get the next upcoming Formula 1 race (A)
Read-only

Get the next upcoming F1 race relative to today.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Annotations already declare readOnlyHint=true, so description adds minimal behavioral context. The phrase 'relative to today' indicates time sensitivity, but no further disclosure.

Conciseness: 5/5

Single, short sentence that is maximally efficient with no unnecessary words. Front-loaded with the key action and resource.

Completeness: 5/5

For a tool with zero parameters and no output schema, the description is fully complete: it tells the agent exactly what the tool does and what it returns (the next race).

Parameters: 4/5

No parameters in input schema, so description does not need to explain them. The schema coverage is 100% (0 params), and the description adds value by clarifying the temporal nature.

Purpose: 5/5

The description clearly specifies the verb 'Get' and the resource 'next upcoming F1 race relative to today', distinguishing it from sibling tools that get a specific race or a list of races.

Usage Guidelines: 2/5

No explicit guidance on when to use this tool versus alternatives like whensport_f1_getRace or whensport_f1_getRaces. The context is implied by the name and siblings, but not stated.

whensport_f1_getRace: Get a single Formula 1 race by slug or round (A)
Read-only

Get a single F1 race by slug (e.g. 'miami', 'monaco', 'great-britain', 'abu-dhabi') or by round number. Slugs are country/host names — Silverstone's race is 'great-britain', not 'silverstone' (silverstone is the venueSlug). Cancelled races are also queryable: 'bahrain' and 'saudi-arabia' return status="cancelled" with cancellationReason set.

Parameters (JSON Schema)

slug (optional): Race slug — country/host name, bare, no year suffix (e.g. monaco, great-britain, abu-dhabi). Cancelled-race slugs (bahrain, saudi-arabia) also resolve.
round (optional): Race round number 1-22 (rounds are renumbered after the 2026 cancellations; cancelled races have no current round but expose originalRound for reference).
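The slug/round rules above (one key or the other, rounds 1-22) can be captured in a small argument builder. This is a hedged sketch of client-side validation, not server behavior; the `race_args` helper name is invented.

```python
# Hypothetical helper: build the argument dict for whensport_f1_getRace
# from either a slug or a round number, per the parameter notes above.

def race_args(slug=None, round_no=None):
    if (slug is None) == (round_no is None):
        raise ValueError("pass exactly one of slug or round")
    if slug is not None:
        # Slugs are country/host names, bare, with no year suffix.
        return {"slug": slug}
    if not 1 <= round_no <= 22:
        raise ValueError("round must be 1-22")
    return {"round": round_no}

print(race_args(slug="great-britain"))  # {'slug': 'great-britain'}
```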
Behavior: 4/5

Annotations already provide readOnlyHint=true, so the agent knows it is a read operation. The description adds valuable behavioral context: slug is country/host name not venueSlug (e.g., 'great-britain' not 'silverstone'), and cancelled-race slugs work and return cancellationReason. This goes beyond annotations.

Conciseness: 5/5

The description is a single, well-structured paragraph of about 80 words. It is front-loaded with the main purpose, followed by examples and caveats. Every sentence adds value with no redundancy.

Completeness: 4/5

Having no output schema, the description does not detail the return format. However, given the readOnlyHint and simplicity of a single-race get operation, the agent can likely infer it. The description covers key usage aspects (slugs, rounds, cancelled races) comprehensively. A small gap remains on response structure, but overall it is quite complete.

Parameters: 5/5

Schema coverage is 100%, but the description adds significant meaning: for slug, it gives examples and distinguishes from venueSlug; for round, it explains round renumbering after 2026 and the cancelled races' originalRound. This strongly enhances the agent's understanding beyond the bare schema.

Purpose: 5/5

The description clearly states 'Get a single F1 race by slug or round,' specifying the resource and the two alternative keys. It distinguishes from the sibling 'whensport_f1_getRaces' (plural) and provides concrete examples of slugs (e.g., 'miami', 'great-britain').

Usage Guidelines: 5/5

The description explicitly indicates how to use the tool (by slug or round) and offers guidance on cancelled races, including that they are queryable and return status='cancelled'. It implicitly differentiates from the plural sibling by indicating singular retrieval.

whensport_f1_getRaces: Get the Formula 1 race calendar (A)
Read-only

Get the F1 race calendar — every grand prix with date, circuit, round, sprint flag, and local kick-off in IANA timezone. Cancelled races (e.g. Bahrain, Saudi Arabia) are included with status="cancelled" and a cancellationReason; their date/round fields are empty since the events did not take place. Use upcomingOnly to filter to forthcoming active races. Note: this MCP is schedule-focused; result (podium/winner) on finished races is populated as ingestion catches up — consumers should treat null as "not yet ingested" and consult fia.com / formula1.com for confirmed results.

Parameters (JSON Schema)

upcomingOnly (optional): If true, return only races that have not yet happened. Cancelled races are excluded from this filter. Default false.
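The notes above give consumers two special cases to handle: cancelled races (status plus cancellationReason) and null results that mean "not yet ingested". A minimal sketch, with invented sample data standing in for real tool output:

```python
# Defensive consumption of whensport_f1_getRaces output. Race dicts are
# invented samples matching the fields described above.

def summarize(race):
    if race.get("status") == "cancelled":
        return f"{race['slug']}: cancelled ({race['cancellationReason']})"
    if race.get("result") is None:
        # Null result = not yet ingested, per the tool description.
        return f"{race['slug']}: result not yet ingested (see formula1.com)"
    return f"{race['slug']}: winner {race['result']['winner']}"

races = [
    {"slug": "bahrain", "status": "cancelled", "cancellationReason": "example"},
    {"slug": "monaco", "status": "finished", "result": None},
]
for r in races:
    print(summarize(r))
```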
Behavior5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations only indicate readOnlyHint=true. Description adds critical behavior: cancelled races included with status/cancellationReason, and result population lag. No contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the main purpose, no unnecessary words. Each sentence adds distinct value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read tool with one optional param and no output schema, the description fully explains behavior, edge cases (cancelled), and data quality notes. No gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (only one param `upcomingOnly`). Description adds extra meaning: cancelled races are excluded from the filter. Falls short of 5 because it repeats default behavior from schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb+resource: 'Get the F1 race calendar' with specific details (date, circuit, round, sprint flag, local kick-off). Distinguishes from siblings (getRace, getNextRace) by focusing on the full calendar.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly describes when to use `upcomingOnly` and notes that null results mean not yet ingested, advising to consult official sources. Lacks explicit when-not-to-use, but context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
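The upcomingOnly semantics described above (cancelled races carry status="cancelled" with empty date/round fields and are always excluded from the filter) can be sketched client-side. The race dicts and field names below are assumptions for illustration, not the server's actual response schema:

```python
from datetime import date

def filter_races(races, upcoming_only=False, today=None):
    """Sketch of the described upcomingOnly behavior.

    Cancelled races (status='cancelled', empty date/round) are listed
    by default but dropped whenever upcoming_only is set; remaining
    races are kept only if their date is today or later.
    """
    today = today or date.today()
    if not upcoming_only:
        return races  # full calendar, cancelled races included
    return [
        r for r in races
        if r.get("status") != "cancelled"          # checked first: cancelled rows have no date
        and date.fromisoformat(r["date"]) >= today
    ]

calendar = [
    {"round": 1, "date": "2026-03-08", "status": "scheduled"},
    {"round": None, "date": "", "status": "cancelled"},
    {"round": 2, "date": "2026-03-15", "status": "scheduled"},
]
upcoming = filter_races(calendar, upcoming_only=True, today=date(2026, 3, 10))
# only round 2 survives: round 1 is in the past, the cancelled race is excluded
```

The status check runs before the date parse so the empty date on cancelled rows is never touched.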

whensport_f1_getTeams: Get the Formula 1 constructor list (A)
Read-only

Get the F1 team list (constructors).

Parameters
No parameters.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description aligns with the annotations (readOnlyHint=true) by indicating a read operation. It adds no further behavioral context beyond what the annotations already convey, which is acceptable for a simple list retrieval.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no superfluous words. It is concise and front-loaded.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description is minimal but adequate for a parameterless tool returning a list. It lacks details about the return structure (e.g., the fields in each team object), which could help the agent, but the absence of an output schema reduces the burden.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters and schema coverage is 100%. The baseline score for 0 parameters is 4, as no parameter information is needed.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Get the F1 team list (constructors),' using a specific verb and resource. This clearly distinguishes it from sibling F1 tools like getDrivers and getRaces.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives is provided. However, given that the tool has no parameters and performs simple data retrieval, the usage is implied and straightforward.

whensport_findToday: Find FIFA World Cup matches happening today in a timezone (A)
Read-only

Find FIFA World Cup 2026 matches happening today in a given timezone. Returns matches whose local-time date matches today.

Parameters
- timezone (required): IANA timezone to evaluate 'today' in. Example: 'America/New_York'.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds the behavioral detail that matches are returned based on their local-time date matching today, which goes beyond the readOnlyHint annotation and clarifies the matching logic. No contradictions with the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose. Each sentence is necessary and concise. No wasted words.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple tool (one parameter, read-only, no output schema), the description is complete. It explains the input, output logic, and purpose adequately.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description reinforces the timezone context but does not add new semantics beyond the schema's IANA description and example. Minimal added value.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool finds FIFA World Cup matches happening today, specifying the verb ('Find'), resource ('FIFA World Cup matches'), and scope ('today in a given timezone'). It distinguishes itself from sibling tools by focusing on the World Cup and today's date.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for today's matches but does not explicitly state when not to use it or suggest alternatives among sibling tools like whensport_getCountrySchedule or whensport_getCurrentlyLive. Usage context is clear but lacks exclusions.
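The local-date matching logic described above can be sketched with Python's zoneinfo. The fixture shape and the kickoff_utc field are illustrative assumptions, not the server's actual schema:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def matches_today(matches, tz_name, now=None):
    """Sketch: keep matches whose kick-off, viewed in tz_name,
    falls on today's date in that same timezone."""
    tz = ZoneInfo(tz_name)
    now = now or datetime.now(timezone.utc)
    today_local = now.astimezone(tz).date()
    return [
        m for m in matches
        if m["kickoff_utc"].astimezone(tz).date() == today_local
    ]

fixtures = [
    {"match": 1, "kickoff_utc": datetime(2026, 6, 12, 2, 0, tzinfo=timezone.utc)},
    {"match": 2, "kickoff_utc": datetime(2026, 6, 12, 23, 0, tzinfo=timezone.utc)},
]
now = datetime(2026, 6, 11, 23, 30, tzinfo=timezone.utc)  # late evening UTC
# In New York (UTC-4 in June) it is still June 11, and match 1 kicks off
# at 22:00 local on June 11, so only it counts as "today" there.
print([m["match"] for m in matches_today(fixtures, "America/New_York", now=now)])  # prints [1]
```

This is why the timezone parameter is required: the same instant can fall on different calendar dates in different zones.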

whensport_getCountryList: List FIFA World Cup country slugs (A)
Read-only

List the country slugs available for getCountrySchedule, with each country's display name and primary timezone.

Parameters
No parameters.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true. The description adds context by specifying the returned data (display name, timezone) beyond the annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loads purpose, no unnecessary words.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

A simple tool with no parameters; the description adequately covers purpose and output. It could mention that it returns a list, but this is implied.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters, so schema coverage is perfect. The description adds no parameter info, but none is needed. The baseline for 0 parameters is 4.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it 'list[s] the country slugs available for getCountrySchedule' with specific details (display name, primary timezone). It differentiates itself from sibling tools by linking to getCountrySchedule.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implicitly indicates usage: obtain slugs before calling getCountrySchedule. No explicit exclusions or alternatives, but sufficient for a simple list tool.

whensport_getCountrySchedule: Get the FIFA World Cup schedule for a country (A)
Read-only

Get the FIFA World Cup 2026 schedule (all 104 matches) converted to a country's primary local timezone. Returns matches sorted chronologically with team names, kick-off date+time in local zone, venue, and round.

Parameters
- limit (optional): Max matches to return. Default 104 (full schedule). Use lower values for context-conscious calls.
- country (required): Country slug. Examples: 'usa', 'japan', 'brazil', 'argentina', 'england', 'scotland', 'germany', 'france', 'south-korea', 'india', 'czech-republic', 'turkey'. Use getCountryList for the full set.
- timezone (optional): IANA timezone override (e.g. 'America/Los_Angeles'). If omitted, uses the country's primary timezone.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=true, and the description adds behavioral context: timezone conversion, chronological sorting, and the returned fields. It does not contradict the annotations and provides useful detail beyond the structured data.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose, no extraneous words. Efficient and clear structure.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately describes the return (sorted matches with team names, date/time, venue, round). It also specifies the total match count (104). This is sufficient for an agent to understand the tool's output.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds value by explaining the timezone conversion and default limit context, which complements the parameter descriptions. However, it does not introduce new parameter semantics beyond what is in the schema.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves the FIFA World Cup 2026 schedule for a country, localizes times, and returns sorted matches with details. It uses a specific verb and resource, and is well-distinguished from sibling tools like whensport_getMatch (single match) and whensport_getTeamMatches (team-focused).

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for a country's schedule and mentions 'all 104 matches', but lacks explicit guidance on when to use this tool versus alternatives (e.g., whensport_getMatch or whensport_getTeamMatches). No 'when not to use' or direct sibling comparisons.
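The parameter fallbacks above (limit defaulting to the full 104-match schedule, timezone falling back to the country's primary zone) can be sketched as a small argument builder. The helper and the PRIMARY_TZ subset are hypothetical; getCountryList is the authoritative source for slugs and timezones:

```python
PRIMARY_TZ = {  # illustrative subset only; getCountryList returns the real mapping
    "usa": "America/New_York",
    "japan": "Asia/Tokyo",
    "brazil": "America/Sao_Paulo",
}

def build_schedule_args(country, timezone=None, limit=104):
    """Assemble arguments for a whensport_getCountrySchedule call.

    timezone=None falls back to the country's primary timezone;
    limit defaults to 104, the full 2026 schedule.
    """
    if country not in PRIMARY_TZ:
        raise ValueError(f"unknown country slug: {country!r}; call getCountryList")
    return {
        "country": country,
        "timezone": timezone or PRIMARY_TZ[country],
        "limit": limit,
    }

print(build_schedule_args("japan"))
print(build_schedule_args("usa", timezone="America/Los_Angeles", limit=10))
```

An explicit timezone always wins over the country default, so a USA schedule can still be rendered in Pacific time.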

whensport_getCurrentlyLive: Get currently-live FIFA World Cup matches (A)
Read-only

Return any FIFA World Cup matches currently in progress (kicked off in the last ~110 minutes, before final whistle).

Parameters
No parameters.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only and not open-world. The description adds meaningful behavioral context beyond the annotations: the time window for 'in progress' and the cut-off at the final whistle.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with key information, no wasted words. Highly efficient.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description sufficiently explains what the tool returns. It lacks detail on the return structure but is adequate for initial selection.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero parameters, schema coverage is 100%. The description fully states the purpose, and no parameter details are needed. The baseline is 4.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The title and description clearly specify the verb 'Return' and the resource 'FIFA World Cup matches currently in progress', with a precise time window ('kicked off in the last ~110 minutes'), distinguishing the tool from siblings like whensport_getMatch.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for live matches but does not explicitly state when to use or not use this tool compared to alternatives, nor provide exclusion criteria.
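The ~110-minute live window described above can be sketched as a simple predicate. The match shape and the finished flag are illustrative assumptions, not the server's actual schema:

```python
from datetime import datetime, timedelta, timezone

LIVE_WINDOW = timedelta(minutes=110)  # approximate, per the tool description

def currently_live(matches, now=None):
    """Sketch: a match is 'in progress' if it kicked off within the
    last ~110 minutes and has not been marked finished."""
    now = now or datetime.now(timezone.utc)
    return [
        m for m in matches
        if not m.get("finished")
        and timedelta(0) <= now - m["kickoff_utc"] <= LIVE_WINDOW
    ]

now = datetime(2026, 6, 12, 15, 0, tzinfo=timezone.utc)
games = [
    {"match": 1, "kickoff_utc": now - timedelta(minutes=30)},   # live
    {"match": 2, "kickoff_utc": now - timedelta(minutes=180)},  # over
    {"match": 3, "kickoff_utc": now + timedelta(hours=2)},      # not started
]
print([m["match"] for m in currently_live(games, now=now)])  # prints [1]
```

The lower bound excludes matches that have not kicked off yet; the upper bound approximates a full match plus stoppage time.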

whensport_getMatch: Get a single FIFA World Cup match by number (A)
Read-only

Get details of a single FIFA World Cup match by its match number (1-104).

Parameters
- number (required): Match number (1-104).
- timezone (optional): IANA timezone for kick-off conversion. Defaults to UTC.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, so the description adds limited value beyond confirming read-only behavior. No contradictions, but no additional behavioral details beyond the match number range.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, front-loaded sentence that is concise and contains no wasted words. Every word contributes to understanding.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple lookup tool with strong annotations and a full schema, the description is mostly complete. It could mention the return value format, but that is not essential given there is no output schema.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the description adds no new meaning beyond what the parameters already convey. The number range is repeated, and the timezone is covered in the schema.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The title and description clearly state it gets a single FIFA World Cup match by match number, with the range 1-104. It distinguishes itself from sibling tools like whensport_cricket_getMatch by specifying the FIFA World Cup.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives or when not to use it. While the name implies the FIFA World Cup context, the lack of when/when-not instructions is a gap.
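The constraints above (match number in 1-104, timezone defaulting to UTC) can be sketched as client-side validation. The helper is hypothetical, not part of the server:

```python
def build_match_args(number, timezone=None):
    """Validate arguments for a single-match lookup: match numbers
    run 1-104 and kick-off times default to UTC when no timezone is given."""
    if not (1 <= number <= 104):
        raise ValueError(f"match number must be in 1-104, got {number}")
    return {"number": number, "timezone": timezone or "UTC"}

print(build_match_args(104))
print(build_match_args(1, timezone="Asia/Tokyo"))
```

Validating the range before the call avoids a round trip for an out-of-range number.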

whensport_getTeamMatches: Get a team's FIFA World Cup matches (A)
Read-only

Get a specific team's FIFA World Cup 2026 matches. Returns the team's group-stage fixtures and any knockout fixtures with kick-off times.

Parameters
- team (required): Team slug. Examples: 'brazil', 'argentina', 'japan', 'united-states', 'england', 'scotland', 'germany', 'france'.
- timezone (optional): IANA timezone for kick-off conversion (e.g. 'Europe/London'). Defaults to UTC.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Goes beyond the annotations by specifying that the return includes group-stage and knockout fixtures with kick-off times, but lacks detail on scope (e.g., all matches or only upcoming).

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences, front-loaded with purpose and return value; no unnecessary words.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple retrieval tool with no output schema, it adequately describes the returned content, though it could state whether all matches or only future ones are included.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema's parameter descriptions already cover the semantics, and the main description adds minimal extra value beyond the examples for the team slug.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'get' with the specific resource 'team's FIFA World Cup 2026 matches' and details on which fixture types are included, which distinguishes it from other sport tools.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Usage for World Cup team matches is implied, but there is no explicit guidance on when to use this versus siblings like whensport_getMatch or whensport_getCountrySchedule.

whensport_golf_getNextMajor: Get the next upcoming golf Major (A)
Read-only

Get the next upcoming Major (Masters / PGA / US Open / The Open).

Parameters
No parameters.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, so the description needs little behavioral context. It adds that the tool returns the next upcoming Major among the listed ones, which is adequate.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence with no wasted words. It efficiently conveys the tool's purpose.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (no params, simple retrieval), the description is complete. It specifies what is returned and which events are considered Majors. No output schema exists, but the return is straightforward.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, and schema description coverage is 100%. With no parameters, the baseline is 4, and the description does not need to add parameter details.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it gets the next upcoming golf Major and lists the four specific tournaments (Masters, PGA, US Open, The Open). This distinguishes it from sibling tools that get tournaments or lists.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not explicitly state when to use this tool versus alternatives. It implies usage for the next Major, but lacks exclusion criteria or guidance on when not to use it.

whensport_golf_getPlayers: Get the golf player roster (A)
Read-only

Get the golf player roster covered by whensport.

Parameters
No parameters.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, so the read-only nature is known. The description adds 'covered by whensport' but provides no additional behavioral traits such as dataset scope or limits.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no wasted words, efficiently conveys purpose.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

A minimal description for a simple roster retrieval with no output schema. It could hint at the return structure (e.g., player names or IDs), but is adequate given the annotations and lack of parameters.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters and 100% schema coverage, so the description need not elaborate. The baseline of 4 applies as no parameter info is missing.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The title and description clearly state the tool retrieves the golf player roster, a specific resource. It is distinguished from sibling tools like whensport_golf_getTournament.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this versus alternatives. The distinct sibling names imply usage, but there is no when-not-to-use guidance or mention of alternatives.

whensport_golf_getTournament: Get a single golf tournament by slug (A)
Read-only

Get a single golf tournament by slug (e.g. 'the-masters', 'pga-championship', 'us-open', 'the-open' for Majors). Note: result/winner on finished tournaments may be null pending data backfill — consult primary sources for confirmed leaderboards.

Parameters
- slug (required)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations give readOnlyHint and openWorldHint. The description adds a crucial caveat about data backfill causing null results and advises consulting primary sources. This goes beyond the annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: the first states the purpose with examples, the second adds an essential caveat. No fluff, front-loaded; every sentence earns its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

A low-complexity tool with one param and no output schema. The description covers purpose, usage examples, and a data-reliability caveat. It could mention the return structure, but this is acceptable given the simplicity.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% coverage (no description for the 'slug' property). The description adds concrete examples ('the-masters', etc.), providing semantic meaning beyond the raw schema.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the verb 'Get', the resource 'single golf tournament', and the identifier 'slug'. Differentiates itself from the sibling whensport_golf_getTournaments by specifying singular retrieval.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides example slugs for Majors and notes potential null results, guiding when to trust the data. Does not explicitly compare to siblings like whensport_golf_getTournaments, but the context is clear enough.

whensport_golf_getTournamentsGet the golf tournament listA
Read-only
Inspect

Get the golf tournament list — Masters, PGA Championship, US Open, Open Championship. Note: this MCP is schedule-focused; winners/leaderboards on finished tournaments may be null while ingestion catches up — consult masters.com / pgatour.com / usga.org / theopen.com for confirmed results.

ParametersJSON Schema
NameRequiredDescriptionDefault
upcomingOnlyNo
Behavior5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, and the description adds critical behavioral context: data freshness issues for finished tournaments (winners may be null). No contradiction; description enhances transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with no fluff. The first sentence states the core purpose, the second adds crucial caveats. Perfectly front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with one boolean parameter and no output schema, the description covers the main behavior (list of majors) and key limitation (stale data). Minor omission: doesn't specify output structure (e.g., tournament names, dates), but given the simple domain, this is adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, yet the description adds no information about the sole parameter upcomingOnly. While the parameter is self-explanatory, the description fails to confirm its purpose or any constraints, leaving a gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Get the golf tournament list' and names the four major tournaments, clearly indicating what the tool retrieves. It distinguishes from siblings like whensport_golf_getTournament (singular) by focusing on the list.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly notes this is schedule-focused and warns that winners/leaderboards may be null for finished tournaments, advising consultation of official sites for confirmed results. This provides clear when-to-use and when-not-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_golf_getVenues: Get the golf venue list (A)
Read-only

Get the golf venue list (courses, pars, locations).

Parameters (JSON Schema): none

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only and limited set. The description adds value by specifying the return content (courses, pars, locations), confirming safe read behavior without contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no wasted words; front-loads the core purpose and output scope.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a parameterless list retrieval tool with annotations, the description covers what the tool returns. Lacks details on result size or format, but no output schema required.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Since there are no parameters (schema coverage 100%), the description does not need to add parameter information. Baseline for zero parameters is 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states it retrieves the golf venue list and specifies included data (courses, pars, locations), clearly distinguishing it from sibling venue tools and other golf tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives, such as other venue getters or golf-specific tools. The description does not mention context, prerequisites, or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_hr_getMeeting: Get a single horse racing meeting by slug (A)
Read-only

Get a single horse racing meeting by slug (e.g. royal-ascot-2026, kentucky-derby-2026, melbourne-cup-2026). Returns keyDates with race times. Note: result/winner on completed meetings may be null pending data backfill — consult primary turf authorities for confirmed results.

Parameters (JSON Schema)
slug (required)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, so the tool is known to be safe. The description adds important behavioral context: 'Note: result/winner on completed meetings may be null pending data backfill — consult primary turf authorities for confirmed results.' This warns about potential incompleteness, adding value beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences and examples with no wasted words. The critical behavior warning is included without verbosity. Every sentence serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description notes that the tool returns 'keyDates with race times' and includes the null-result caveat. For a single-parameter get-by-slug tool, this is adequate, though it could list additional returned fields if present.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, but the description compensates by providing concrete examples of valid slugs (royal-ascot-2026, kentucky-derby-2026, melbourne-cup-2026). This adds significant semantic meaning to the otherwise minimally specified 'string' parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Get a single horse racing meeting by slug' and provides concrete examples (royal-ascot-2026). This clearly distinguishes it from siblings like whensport_hr_getMeetings (plural) and whensport_hr_getVenues.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this versus alternatives. While the name and examples imply it's for a specific meeting by slug versus listing all, the description does not state 'use getMeetings to list all meetings' or provide any exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_hr_getMeetings: Get the horse racing meeting calendar (A)
Read-only

Get the horse racing meeting calendar (Cheltenham, Royal Ascot, Kentucky Derby, Melbourne Cup, etc.). Each meeting returns a top-level startDate/endDate derived from keyDates and a derived status. Note: this MCP is schedule-focused; winners/results on completed meetings may be null pending ingestion — consult bha.org.uk / racingpost.com / equibase.com for confirmed results.

Parameters (JSON Schema)
upcomingOnly (optional)
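The description says each meeting's top-level startDate/endDate are "derived from keyDates". One plausible reading of that derivation is a min/max over the key dates; the min/max rule and the `{"date": ...}` entry shape below are assumptions for illustration, not confirmed behavior.

```python
from datetime import date


def derive_window(key_dates):
    """Sketch of 'startDate/endDate derived from keyDates':
    earliest and latest key date (min/max is an assumption here)."""
    ds = [date.fromisoformat(k["date"]) for k in key_dates]
    return min(ds).isoformat(), max(ds).isoformat()


# e.g. a Royal Ascot-style multi-day meeting
window = derive_window([
    {"date": "2026-06-16"},
    {"date": "2026-06-20"},
    {"date": "2026-06-18"},
])
```

This is consistent with the sibling polo description, which states that startDate is "derived from the earliest round date".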
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, so the read-only nature is clear. The description adds that dates are derived from keyDates and status is derived, and warns that results may be null. This provides additional behavioral insight beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief (two sentences) and front-loaded with the primary purpose. The second sentence adds important caveats. It is well-structured and efficient, though it could have been slightly more concise by omitting examples.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple input (one optional boolean), no output schema, and annotations covering readOnly, the description adequately covers purpose, behavioral traits, and important caveats. It is almost complete for the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has one optional boolean parameter 'upcomingOnly' with no description. Schema description coverage is 0%, yet the description does not explain this parameter. Although the parameter name is self-explanatory, the lack of documentation in the description is a gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get the horse racing meeting calendar' with specific examples like Cheltenham, Royal Ascot. It also explains what each meeting returns (start/end dates, derived status), distinguishing it from the sibling tool 'getMeeting' which likely retrieves detailed single-meeting data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description advises that results on completed meetings may be null and directs users to external sources (bha.org.uk, racingpost.com, equibase.com) for confirmed results. This gives clear guidance on when not to rely on this tool, though it does not explicitly compare with the sibling 'getMeeting' tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_hr_getVenues: Get the horse racing venue list (A)
Read-only

Get the horse racing venue list (racecourses). Surface (turf/dirt/all-weather) varies by card and is reported per-meeting, not per-venue.

Parameters (JSON Schema): none

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true and openWorldHint=false, so the description does not need to reiterate safety. It adds behavioral context about surface being reported per-meeting, not per-venue, which is valuable for correct usage. No other behavioral traits (e.g., pagination, limits) are disclosed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences front-load the core purpose and add a critical caveat. No superfluous words; every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters, readOnlyHint, and no output schema, the description adequately explains the tool's function and a key data nuance. However, it could be more complete by describing the output structure (e.g., venue IDs, names) since no output schema is provided.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, and schema description coverage is 100%. Per rubric, baseline is 4. The description adds extra meaning about surface behavior, which is not param-related but enriches understanding of the output.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves the horse racing venue list, specifying 'racecourses' as an alternative term. It distinguishes from sibling getVenues tools in other sports by explicitly naming the sport. The added detail about surface variation per-meeting provides additional clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like whensport_hr_getMeeting or whensport_hr_getMeetings. The description does not specify when not to use it or provide criteria for selecting among sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_polo_getPlayers: Get the polo player roster (A)
Read-only

Get the polo player roster (handicaps, nationalities).

Parameters (JSON Schema): none

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint true, and the description adds useful context about the data returned (handicaps, nationalities), enhancing transparency without contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with no wasted words, front-loading the purpose and key details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters and no output schema, the description fully informs the agent about what the tool returns, and the annotations cover safety. It is complete for the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, so the description does not need to add parameter semantics. The baseline is appropriately set to 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves the polo player roster including handicaps and nationalities, which distinguishes it from sibling tools for other sports.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives is provided, though the context of sibling tools implies it is for polo-specific player data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_polo_getTeams: Get the polo team roster for the U.S. Open Polo Championship (Bracket I + II) (A)
Read-only

Get the polo team roster for the U.S. Open Polo Championship (Bracket I + II) — rosters, handicaps, win/loss records, scraped daily from uspolo.org. Coverage limited to the U.S. Open; Argentine Triple Crown and British Open team rosters are not yet exposed.

Parameters (JSON Schema): none

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description notes data is scraped daily from uspolo.org, implying potential staleness. This adds behavioral context beyond the annotations readOnlyHint=true and openWorldHint=false. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loading the purpose and including essential details about scope and data freshness. Every word contributes value, with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no parameters and no output schema, the description fully explains what data is returned (rosters, handicaps, records) and its limitations (only U.S. Open). It provides all necessary context for an AI agent to decide when to invoke it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has no parameters, so the description correctly implies no input is required. Baseline for zero parameters is 4, and the description adds no parameter details as none exist.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The title and description explicitly state the tool retrieves polo team rosters for the U.S. Open Polo Championship, including rosters, handicaps, and win/loss records. It distinguishes itself from sibling tools like whensport_polo_getPlayers and whensport_polo_getTournament by specifying the exact event and data scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description clearly states coverage is limited to the U.S. Open and that Argentine Triple Crown and British Open rosters are not exposed, telling when not to use this tool. However, it does not explicitly state when to use it or mention alternative tools for other tournaments.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_polo_getTournament: Get a single polo tournament by slug (A)
Read-only

Get a single polo tournament by slug (e.g. dubai-polo-gold-cup, uspa-gold-cup, argentine-open). Returns rounds, format, handicap, venue. Note: result on completed tournaments may be null pending data backfill.

Parameters (JSON Schema)
slug (required)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only. The description adds return fields (rounds, format, handicap, venue) and a note about potential null for completed tournaments, providing behavioral context beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with purpose and examples, and no redundant information. Every sentence adds value: purpose, examples, return info, and a caveat.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read tool with one parameter and no output schema, the description sufficiently covers what the tool does and returns. The note about null values adds context, but it could mention whether there are rate limits or data-freshness details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, but the description compensates by explaining the 'slug' parameter with examples. This clarifies the parameter's meaning and expected format, though a formal description in the schema itself would improve coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action: 'Get a single polo tournament by slug', with specific examples like 'dubai-polo-gold-cup'. This distinguishes it from the sibling tool 'getTournaments' for multiple tournaments, making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when you have a specific slug, supported by examples. It does not explicitly mention when to use alternatives like 'getTournaments', but the examples and singular focus provide clear context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_polo_getTournaments: Get the polo tournament list (A)
Read-only

Get the polo tournament list (USPA / Argentine Triple Crown / British Open). Each tournament returns a top-level startDate derived from the earliest round date. Note: this MCP is schedule-focused; champion/result on completed tournaments may be null pending ingestion — consult uspolo.org / aapolo.com / hpa-polo.co.uk for confirmed winners.

Parameters (JSON Schema)
upcomingOnly (optional)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true; the description adds the schedule-focused behavior and the potential for null results on completed tournaments, going beyond the annotation info.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with front-loaded purpose and a necessary caveat. No redundant words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers purpose and behavioral nuance, but lacks an explanation of the parameter and the return-value structure. With no output schema, the description should at least hint that the output is a list of tournaments.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, and the description does not mention the 'upcomingOnly' parameter, leaving its meaning implicit. The parameter name is self-explanatory, but the description could have clarified its effect.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description explicitly states verb 'Get' and resource 'polo tournament list', specifying leagues (USPA, Triple Crown, British Open). Distinguishes from sibling tools like getTournament and other sports.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly notes that the tool is schedule-focused and results may be null, advising to consult external sources for confirmed winners. This guides when to use vs alternative data sources.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_polo_getVenues: Get the polo venue list (C)
Read-only

Get the polo venue list.

Parameters (JSON Schema): none

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only (readOnlyHint=true). Description adds no extra behavioral context such as auth requirements, rate limits, or response characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence is concise but lacks informative content. It earns its place minimally but could include context about the returned list.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Tool has no parameters, no output schema, and minimal description. It does not explain what the venue list contains (IDs, names, locations) nor any filtering capabilities.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, and schema coverage is 100%. Per guidelines, baseline is 4; description adds no further parameter details but none are needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it gets the polo venue list, specifying both verb and resource. It distinguishes from other sports' venue tools by including 'polo'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like other getVenues tools or related polo tools. Users must infer from the name.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_rugby_getMatch: Get a single rugby match by code (A)
Read-only

Get a single rugby match by its match code (e.g. "6n-1" for Six Nations match 1, "rc-3" for Rugby Championship match 3). The code is in the match field of getMatches output.

Parameters (JSON Schema)
matchCode (required): Match identifier — value of the `match` field in getMatches output (e.g. 6n-1, rc-3).
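The documented workflow, reading the match code from getMatches output and passing it back as matchCode, can be sketched as a two-step call. `call_tool` is a hypothetical stand-in for whatever MCP client is in use, and the `matches` list key on the response is an assumption; the tool names, the `match` field, and the `matchCode`/`team` parameters come from the descriptions above.

```python
def get_first_match_for_team(call_tool, team_code):
    """List rugby matches for a team, then fetch the first one by its match code."""
    listing = call_tool("whensport_rugby_getMatches", {"team": team_code, "limit": 1})
    if not listing["matches"]:
        return None
    code = listing["matches"][0]["match"]  # e.g. "6n-1" for Six Nations match 1
    return call_tool("whensport_rugby_getMatch", {"matchCode": code})
```

The point of the sketch is that matchCode is not guessable: it must be read out of a prior getMatches response.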
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, so safety is clear. Description adds no further behavioral details (e.g., pagination, rate limits), but does not contradict annotations. For a simple read tool, this is adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences front-loaded with purpose. No extraneous information; every word serves a clear function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter read tool with annotations, the description covers the essential workflow. Minor omission: no mention of case sensitivity or format constraints, but not critical for usability.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline 3. Description adds value by providing concrete examples (6n-1, rc-3) and clarifying that the code comes from getMatches output, going beyond the schema's description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Get a single rugby match by its match code' with specific examples (6n-1, rc-3). Distinguishes from sibling whensport_rugby_getMatches (plural) by indicating single match retrieval, and sport-specific prefix differentiates from generic whensport_getMatch.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides instruction to use the code from getMatches output, implying a workflow. Lacks explicit when-not or comparison to alternatives, but the hint is sufficient context for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_rugby_getMatches: Get all tracked rugby matches (A)
Read-only

Get rugby matches across tracked tournaments (Six Nations, Top 14, Premiership, URC, Super Rugby Pacific, Rugby Championship, Nations Championship, Pacific Four, etc.). Filtering is strongly recommended — the unfiltered match set is large. Default limit is 50 (max 200); when more matches exist the response includes truncated: true, moreCount, and nextOffset. Page through results by passing the response's nextOffset back as offset. Filter by tournament slug or by team (3-letter code or team slug) to narrow results.

Parameters (JSON Schema)
team (optional): Team identifier — either a 3-letter code (FRA, IRE, RSA, NZL) or a team slug (france, ireland, south-africa, new-zealand). Matched case-insensitively against team1/team2.
limit (optional): Maximum matches to return. Default 50, max 200.
offset (optional): Skip the first N matches; use the response's `nextOffset` for paging.
tournament (optional): Tournament slug, e.g. "six-nations-2026", "top-14-2025-26", "urc-2025-26".
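The paging contract described above (limit/offset in; truncated, moreCount, nextOffset out) can be exercised with a local simulation. This is a sketch under assumptions: call_getMatches and the fake dataset below stand in for the real MCP tool call and its data, and only the contract itself comes from the tool description.

```python
# Local simulation of the documented paging contract for
# whensport_rugby_getMatches. call_getMatches is a hypothetical
# stand-in for the real tool call.
MATCHES = [{"match": f"top14-{i}"} for i in range(1, 121)]  # fake dataset of 120 matches

def call_getMatches(limit: int = 50, offset: int = 0) -> dict:
    limit = min(limit, 200)                      # documented max
    page = MATCHES[offset:offset + limit]
    remaining = len(MATCHES) - (offset + len(page))
    resp = {"matches": page}
    if remaining > 0:                            # more matches exist
        resp.update(truncated=True, moreCount=remaining,
                    nextOffset=offset + len(page))
    return resp

# Page through by feeding the response's nextOffset back as offset.
collected, offset = [], 0
while True:
    resp = call_getMatches(limit=50, offset=offset)
    collected.extend(resp["matches"])
    if not resp.get("truncated"):
        break
    offset = resp["nextOffset"]

print(len(collected))  # 120
```

Checking `truncated` rather than page length is the safer loop condition, since the last full page may coincide with the end of the data.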
Behavior 5/5
The description goes beyond the readOnlyHint=true annotation by detailing pagination behavior: the response includes truncated, moreCount, and nextOffset. It explains how to page through results, which is critical for agent usage. No contradictions with annotations.

Conciseness 4/5
The description is a single paragraph with clear sentences, front-loaded with purpose, then filters, then pagination. While efficient, it could be slightly more structured with bullet points. Still, every sentence is valuable and there is no fluff.

Completeness 5/5
With 4 optional parameters and no output schema, the description compensates by explaining response fields (truncated, moreCount, nextOffset) and pagination. It covers usage scenarios effectively and is complete enough for the agent to invoke the tool correctly.

Parameters 5/5
Although schema coverage is 100%, the description adds significant meaning: for 'team' it explains the formats (3-letter code or slug) and case-insensitivity; for 'limit' it restates the default and max; for 'offset' it ties into nextOffset; for 'tournament' it gives example slugs. This enhances agent understanding.

Purpose 5/5
The description explicitly states that it retrieves rugby matches across tracked tournaments, listing multiple specific tournaments. The verb 'Get' and resource 'rugby matches' are clear, and the tool is distinguished from siblings like getMatch (single match) and getTeamMatches (team-specific).

Usage Guidelines 5/5
The description explicitly recommends filtering ('Filtering is strongly recommended') and provides guidance on when to use filters to avoid large result sets. It explains pagination with limit, offset, and nextOffset. No explicit when-not-to-use, but the context is clear for a listing tool.

whensport_rugby_getTeams (A)
Read-only

Get the rugby teams in scope, optionally scoped to a tournament. Each team has a 3-letter code (e.g. "FRA", "RSA", "NZL"), full name, and the tournaments it competes in. Use this to enumerate valid team values for rugby_getMatches.

Parameters (JSON Schema)
tournament (optional): Tournament slug to scope the team list (e.g. "six-nations-2026", "top-14-2025-26"). When omitted, returns every team that appears in any tracked match.
Behavior 4/5
Annotations already provide readOnlyHint=true and openWorldHint=false. The description adds useful behavioral context about the return structure (3-letter code, full name, tournaments) and scoping behavior without contradicting the annotations.

Conciseness 5/5
Two sentences, front-loaded with purpose, followed by structural details and usage guidance. No wasted words.

Completeness 5/5
For a simple tool with one optional parameter, good annotations, and no output schema, the description provides sufficient information about inputs, outputs, and usage context.

Parameters 4/5
The schema covers 100% with a clear description for the single parameter. The description enhances it with concrete examples (e.g., 'six-nations-2026') and by clarifying the effect of omission.

Purpose 5/5
Uses the specific verb 'Get' with the resource 'rugby teams', optionally scoped to a tournament. Distinguishes the tool from siblings like rugby_getMatch and rugby_getTournaments by focusing solely on teams.

Usage Guidelines 5/5
Explicitly states the use case: 'Use this to enumerate valid team values for rugby_getMatches.' Also clarifies optional scoping to a tournament for targeted queries.

whensport_rugby_getTournament: Get a single rugby tournament by slug (A)
Read-only

Get a single rugby tournament by slug (e.g. six-nations-2026, top-14-2025-26, urc-2025-26, nations-championship-2026). Returns the full schedule when present (Six Nations, Nations Championship); other tournaments use rugby_getMatches?tournament=… instead.

Parameters (JSON Schema)
slug (required)
Behavior 4/5
Annotations mark the tool read-only; the description adds behavioral detail about conditional schedule inclusion. No contradiction. It could mention the response format, but is adequate.

Conciseness 5/5
Two sentences, front-loaded with purpose, no wasted words.

Completeness 4/5
Covers the key behavioral nuance (schedule inclusion) and the redirect to a sibling tool. Lacks an explicit return structure, but given no output schema and low complexity, it's sufficient.

Parameters 4/5
A single slug parameter with examples of valid values. No additional schema description, but the examples add practical value beyond the type constraint.

Purpose 5/5
The description clearly states the verb 'Get' and resource 'single rugby tournament by slug', with specific examples. It distinguishes the tool from the sibling rugby_getMatches, achieving high clarity.

Usage Guidelines 5/5
Explicitly tells when to use this tool versus rugby_getMatches: it returns the full schedule for some tournaments, while others need rugby_getMatches. Provides example slugs for guidance.

whensport_rugby_getTournaments: Get the rugby tournament list (A)
Read-only

Get rugby tournaments tracked by whensport (Six Nations, Rugby Championship, World Cup, Top 14, Premiership, URC, etc.). Returns lightweight metadata (name, dates, format, team count) — call rugby_getTournament(slug) for the full schedule.

Parameters (JSON Schema)
upcomingOnly (optional)
Behavior 4/5
Annotations indicate readOnlyHint=true; the description adds that it returns lightweight metadata (name, dates, format, team count). No contradictions; adds context.

Conciseness 5/5
Two sentences, concise, front-loaded with purpose and examples. Every sentence adds value.

Completeness 4/5
For a list tool with no output schema, the description covers purpose, data returned, and the link to the detailed sibling. Lacks mention of pagination or ordering, but is adequate.

Parameters 1/5
Schema coverage is 0% for the 'upcomingOnly' parameter, and the description does not mention this parameter at all. It fails to compensate for the missing schema documentation.

Purpose 5/5
The description clearly states that it gets rugby tournaments, with examples, and distinguishes the tool from the sibling rugby_getTournament for the full schedule.

Usage Guidelines 4/5
Explicitly says to use rugby_getTournament for the full schedule, implying this tool is for lightweight metadata. No explicit when-not-to-use, but sufficient guidance.

whensport_sailing_getEvent: Get a single sailing event by slug (A)
Read-only

Get a single sailing event by slug (e.g. 'sailgp-perth', 'americas-cup-cagliari', 'tp52-puerto-portals'). Slugs are series-prefixed and bare (no year suffix). Result coverage rule applies: SailGP rounds backfilled, multi-class regattas may be null until prize-giving publishes.
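The slug convention described above (series-prefixed, no year suffix) can be sketched as a small splitter. This is an assumption-laden illustration: the series list below is a made-up subset drawn from the examples, not the server's canonical set.

```python
# Hypothetical sketch of the sailing slug convention: a known series
# prefix followed by a hyphen and a venue, with no year suffix.
KNOWN_SERIES = ("sailgp", "americas-cup", "tp52")  # illustrative subset

def split_event_slug(slug: str) -> tuple[str, str]:
    # Longer, hyphenated series names like "americas-cup" are handled
    # by matching whole prefixes rather than splitting on the first "-".
    for series in KNOWN_SERIES:
        if slug.startswith(series + "-"):
            return series, slug[len(series) + 1:]
    raise ValueError(f"no known series prefix in {slug!r}")

print(split_event_slug("sailgp-perth"))           # ('sailgp', 'perth')
print(split_event_slug("americas-cup-cagliari"))  # ('americas-cup', 'cagliari')
```

As with match codes, an agent would normally take these slugs verbatim from getEvents output rather than constructing them.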

Parameters (JSON Schema)
slug (required)
Behavior 5/5
Adds value beyond readOnlyHint by disclosing result coverage rules (backfilling, null until prize-giving).

Conciseness 5/5
Two efficient sentences, front-loaded with action and purpose.

Completeness 5/5
Adequately covers the parameter, caveats, and data quality for a simple get-by-slug tool with no output schema.

Parameters 5/5
With 0% schema coverage, the description fully compensates by explaining the slug format (series-prefixed, no year suffix) and giving examples.

Purpose 5/5
Clearly states 'Get a single sailing event by slug', with specific examples that distinguish it from the sibling whensport_sailing_getEvents.

Usage Guidelines 4/5
Provides guidance on slug format and context, but does not explicitly state when to use getEvents instead.

whensport_sailing_getEvents: Get the sailing/yachting event calendar (A)
Read-only

Get the sailing/yachting event calendar — America's Cup, SailGP, Vendée Globe, etc. Result coverage rule: SailGP rounds and headline regattas with same-day broadcast results are backfilled when complete. Multi-class regattas (Antigua Sailing Week, Cowes Week, Copa del Rey, etc.) may have null result until the prize-giving publishes to a canonical source — consult sailgp.com / sailingweek.com / cowesweek.co.uk / americascup.com directly for those.

Parameters (JSON Schema)
upcomingOnly (optional)
Behavior 5/5
The description discloses detailed behavioral traits beyond the annotations, including backfilling rules for SailGP and the possibility of null results for multi-class regattas, advising external consultation. It aligns with readOnlyHint.

Conciseness 4/5
The description is concise (two sentences), front-loaded with purpose, and efficient in conveying key behavioral rules. However, it could be better structured with a brief parameter note.

Completeness 3/5
The description covers the tool's purpose and behavioral caveats but omits parameter documentation and return format. Given no output schema, the description is incomplete for a fully informed agent.

Parameters 2/5
The description fails to mention or explain the only parameter, 'upcomingOnly' (a boolean). With 0% schema description coverage, this leaves the agent without guidance on how to use the parameter, relying solely on its name.

Purpose 5/5
The description clearly states that the tool gets a sailing/yachting event calendar, lists specific events (America's Cup, SailGP, Vendée Globe), and distinguishes it from the sibling whensport_sailing_getEvent by implying it returns a calendar (list) rather than a single event.

Usage Guidelines 3/5
The description provides implicit usage guidance by noting coverage rules and when results may be null, but does not explicitly state when to use this tool versus alternatives like whensport_sailing_getEvent.

whensport_sailing_getTeams: Get the sailing competitor list — SailGP teams, America's Cup syndicates, and offshore sailors (IMOCA) (A)
Read-only

Get the sailing competitor list — SailGP teams, America's Cup syndicates, and notable individual offshore sailors (IMOCA / Vendée Globe class). Each entry has a type field with value 'team' or 'individual' for explicit filtering.
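The 'type' field described above makes team/individual filtering a one-liner. A minimal sketch, with hand-written entries standing in for the tool's actual output:

```python
# Sketch of filtering a sailing competitor list on its documented
# 'type' field ('team' or 'individual'). Entries are illustrative.
competitors = [
    {"name": "Emirates GBR", "type": "team"},
    {"name": "Charlie Dalin", "type": "individual"},
    {"name": "Luna Rossa", "type": "team"},
]

teams = [c["name"] for c in competitors if c["type"] == "team"]
individuals = [c["name"] for c in competitors if c["type"] == "individual"]

print(teams)        # ['Emirates GBR', 'Luna Rossa']
print(individuals)  # ['Charlie Dalin']
```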

Parameters (JSON Schema)
No parameters

Behavior 3/5
Annotations already declare readOnlyHint=true and openWorldHint=false. The description adds that entries have a 'type' field for filtering, which is helpful but does not disclose additional behavioral traits like return format, limits, or pagination. For a simple list tool, this is adequate but not exceptional.

Conciseness 5/5
The description is two concise sentences that front-load the purpose and provide essential detail. No unnecessary words or redundancy.

Completeness 4/5
For a no-parameter, no-output-schema tool, the description covers what the tool returns and a key attribute (the type field). It could briefly mention the default output format (JSON) but is otherwise complete for typical use cases.

Parameters 4/5
There are no parameters, so schema documentation is complete. The description adds value by mentioning the 'type' field in results, which aids filtering. The baseline is 3 due to high schema coverage, but the added context about result structure justifies a 4.

Purpose 5/5
The description clearly states that the tool retrieves a sailing competitor list covering specific leagues (SailGP, America's Cup, IMOCA). It uses a specific verb ('Get') and resource ('competitor list'), distinguishing it from sibling tools like whensport_sailing_getEvent or whensport_sailing_getVenues.

Usage Guidelines 4/5
The description implies the tool is for retrieving sailing teams/individuals but does not explicitly state when to use it over alternatives. Given that the sibling tools are for events, venues, etc., the scope is clear, but explicit usage guidance (e.g., 'use this for lists of teams, not for event details') is missing.

whensport_sailing_getVenues: Get the sailing venue list (B)
Read-only

Get the sailing venue list.

Parameters (JSON Schema)
No parameters

Behavior 3/5
The annotations already declare readOnlyHint=true, consistent with 'Get'. The description does not add any behavioral context beyond that, but does not contradict the annotations.

Conciseness 4/5
The description is a single concise sentence. For a zero-parameter tool this is adequate, though slightly more detail could be added without losing conciseness.

Completeness 3/5
The description explains the basic purpose but does not mention what the venue list contains (e.g., names, locations). With no output schema, additional context about the return format would help completeness.

Parameters 4/5
There are no parameters, so the schema covers everything. The description adds no parameter info, but a baseline of 4 is appropriate since no parameters exist.

Purpose 4/5
The description clearly states the verb 'Get' and resource 'sailing venue list', but does not differentiate the tool from the sibling getVenues tools for other sports, which all have similar names. A bit more specificity would help.

Usage Guidelines 2/5
No guidance is given on when to use this tool versus alternatives like whensport_sailing_getEvents or the other sport-specific getVenues tools. The agent is left to infer from the name alone.

whensport_tdf_getRacePhase: Get the current Tour de France race phase (A)
Read-only

Get current Tour de France race phase. Returns one of: 'pre' (before the first stage, with nextStage + daysUntil), 'live' (a stage is currently running, with currentStage + elapsedPct), 'rest-day' (rest day between stages, with restDay + nextStage), 'between-stages' (off-day between consecutive stages, with lastStage + nextStage), 'finished' (race over, with finalStage).
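The five phase values above and their per-phase fields invite a simple dispatch. A hedged sketch: the top-level 'phase' key and the example response dicts are assumptions for illustration, since the tool publishes no output schema; only the phase names and associated field names come from the description.

```python
# Sketch of dispatching on the documented phase values returned by
# whensport_tdf_getRacePhase. Response dicts are hand-written examples
# of the fields the description associates with each phase.
def summarise_phase(resp: dict) -> str:
    phase = resp["phase"]
    if phase == "pre":
        return f"Starts in {resp['daysUntil']} days (next: stage {resp['nextStage']})"
    if phase == "live":
        return f"Stage {resp['currentStage']} running, {resp['elapsedPct']}% elapsed"
    if phase == "rest-day":
        return f"Rest day; next is stage {resp['nextStage']}"
    if phase == "between-stages":
        return f"Between stages {resp['lastStage']} and {resp['nextStage']}"
    if phase == "finished":
        return f"Race over after stage {resp['finalStage']}"
    raise ValueError(f"unknown phase {phase!r}")

print(summarise_phase({"phase": "live", "currentStage": 12, "elapsedPct": 40}))
# Stage 12 running, 40% elapsed
```

Raising on an unknown phase keeps the agent honest if the server ever adds a sixth state.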

Parameters (JSON Schema)
No parameters

Behavior 4/5
Annotations already declare readOnlyHint=true and openWorldHint=false. The description adds value by detailing the possible return values and their associated fields (e.g., nextStage, elapsedPct), which helps the agent understand the behavior.

Conciseness 5/5
The description is a single sentence that front-loads the purpose and efficiently lists all possible return values, with no wasted words.

Completeness 5/5
Given no parameters, no output schema, and no nested objects, the description is fully complete. It covers all possible states and their associated fields.

Parameters 4/5
There are no parameters in the input schema (schema description coverage 100%). The description compensates by fully explaining the return values.

Purpose 5/5
The description uses the specific verb 'Get' and resource 'Tour de France race phase'. It clearly states the output values and differentiates the tool from siblings by focusing on the overall race phase rather than specific stages.

Usage Guidelines 3/5
The description does not explicitly state when to use this tool versus alternatives. It implies usage for getting the overall race phase, but lacks a direct comparison to sibling tools like whensport_tdf_getStage.

whensport_tdf_getStage: Get a single Tour de France stage by number (A)
Read-only

Get a single stage by stage number (1-21).

Parameters (JSON Schema)
number (required)
Behavior 4/5
Annotations already provide readOnlyHint=true, so the description doesn't need to add safety info. It adds no further behavioral traits, but the tool is simple and the annotations suffice.

Conciseness 5/5
A single short sentence, front-loaded with key information, no redundancy.

Completeness 5/5
For a simple retrieval tool with one parameter and no output schema, the description is complete: what it does, what to pass, and the valid range.

Parameters 4/5
The description adds 'stage number' and the range (1-21) beyond the schema's min/max constraints. With 0% schema coverage, this compensates well.

Purpose 5/5
The description clearly states 'Get a single stage by stage number (1-21)', which specifies verb, resource, and identifier. It distinguishes the tool from siblings like getStages and getStagesInRange.

Usage Guidelines 3/5
Implies use when a specific stage is needed by number, but gives no explicit when-not-to-use or alternative-tool guidance despite the many sibling tools.

whensport_tdf_getStages: Get the Tour de France stage list (A)
Read-only

Get the Tour de France stage list (21 stages — flat, hilly, mountain, ITT). Optionally filter to upcoming.

Parameters (JSON Schema)
Name | Required
upcomingOnly | No
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and openWorldHint=false. The description adds that it returns 21 stages of specific terrain types and optional filtering, which is useful but does not disclose ordering, pagination, or other behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

One sentence of 19 words, front-loaded with purpose. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple parameters and no output schema, the description covers the main functionality well. However, it omits details about the return format (e.g., a list of stage objects), which would be helpful for a complete understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has zero description coverage, so the description adds meaning by explaining the 'upcomingOnly' parameter as 'Optionally filter to upcoming,' compensating for the schema's lack of documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Get the Tour de France stage list' with verb and resource. Mentions 21 stages and types, but does not explicitly differentiate from sibling tools like whensport_tdf_getStage or whensport_tdf_getStagesInRange.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Only mentions optional filtering to upcoming, but provides no guidance on when to use this tool vs alternatives such as whensport_tdf_getStage for a single stage or whensport_tdf_getStagesInRange for a custom range.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_tdf_getStagesInRange: Get Tour de France stages between two dates (A)
Read-only

Get stages between two dates (inclusive).

Parameters (JSON Schema)
Name | Required | Description
to | Yes | YYYY-MM-DD
from | Yes | YYYY-MM-DD
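A call to this tool can be sketched as a standard MCP `tools/call` request. The JSON-RPC envelope below follows the MCP protocol shape; the date values are illustrative, and both bounds are inclusive per the tool description:

```python
import json

# Sketch of an MCP tools/call request for whensport_tdf_getStagesInRange.
# Dates use the YYYY-MM-DD format required by the schema; the specific
# values here are made up for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "whensport_tdf_getStagesInRange",
        "arguments": {"from": "2025-07-05", "to": "2025-07-12"},
    },
}
print(json.dumps(request["params"]["arguments"], sort_keys=True))
```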
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, so the description's addition of 'inclusive' is minor. No disclosure of other behavioral traits like pagination, ordering, or limits. The description does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (one short sentence) and front-loaded. Every word serves a purpose with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple date-range retrieval tool with no output schema, the description is complete enough. It clearly defines the operation and scope, and the schema covers all parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage, with descriptions for both parameters. The description adds only the 'inclusive' detail, offering little beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get'), the resource ('stages'), and the scope ('between two dates inclusive'). It effectively differentiates from siblings like whensport_tdf_getStage (single stage) and whensport_tdf_getStages (likely all stages) by specifying the date range.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when needing stages in a date range but provides no explicit guidance on when to use this tool versus alternatives (e.g., whensport_tdf_getStages or whensport_tdf_getStage). There are no when-not-to-use or alternative hints.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_tennis_getNextGrandSlam: Get the next upcoming tennis Grand Slam (A)
Read-only

Get the next upcoming Grand Slam (Australian Open / French Open / Wimbledon / US Open).

Parameters (JSON Schema)

No parameters

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, so the description adds no new behavioral information. The tool is consistent with the read-only annotation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single clear sentence that front-loads the core purpose. Every word is necessary and there is no fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter tool with no output schema, the description is adequate. It covers the scope (next Grand Slam) and lists the included events. However, it does not hint at the return format (e.g., date, location), which might leave an agent guessing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has no parameters, so schema coverage is 100%. The description does not need to add parameter detail, and it correctly omits redundant information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (get), the resource (next upcoming Grand Slam), and lists the four Grand Slams. This distinguishes it from sibling tools like whensport_tennis_getTournaments.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly indicates it should be used for the next upcoming Grand Slam, but it does not explicitly state when to use this tool versus alternatives like getTournaments or when to avoid it (e.g., if a specific past tournament is needed).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_tennis_getPlayers: Get the tennis player roster (A)
Read-only

Get the tennis player roster covered by whensport (top ATP/WTA players). Singles tennis has no team concept — players compete as individuals — so this tool fills the role that getTeams plays in team sports. Each player record includes nationality, ranking, and Grand Slam wins where known.

Parameters (JSON Schema)

No parameters

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true and openWorldHint=false. The description adds behavioral traits beyond annotations by specifying that each player record includes nationality, ranking, and Grand Slam wins where known.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, front-loaded with purpose, then differentiator, then field details. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero parameters and no output schema, the description explains what data is included. Minor gap: no mention of data freshness or sorting, but still complete for a simple roster tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 0 parameters so no parameter details are needed. The description provides context that it covers top ATP/WTA players, adding meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get the tennis player roster covered by whensport (top ATP/WTA players)' and distinguishes from siblings by explaining that singles tennis has no team concept, so this tool fills the role of getTeams.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for top tennis players and contrasts with team sports, but does not explicitly state when to use this vs other getPlayers tools or exclude scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_tennis_getTournament: Get a single tennis tournament by slug (A)
Read-only

Get a single tennis tournament by slug — bare names, no year suffix (e.g. 'australian-open', 'roland-garros', 'wimbledon', 'us-open').

Parameters (JSON Schema)
Name | Required | Description
slug | Yes | Tournament slug.
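The bare-name rule the description spells out ('australian-open', not 'australian-open-2025') lends itself to a small client-side guard. A minimal sketch, assuming a trailing `-YYYY` suffix is the only variation to strip (the helper is hypothetical, not part of the server):

```python
import re

def normalize_tournament_slug(slug: str) -> str:
    """Lowercase a tournament slug and strip any trailing year suffix so it
    matches the bare-name form whensport_tennis_getTournament expects,
    e.g. 'Wimbledon-2025' becomes 'wimbledon'."""
    return re.sub(r"-(19|20)\d{2}$", "", slug.strip().lower())
```

Normalizing before the call avoids a lookup failure on a slug that only differs by a year suffix.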
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds value beyond annotations by specifying the exact slug format expected. Annotations already indicate read-only and closed world. The description does not cover error behavior, but overall provides useful behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the purpose and includes essential formatting guidance. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple input (one parameter, no output schema), the description is nearly complete. It could optionally mention what happens on invalid slugs, but the current level is adequate for a straightforward lookup tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema's slug parameter is described generically. The description enriches it by detailing the format ('bare names, no year suffix') and providing concrete examples, which is highly valuable for correct invocation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns a single tennis tournament by slug, with explicit examples to disambiguate the slug format (e.g., 'australian-open' without year). It effectively distinguishes from the sibling 'getTournaments' (plural).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides a key usage guideline: slugs must be 'bare names, no year suffix'. This helps the agent avoid common errors. While it does not explicitly state when to use this vs. alternatives, the context (slugs vs. list endpoints) is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_tennis_getTournaments: Get the tennis tournament list (A)
Read-only

Get the tennis Grand Slam calendar — Australian Open, Roland-Garros, Wimbledon, US Open. Tour-level events (Masters 1000, ATP 500, WTA) are not yet included. Optionally filter to upcoming only.

Parameters (JSON Schema)
Name | Required | Description
upcomingOnly | No | If true, return only tournaments that have not yet started.
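The upcomingOnly semantics ("tournaments that have not yet started") can be approximated client-side. A minimal sketch, assuming a hypothetical record shape with an ISO `startDate` field (the server's actual fields may differ):

```python
from datetime import date

def filter_upcoming(tournaments, today=None):
    """Mimic upcomingOnly=true: keep only tournaments whose start date is
    strictly after the reference date, i.e. they have not yet started.
    The startDate field name is an assumption for illustration."""
    today = today or date.today()
    return [t for t in tournaments
            if date.fromisoformat(t["startDate"]) > today]
```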
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, and the description further specifies the exact set of tournaments returned, adding behavioral context beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences that front-load the core purpose and constraints, with no unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only tool with one parameter and no output schema, the description adequately covers the return content and filtering option, leaving no major gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a single parameter having a clear description and default. The description briefly mentions the optional filter but adds no new semantic detail beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool returns the Grand Slam calendar (Australian Open, Roland-Garros, Wimbledon, US Open) and clarifies that other tour-level events are not included, clearly distinguishing it from sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description indicates when to use the tool (to get Grand Slam tournaments) and mentions an optional filter for upcoming only, though it does not explicitly state when not to use alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_tennis_getVenues: Get the tennis venue list (A)
Read-only

Get the tennis venue list (cities, courts, surfaces).

Parameters (JSON Schema)

No parameters

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, so the description does not need to restate safety. The description adds that the tool returns cities, courts, surfaces, which provides useful detail beyond annotations. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise, only one sentence with no fluff. It is front-loaded with the action and resource, and efficiently lists the key attributes in parentheses.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple list retrieval with no parameters and no output schema, the description provides sufficient context: it lists what the venue info includes (cities, courts, surfaces). It does not mention venue names or IDs, but given typical usage, this is adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters, so schema coverage is trivially complete (a zero-parameter tool scores a baseline of 4 on this dimension). The description adds no parameter info, which is acceptable since there are none.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly specifies the verb 'Get' and the resource 'tennis venue list', with details in parentheses (cities, courts, surfaces). This distinguishes it from sibling getVenues tools by including the sport, making purpose unmistakable.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance is provided. However, given zero parameters and a clear name, the use case is obvious. No alternatives are mentioned, but the context of sibling tools is sufficient for an agent to infer.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
