cricket-2026

Server Details

Cricket 2026 MCP — T20 World Cup + ICC tour matches. Tournaments, fixtures, players, venues.

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4.2/5 across 6 of 6 tools scored. Lowest: 3.6/5.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool targets a distinct entity (match, matches, players, teams, tournament, tournaments) with no overlap in purpose.

Naming Consistency: 5/5

All tools follow a consistent pattern: a 'get' verb followed by a noun in camelCase, with singular/plural differentiation where appropriate.

Tool Count: 5/5

Six tools cover the core cricket entities (matches, teams, players, tournaments) without being too few or excessive.

Completeness: 4/5

Covers the main retrieval operations for the cricket domain; player details and match scorecards are missing, but that is consistent with the schedule-focused purpose.

Available Tools

6 tools
whensport_cricket_getMatch: Get a single cricket match by code (Grade: A)
Read-only

Get a single cricket match by match code (e.g. "t20wc-1" for T20 World Cup match 1, "ipl-2026-1" for IPL match 1). Code is in the match / matchCode field of getMatches output. Note: this MCP is schedule-focused; score/result on completed matches may be null pending ingestion — consult espncricinfo.com for confirmed scorecards.

Parameters (JSON Schema)
matchCode (required): Match identifier — value of the `match` field in getMatches output (e.g. t20wc-1).
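For a concrete sense of the call shape, here is a minimal sketch of the JSON-RPC `tools/call` payload a client might send for this tool, assuming the standard MCP envelope; the request id and the helper name are illustrative, and the matchCode value is one of the examples from the description:

```python
# Sketch of an MCP tools/call request for this tool, assuming the
# standard JSON-RPC 2.0 envelope used by Streamable HTTP MCP servers.
import json

def build_get_match_request(match_code: str, request_id: int = 1) -> str:
    """Build the JSON-RPC payload for whensport_cricket_getMatch."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "whensport_cricket_getMatch",
            # matchCode comes from the `match` field of getMatches output.
            "arguments": {"matchCode": match_code},
        },
    }
    return json.dumps(payload)

print(build_get_match_request("t20wc-1"))
```

A real client would POST this body to the server's Streamable HTTP endpoint (or go through the Glama gateway), but the envelope shape is the part the tool description assumes you know.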
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already note readOnlyHint=true (safe read operation). Description adds that the tool is 'schedule-focused' and that scores may be null pending ingestion, going beyond annotations to set expectations about data completeness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose, each sentence essential. No wasted words; examples in parentheses are efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple 1-param tool with no output schema, description covers purpose, usage, data limitations, and source of the parameter. No gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage (matchCode described). Description adds value by explaining code format with examples and specifying the source field in getMatches output, enriching semantics beyond the schema alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the purpose ('get a single cricket match') and the verb-resource pairing is explicit. It distinguishes from sibling tools (getMatches returns a list; only this tool gets a single match by code). Examples reinforce understanding.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit guidance on when to use (to retrieve a match by code from getMatches output) and when not to rely on it for confirmed scores (referring to espncricinfo.com). Includes examples of code patterns, making usage clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_cricket_getMatches: Get cricket matches, optionally filtered by tournament or team (Grade: A)
Read-only

Get cricket matches. Returns date, venue, format, status. Filtering is strongly recommended — the unfiltered match set is large. Default limit is 50 (max 200); when more matches exist the response includes truncated: true, moreCount, and nextOffset. Page through results by passing the response's nextOffset back as offset on the next call.

Parameters (JSON Schema)
team (optional): Team abbreviation, e.g. 'IND', 'AUS', 'ENG'.
limit (optional): Maximum matches to return. Default 50, max 200.
offset (optional): Skip the first N matches; use the response's `nextOffset` for paging.
tournament (optional): Tournament slug, e.g. 't20-world-cup-2026', 'ipl-2026'. Strongly recommended.
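The pagination contract in the description (truncated, moreCount, nextOffset) can be sketched as a loop. `fetch_matches` below is a hypothetical local stand-in that simulates the server's response shape rather than a real tool call; the point is the offset/nextOffset handshake:

```python
# Sketch of the paging contract: keep passing the response's nextOffset
# back as offset until truncated is no longer set. fetch_matches simulates
# the server locally; a real client would call whensport_cricket_getMatches.

ALL_MATCHES = [{"match": f"t20wc-{i}"} for i in range(1, 121)]  # 120 fake matches

def fetch_matches(offset: int = 0, limit: int = 50) -> dict:
    """Simulated getMatches response with truncated/moreCount/nextOffset."""
    limit = min(limit, 200)  # the server caps limit at 200
    page = ALL_MATCHES[offset:offset + limit]
    remaining = len(ALL_MATCHES) - (offset + len(page))
    resp = {"matches": page}
    if remaining > 0:
        resp.update(truncated=True, moreCount=remaining,
                    nextOffset=offset + len(page))
    return resp

def fetch_all(limit: int = 50) -> list:
    """Page through every match using the nextOffset -> offset loop."""
    matches, offset = [], 0
    while True:
        resp = fetch_matches(offset=offset, limit=limit)
        matches.extend(resp["matches"])
        if not resp.get("truncated"):
            return matches
        offset = resp["nextOffset"]

print(len(fetch_all()))  # 120: three pages of 50, 50, and 20
```

With the default limit of 50, the 120 simulated matches arrive in three pages; in practice you would also pass a tournament or team filter, as the description strongly recommends.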
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true. Description adds detailed pagination behavior (truncated, moreCount, nextOffset) which is critical for correct invocation. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, front-loaded with key info. Efficient but not extremely concise; could be slightly shortened without loss.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately covers return fields, pagination, and filtering recommendations. Covers all needed aspects for correct usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with descriptions. Description adds context on why filtering is recommended and how offset interacts with response fields, adding meaning beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Get cricket matches' with specific return fields (date, venue, format, status). Differentiates from sibling tools which target single matches, players, teams, or tournaments.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Strongly recommends filtering to avoid large sets, but does not explicitly contrast with sibling tools like getMatch for retrieving a single match. Implicit usage guidance is present but not explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_cricket_getPlayers: Get the cricket player roster (Grade: A)
Read-only

Get the curated marquee cricket player roster covered by whensport. Not a comprehensive roster — focused on high-profile players for triggering / disambiguation.

Parameters (JSON Schema)
No parameters.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and openWorldHint=false. Description adds critical behavioral context: the roster is curated and not comprehensive, which limits the scope of results. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences that are front-loaded and waste no words. Every phrase adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters and no output schema, the description adequately explains the tool's purpose and scope. It could mention what the roster output looks like (e.g., a list of player names), but it is sufficient for a simple list-retrieval tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, so the input schema fully covers the interface. The description does not need to add parameter details. Baseline for 0 params is 4, and the description is sufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description explicitly states 'Get the curated marquee cricket player roster', with a clear verb and specific resource. It differentiates itself from a comprehensive roster and implies distinctness from sibling tools focused on matches, teams, and tournaments.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The note about being 'focused on high-profile players for triggering / disambiguation' gives context, but no explicit comparison to alternatives like using a different tool for a comprehensive list. Sibling names differentiate by entity type, so implicit guidance exists.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_cricket_getTeams: Get the cricket teams in scope (Grade: A)
Read-only

Get the cricket teams in scope, optionally scoped to a tournament. Each team has a code (e.g. "IND", "AUS", "NZ", "WI", "SA") and full name. Use this to enumerate valid team values for cricket_getMatches — codes vary by sport (cricket uses 2-3 letter ISO-style codes; rugby uses different forms).

Parameters (JSON Schema)
tournament (optional): Optional tournament slug to scope the team list (e.g. "t20-world-cup-2026", "ipl-2026"). When omitted, returns every team that appears in any tracked match.
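Here is a minimal sketch of the enumerate-then-filter pattern the description suggests: pull the team list first, then use a validated code as the `team` filter for getMatches. The `TEAMS` data and the `matches_filter_for` helper are hypothetical, assuming a getTeams-shaped list of code/name objects:

```python
# Sketch: validate a team code against the getTeams roster before
# using it as the `team` argument to getMatches. TEAMS stands in for
# a real getTeams response (shape per the description: code + full name).

TEAMS = [
    {"code": "IND", "name": "India"},
    {"code": "AUS", "name": "Australia"},
    {"code": "NZ", "name": "New Zealand"},
]

def matches_filter_for(team_code: str) -> dict:
    """Build getMatches arguments, rejecting codes not in the roster."""
    valid = {t["code"] for t in TEAMS}
    if team_code not in valid:
        raise ValueError(f"unknown team code {team_code!r}; valid: {sorted(valid)}")
    # Tournament slug included per the getMatches filtering recommendation.
    return {"team": team_code, "tournament": "t20-world-cup-2026"}

print(matches_filter_for("IND"))
```

Enumerating first matters because, as the description notes, codes vary by sport, so a code that works for rugby may not be a valid cricket team value.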
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and openWorldHint=false. Description adds that teams have code and full name, and that scoping is optional. Adds some behavioral context but no deeper insights beyond what annotations imply.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, front-loaded with main purpose, no redundant information. Every sentence adds value: purpose, details, usage guidance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given low complexity (one optional parameter, no output schema, simple annotations), the description sufficiently explains what the tool returns and how to use it. It could also note that the result is a list, but that is not necessary.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one parameter. Description adds examples of tournament slugs (t20-world-cup-2026, ipl-2026) and explains that omission returns all teams, adding meaning beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states verb 'Get', resource 'cricket teams', and optional scoping to tournament. Distinguishes from siblings by noting that team codes vary by sport and are used for cricket_getMatches, providing examples (IND, AUS).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says to use this tool to enumerate valid team values for cricket_getMatches. Provides context on codes varying by sport. However, does not explicitly state when not to use it or mention alternatives like getPlayers.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_cricket_getTournament: Get a single cricket tournament by slug (Grade: A)
Read-only

Get a single cricket tournament by slug (e.g. 't20-world-cup-2026').

Parameters (JSON Schema)
slug (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, so no contradiction. The description adds no behavioral traits beyond what is already implicit from the name and annotations; e.g., no mention of error handling or data completeness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with the action and parameter. No extraneous words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-resource retrieval with one parameter and readOnly annotations, the description covers the essential information. It could mention the return format or error cases, but that is not necessary.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, but the description provides an example value ('t20-world-cup-2026') which adds meaning beyond the raw schema. However, it does not describe the format or constraints of the slug parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get a single cricket tournament by slug' and provides an example, which is a specific verb+resource pattern that distinguishes it from sibling tools like getTournaments (plural).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like getTournaments (for listing). The description does not mention context or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whensport_cricket_getTournaments: Get the cricket tournament list (Grade: A)
Read-only

Get the cricket tournament list — T20 World Cup 2026 + tour series. Optionally filter to upcoming.

Parameters (JSON Schema)
upcomingOnly (optional): Filter to tournaments not yet started.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, so the safety profile is clear. The description adds the filtering behavior ('Optionally filter to upcoming') but does not disclose return format, ordering, or any rate limits. With annotations present, the added value is moderate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, front-loaded sentence with no extraneous words. Every part is informative, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 optional parameter, no output schema, read-only annotation), the description covers the essential purpose and filtering option. It could be improved by noting the typical fields returned (e.g., tournament names, dates) but is adequate for the complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description for the only parameter. The description's mention of optional filtering aligns with the schema but does not add substantive new meaning beyond restating it. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get'), the resource ('cricket tournament list'), and provides concrete examples ('T20 World Cup 2026 + tour series'). It also mentions the optional filter, distinguishing it from sibling tool 'getTournament' which likely returns a single entity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving tournament lists, optionally filtered to upcoming events. However, it lacks explicit guidance on when to use this versus alternative sibling tools (e.g., getMatch, getMatches) or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
