
SportScore

Server Details

Live scores, standings, top scorers, brackets, and player stats for football, basketball, cricket, and tennis. Free, no API key required.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 3.7/5 across 8 of 8 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool targets a distinct data type: bracket, match detail, match list, player, standings, team schedule, top scorers, and live tracker. No overlapping functionality.

Naming Consistency: 5/5

All tool names follow a consistent `get_<entity>` pattern (e.g., `get_bracket`, `get_matches`, `get_player`). Perfectly uniform.

Tool Count: 4/5

8 tools is a reasonable count for a sports data server, covering major data access needs without being excessive. Slightly on the lower end but well-scoped.

Completeness: 4/5

Covers matches, standings, teams, players, and top scorers. Minor gaps: no search functionality and no direct league listing, but core sports data needs are well addressed.

Available Tools (8)
get_bracket (B)

Get the knockout bracket for a tournament (e.g. 'uefa-champions-league', 'nba-playoffs').

Parameters (JSON Schema)
slug (required): Competition slug.
sport (required): Sport to query. One of football, basketball, cricket, tennis.
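Calls to this server go through the standard MCP `tools/call` method. A minimal sketch of the request body for this tool, assuming a plain JSON-RPC 2.0 envelope (the Streamable HTTP endpoint URL is omitted; the tool and parameter names come from the table above):

```python
import json

def bracket_request(slug: str, sport: str, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request body for get_bracket."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "get_bracket",
            # Both parameters are required by the schema.
            "arguments": {"slug": slug, "sport": sport},
        },
    }
    return json.dumps(payload)

# Example: knockout bracket for the UEFA Champions League.
body = bracket_request("uefa-champions-league", "football")
```

The same envelope applies to every tool on this page; only `name` and `arguments` change.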
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must convey behavioral traits. It states what the tool returns (knockout bracket) and gives examples, which is adequate but does not disclose any side effects, data freshness, or required permissions. Assuming it is read-only is reasonable but not explicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently conveys the purpose and provides examples. No extraneous words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description should clarify what the response contains (e.g., match pairs, rounds). It does not, leaving the agent to guess the return structure. Given the tool's moderate complexity, this is a significant gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already documents both parameters. The description adds context by providing example slug values ('uefa-champions-league', 'nba-playoffs'), which helps agents understand the format. However, it does not explain the relationship between slug and sport parameters beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and the resource 'knockout bracket for a tournament', with concrete examples of values for the slug parameter. It distinguishes from siblings like get_matches or get_standings by focusing on bracket structure.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use this tool (to get a knockout bracket), but provides no guidance on alternatives or when not to use it. Sibling tools like get_matches and get_standings exist, but no comparison is made.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_match_detail (A)

Get detailed data for a single match by its slug (e.g. 'manchester-united-vs-liverpool'): score, status, timeline, lineups. Slugs come from get_matches results or match URLs on sportscore.com.

Parameters (JSON Schema)
slug (required): Match slug, e.g. 'manchester-united-vs-liverpool'.
sport (required): Sport to query. One of football, basketball, cricket, tennis.
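Since slugs can also come from match URLs, a small helper can pull the slug out of a URL's last path segment. This is a sketch under the assumption that sportscore.com puts the match slug last in the path; the actual URL layout is not documented here:

```python
from urllib.parse import urlparse

def slug_from_match_url(url: str) -> str:
    """Return the last path segment of a match URL as the slug.
    Assumes the slug is the final segment; adjust if the site differs."""
    path = urlparse(url).path
    return path.rstrip("/").rsplit("/", 1)[-1]

# Hypothetical URL shape; the slug matches the example in the description.
slug = slug_from_match_url(
    "https://sportscore.com/football/match/manchester-united-vs-liverpool"
)
# slug == 'manchester-united-vs-liverpool'
```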
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It clearly states the tool returns detailed match data and examples of what that includes. It does not mention side effects, authentication, or rate limits, but given its read-only nature and the provided examples, this is acceptable.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that packs essential information: purpose, data details, and source of slugs. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has only 2 parameters with full schema coverage and no output schema, the description is complete enough for a data retrieval tool. It explains the key details needed for the agent to use it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with schema already providing clear descriptions for both slug and sport parameters. The description does not add additional meaning beyond what the schema provides, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies a clear verb ('Get') and resource ('detailed data for a single match'), with concrete examples of data included (score, status, timeline, lineups). It distinguishes itself from siblings like get_matches (which likely returns a list of matches) by focusing on a single match identified by slug.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states that slugs come from get_matches results or match URLs, providing clear context on how to obtain the required slug. It does not explicitly state when not to use it, but the specificity of retrieving detailed data for a single match implies its usage when detailed match information is needed.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_matches (A)

List live and recent matches for a sport. Returns up to limit matches with scores, status, kickoff time and team logos. Good default for 'what's happening in football right now?'.

Parameters (JSON Schema)
limit (optional)
sport (required): Sport to query. One of football, basketball, cricket, tennis.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description notes it returns scores, status, kickoff time, and logos, but does not mention pagination behavior or that it only returns a limited set from recent matches. Without annotations, more detail on data recency or ordering would help.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences, no filler. Front-loaded with action and outcome, including a practical usage example.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple listing tool with 2 parameters and no output schema, the description is adequate but lacks explicit mention of default ordering or which matches are considered 'recent'. It does not clarify if live matches are included in the limit count.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description explains that `limit` controls how many matches are returned, and `sport` is described in the schema. With 50% schema coverage, the description adds value by stating the output fields, but does not add deeper semantics for `sport` beyond the schema's enum.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists live and recent matches for a sport, distinguishing it from siblings like get_match_detail (single match) and get_standings (table). The example query makes the purpose intuitive.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies it's the default for 'what's happening' queries, suggesting use over more specific siblings. However, it does not explicitly state when not to use it (e.g., for a single match, use get_match_detail).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_player (B)

Get player statistics and metadata by player slug (e.g. 'lionel-messi', 'lebron-james', 'virat-kohli').

Parameters (JSON Schema)
slug (required): Player slug.
sport (required): Sport to query. One of football, basketball, cricket, tennis.
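The example slugs ('lionel-messi', 'lebron-james', 'virat-kohli') all look like lowercased, hyphenated names. A first-guess slugifier under that assumption; the server's actual slug rules are not documented, so treat this as a heuristic rather than a guarantee:

```python
import re

def guess_player_slug(name: str) -> str:
    """Lowercase the name and collapse runs of non-alphanumerics into
    hyphens. Matches the documented examples, but is only a heuristic."""
    slug = re.sub(r"[^a-z0-9]+", "-", name.lower())
    return slug.strip("-")

guess_player_slug("Lionel Messi")   # 'lionel-messi'
guess_player_slug("LeBron James")   # 'lebron-james'
```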
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses that the tool looks players up by slug and includes examples, but it does not describe side effects (likely none), required permissions, or behavior on missing slugs. It also does not say whether it returns a single player or multiple, or mention any pagination.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that efficiently conveys the tool's purpose and includes helpful examples. It is front-loaded with the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that there are only two parameters and no output schema, the description is minimally adequate. It covers the basic purpose and parameter usage, but could be expanded to mention the type of data returned or behavior for invalid slugs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description does not add any extra meaning beyond what the schema already provides for the parameters. It gives examples of slug values but does not clarify the format or constraints further.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves player statistics and metadata, identified by a player slug. It specifies the verb 'get' and the resource 'player statistics and metadata', and provides examples of valid slugs. However, it does not differentiate from sibling tools like get_bracket or get_match_detail, which are likely for different entities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when you need player stats by slug, but it does not explicitly state when to use this tool versus alternatives. For example, it doesn't mention that get_tracker or get_top_scorers might be more appropriate for high-level stats. No guidance on prerequisites or when not to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_standings (A)

Get the current standings table for a league or competition by slug (e.g. 'premier-league', 'la-liga', 'nba').

Parameters (JSON Schema)
slug (required): Competition slug.
sport (required): Sport to query. One of football, basketball, cricket, tennis.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description is straightforward but does not disclose return format, pagination, or data update frequency. Suffices for a simple read operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with key information and example slugs; no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given simple parameters and no output schema, description adequately covers tool behavior for a standings lookup.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage; the description adds example slugs but does not elaborate on the slug format or sport values beyond what the enum provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states verb (Get), resource (standings table), and context (league/competition by slug). Provides examples of valid slugs, though does not explicitly differentiate from sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage for retrieving standings by slug and sport, but lacks guidance on when to use alternatives like get_matches or get_bracket.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_team_schedule (A)

Get a team's past and upcoming fixtures by team slug (e.g. 'barcelona', 'manchester-united', 'los-angeles-lakers').

Parameters (JSON Schema)
slug (required): Team slug.
limit (optional)
sport (required): Sport to query. One of football, basketball, cricket, tennis.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden. It describes the output (fixtures, both past and upcoming) and the required slug format. However, it does not mention any destructive behavior, rate limits, or authentication needs. Since the tool is clearly a read operation, the lack of annotations is partially mitigated, but more detail on output format (e.g., date range) would improve transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently states the purpose and provides examples. It is front-loaded with the action and resource. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 3 parameters, no output schema, and no annotations, the description adequately covers the key inputs and output nature. However, it does not explain the return format (e.g., list of match objects) or mention pagination, which might be needed for large schedules. The complexity is moderate, so a score of 3 is appropriate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaning beyond the schema by explaining how the slug parameter works with examples. It also hints at the limit parameter's role in controlling the number of fixtures returned, though it does not explicitly describe it. With 67% schema coverage, the description compensates by clarifying the core parameter 'slug'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool gets a team's past and upcoming fixtures by team slug. It provides specific examples of valid slugs and implies the resource is a team schedule, distinguishing it from siblings like get_matches which may not be team-specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives a clear use case with examples but does not explicitly state when not to use this tool or mention alternatives for broader fixture queries. The context of team-specific scheduling is implied but no direct comparison to siblings is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_top_scorers (A)

Get the top scorers (or top assisters) for a competition. Useful for 'who's leading the Premier League scoring charts?'.

Parameters (JSON Schema)
slug (required): Competition slug.
stat (optional, default: goals)
limit (optional)
sport (required): Sport to query. One of football, basketball, cricket, tennis.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavior. It mentions retrieving top scorers and assisters, implying a read-only ranking operation. However, it does not state ordering (descending), pagination, or that results are limited by default, which are important behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with a helpful example in quotes. It is concise but could be slightly more structured. No redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters, 50% schema coverage, and no output schema, the description partially compensates by clarifying the stat parameter and example. However, it lacks details like default sorting, pagination, and use of the limit parameter, leaving gaps for a complete understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 50% (slug and sport have descriptions, stat and limit do not). The description adds context by explaining the stat parameter ('top scorers or top assisters') and shows the query intent ('who's leading...?'). However, it does not elaborate on the slug or sport parameters beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves top scorers or assisters for a competition, with a specific example ('who's leading the Premier League scoring charts?'). It distinguishes itself from sibling tools like get_standings or get_player by focusing on statistical leaders.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives a context hint ('useful for...') but does not explain when to use this vs. siblings like get_matches or get_player. No explicit alternatives or exclusions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_tracker (A)

Get live match tracker data (position, animation frames) for a match by numeric id. Usually only useful for football.

Parameters (JSON Schema)
id (required): Numeric match id from the upstream provider.
sport (required): Sport to query. One of football, basketball, cricket, tennis.
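Unlike the slug-based tools above, get_tracker takes a numeric id, so passing a slug by mistake is an easy error. A small guard sketch (the football default here reflects the 'usually only useful for football' note, not a documented default; `sport` is still required by the schema):

```python
def tracker_arguments(match_id: int, sport: str = "football") -> dict:
    """Arguments for get_tracker. Rejects slugs early: the tool wants
    the upstream provider's numeric match id, not a match slug."""
    if isinstance(match_id, bool) or not isinstance(match_id, int):
        raise TypeError("get_tracker expects a numeric match id, not a slug")
    return {"id": match_id, "sport": sport}

tracker_arguments(123456)   # {'id': 123456, 'sport': 'football'}
```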
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description must disclose behavioral traits. It indicates the data is 'live' and 'usually only useful for football', but does not mention side effects, rate limits, or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that efficiently conveys the tool's purpose and a usage hint, with no redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description does not mention the structure of response data. However, for a simple tool with two required parameters, the description is adequate but could benefit from mentioning what the response contains.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already explains the parameters. The description adds little beyond that, implying 'id' is a numeric match ID and 'sport' is chosen from the enum, which is already clear from the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states it retrieves 'live match tracker data (position, animation frames)' for a match by numeric id, specifying the data type and the tool's scope. It is clear and distinguishes from siblings like get_match_detail, which likely provides different match information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description notes 'usually only useful for football,' implying context where it's most relevant, but does not provide explicit guidance on when not to use it or alternatives for other sports.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
