chess

Server Details

Chess.com MCP — wraps the Chess.com public API (free, no auth)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-chess
GitHub Stars: 0
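
The server is exposed over streamable HTTP, so any MCP client that supports that transport can list and call its tools. Below is a minimal connection sketch assuming the official MCP Python SDK (the `mcp` package); the import path, the three-value context manager, and the server URL are assumptions, since the URL field in this listing is blank.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder endpoint: the listing above does not show the real URL.
SERVER_URL = "https://example.invalid/mcp"


async def main() -> None:
    # The streamable HTTP client yields read/write streams plus a session-id getter.
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            # Expect: get_games, get_leaderboards, get_player, get_stats
            print([tool.name for tool in tools.tools])


asyncio.run(main())
```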

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.4/5 across 4 of 4 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: get_games retrieves game histories, get_leaderboards provides top player rankings, get_player fetches profile information, and get_stats gives performance metrics. There is no overlap or ambiguity between these functions.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern with 'get_' prefix (get_games, get_leaderboards, get_player, get_stats). The naming is uniform and predictable throughout the set.

Tool Count: 4/5

Four tools is a reasonable number for a chess server, covering key areas like profiles, games, stats, and leaderboards. It is slightly lean but well-scoped, with each tool serving a clear purpose.

Completeness: 3/5

The tools cover retrieval of player data, games, stats, and leaderboards, but there are notable gaps such as no create, update, or delete operations (e.g., for managing games or profiles), which limits interactive or management capabilities in the chess domain.

Available Tools

4 tools
get_games: B

Get a player's completed games for a specific month. Returns game URLs, time controls, results, and player ratings.

Parameters (JSON Schema):
- year (required): Year (e.g., 2024)
- month (required): Month as a number (1-12)
- username (required): Chess.com username
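
Given the three parameters above, this tool presumably wraps the Chess.com published-data monthly archive endpoint. A standard-library sketch of that underlying request follows; the URL pattern, the zero-padded month, the User-Agent requirement, and the `games` key in the response are assumptions about the Chess.com API, not something this listing documents.

```python
import json
from urllib.request import Request, urlopen


def fetch_monthly_games(username: str, year: int, month: int) -> dict:
    # Archive URLs appear to expect a lowercase username and a zero-padded month.
    url = f"https://api.chess.com/pub/player/{username.lower()}/games/{year}/{month:02d}"
    # Chess.com tends to reject requests that omit a User-Agent header.
    req = Request(url, headers={"User-Agent": "mcp-chess-example/0.1"})
    with urlopen(req) as resp:
        return json.load(resp)


archive = fetch_monthly_games("hikaru", 2024, 1)
print(len(archive.get("games", [])), "games in the archive")
```
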
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the tool returns game data (URLs, time controls, results, ratings) but doesn't disclose behavioral traits like whether it's read-only (implied by 'Get'), rate limits, authentication needs, error conditions, pagination, or data freshness. For a tool with zero annotation coverage, this leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the core purpose and followed by return details. Every word earns its place with zero waste, making it highly efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 required parameters, no output schema, no annotations), the description is adequate but incomplete. It covers the purpose and return data but lacks behavioral context (e.g., rate limits, errors) and usage guidelines. Without annotations or output schema, more detail would be helpful for safe and effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters (username, year, month) with clear descriptions. The description adds no additional parameter semantics beyond implying the temporal filtering context, which is already covered by the schema. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get'), resource ('player's completed games'), and scope ('for a specific month'), distinguishing it from siblings like get_leaderboards (leaderboard data), get_player (player profile), and get_stats (statistics). It provides a complete picture of what the tool does.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions 'for a specific month' which implies temporal context, but provides no explicit guidance on when to use this tool versus alternatives like get_stats (which might include game data) or get_player (which might include game history). There's no mention of prerequisites, limitations, or comparative use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_leaderboards: B

Get the top-ranked Chess.com players across game formats including daily, rapid, blitz, and bullet.

Parameters (JSON Schema): No parameters
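
Since there are no parameters, an agent-side call is just the tool name plus an empty argument object. A hedged sketch, reusing a `ClientSession` like the one in the connection example near the top of this page; the shape of the returned content blocks is an assumption about this server.

```python
from mcp import ClientSession


async def print_leaderboards(session: ClientSession) -> None:
    # get_leaderboards takes no arguments, so pass an empty dict.
    result = await session.call_tool("get_leaderboards", {})
    for block in result.content:
        # Text content blocks carry the rankings; other block types are skipped.
        if block.type == "text":
            print(block.text)
```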

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes a read operation ('Get'), implying it's non-destructive, but doesn't mention any behavioral traits such as rate limits, authentication needs, pagination, or what happens if no data is available. For a tool with zero annotation coverage, this leaves significant gaps in understanding how it behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that states the tool's purpose clearly without unnecessary details. It's front-loaded with the core action and resource, and every word earns its place by specifying the scope (game formats). There's no waste or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (simple read operation with no parameters) and lack of annotations/output schema, the description is minimally adequate. It explains what the tool does but lacks details on behavioral aspects and usage context. Without an output schema, it doesn't describe return values, which could be a gap, but the simplicity of the tool makes this less critical.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and the schema description coverage is 100% (since there are no parameters to describe). The description adds no parameter information, which is appropriate here. Baseline for 0 parameters is 4, as there's nothing to compensate for, and the description doesn't need to cover parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get the top-ranked Chess.com players across game formats including daily, rapid, blitz, and bullet.' It specifies the verb ('Get') and resource ('top-ranked Chess.com players'), and mentions the scope of game formats. However, it doesn't explicitly differentiate from sibling tools like get_stats (which might provide statistical data rather than rankings).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like get_player or get_stats. It mentions the types of rankings (daily, rapid, etc.) but doesn't specify use cases, prerequisites, or exclusions. Without this context, an agent might struggle to choose between this and sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_player: A

Get a Chess.com player's public profile including name, title, followers, country, join date, and last online time.

Parameters (JSON Schema):
- username (required): Chess.com username (case-insensitive, e.g., "hikaru", "magnuscarlsen")
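
For comparison, the public profile lookup this tool presumably wraps takes only the username, lowercased here to match the case-insensitive note in the schema. A standard-library sketch; the endpoint path and the epoch-seconds `joined` field are assumptions about the Chess.com API rather than details stated in this listing.

```python
import json
from datetime import datetime, timezone
from urllib.request import Request, urlopen


def fetch_profile(username: str) -> dict:
    url = f"https://api.chess.com/pub/player/{username.lower()}"
    req = Request(url, headers={"User-Agent": "mcp-chess-example/0.1"})
    with urlopen(req) as resp:
        return json.load(resp)


profile = fetch_profile("Hikaru")  # case-insensitive, per the schema example
joined = datetime.fromtimestamp(profile["joined"], tz=timezone.utc)
print(profile.get("title"), profile.get("name"), "joined", joined.date())
```
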
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates this is a read operation for public data, which implies no destructive actions or authentication needs, but does not mention rate limits, error conditions, or response format details. It adds basic context about what data is returned, but lacks depth on operational traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose and key data points without unnecessary words. It is front-loaded with the main action and resource, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one parameter, no output schema, no annotations), the description is adequate but has gaps. It specifies what data is returned, which helps compensate for the lack of output schema, but does not cover behavioral aspects like error handling or performance constraints, leaving room for improvement in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the single parameter 'username' fully documented in the schema. The description does not add extra parameter details beyond what the schema provides, but since there is only one parameter and the schema covers it well, a baseline of 4 is appropriate as the description doesn't need to compensate for gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get') and resource ('Chess.com player's public profile') with explicit details about what information is retrieved ('name, title, followers, country, join date, and last online time'). It distinguishes itself from sibling tools like 'get_games', 'get_leaderboards', and 'get_stats' by focusing on profile data rather than game history, rankings, or statistics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving public profile data, but does not explicitly state when to use this tool versus alternatives like 'get_stats' for performance metrics or 'get_games' for match history. No exclusions or prerequisites are mentioned, leaving some ambiguity about context-specific selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_stats: B

Get a player's game statistics including current rating, best rating, and win/loss/draw record for daily, rapid, blitz, and bullet formats.

Parameters (JSON Schema):
- username (required): Chess.com username
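
The description promises current rating, best rating, and a win/loss/draw record per format. A small sketch that flattens a stats payload into exactly those fields; the `chess_daily`/`chess_rapid`/`chess_blitz`/`chess_bullet` keys and the `last`/`best`/`record` sub-objects are assumptions about the Chess.com stats response, not guaranteed by this listing.

```python
FORMATS = ("daily", "rapid", "blitz", "bullet")


def summarize_stats(stats: dict) -> dict:
    """Reduce a raw stats payload to current/best rating and W/L/D per format."""
    summary = {}
    for fmt in FORMATS:
        section = stats.get(f"chess_{fmt}", {})
        record = section.get("record", {})
        summary[fmt] = {
            "current_rating": section.get("last", {}).get("rating"),
            "best_rating": section.get("best", {}).get("rating"),
            "win_loss_draw": (
                record.get("win"),
                record.get("loss"),
                record.get("draw"),
            ),
        }
    return summary
```
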
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes what data is returned but doesn't mention error handling (e.g., for invalid usernames), rate limits, authentication requirements, or whether the data is cached or real-time. This leaves significant gaps in understanding the tool's operational behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose and scope without unnecessary words. It front-loads the key action and resource, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (retrieving multi-format statistics) and lack of annotations or output schema, the description adequately covers what data is returned but falls short on behavioral aspects like error handling and performance. It's complete enough for basic use but lacks depth for robust agent interaction.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the single parameter 'username' clearly documented as 'Chess.com username'. The description adds no additional semantic context beyond this, such as format constraints or examples. Since the schema does the heavy lifting, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get') and resource ('a player's game statistics'), listing the exact data points returned (rating, best rating, win/loss/draw record) and the formats covered (daily, rapid, blitz, bullet). It distinguishes from sibling tools like 'get_games' (which likely returns game details) and 'get_player' (which likely returns profile information).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_player' or 'get_games'. It doesn't mention prerequisites, such as whether the username must exist or be valid, nor does it specify any context for when this tool is appropriate versus other statistical tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
