gamedeals
Server Details
Gamedeals MCP — wraps CheapShark API (game deal aggregator, no auth required)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-gamedeals
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.6/5 across 4 of 4 tools scored.
Each tool has a distinct and non-overlapping purpose: get_game_details retrieves comprehensive price data for a specific game, list_stores provides store metadata, search_deals finds deals with filters, and search_games searches for games by title. There is no ambiguity in tool selection as they target different resources and actions.
All tool names follow a consistent verb_noun pattern with snake_case: get_game_details, list_stores, search_deals, and search_games. The verbs (get, list, search) are appropriate and predictable, making the set easy to navigate.
With 4 tools, this server is well-scoped for its purpose of accessing game deals and related information. Each tool serves a clear function without redundancy, and the count is appropriate for the domain, covering core operations without being overwhelming or insufficient.
The tool set covers essential operations for game deals: searching games and deals, getting detailed price information, and listing stores. A minor gap exists in update or management functions (e.g., tracking deals or user preferences), but for a read-only API, it provides complete coverage of core workflows.
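To make the core workflow concrete, here is a hedged sketch of the typical chained usage: search_games to find a game ID, then get_game_details for full pricing. It assumes an already-connected MCP client object (`client`), such as the TypeScript SDK client, and the query and ID values are only illustrative:

```typescript
// 1. Find a game by title; each result includes a CheapShark game ID.
const found = await client.callTool({
  name: "search_games",
  arguments: { query: "Hollow Knight", limit: 5 },
});

// 2. Feed one of those IDs into get_game_details for price history,
//    the cheapest price ever recorded, and current deals across stores.
//    ("12345" is illustrative; the real ID comes from the search result.)
const details = await client.callTool({
  name: "get_game_details",
  arguments: { id: "12345" },
});
```

The result shapes are not published in this listing, so downstream parsing is left out of the sketch.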
Available Tools
4 tools

get_game_details (Grade: A)
Get full price details for a game including price history, cheapest price ever recorded, and current deals across all stores.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | CheapShark game ID (obtained from search_games) | |
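The data described (price history, cheapest price ever recorded, current deals) mirrors CheapShark's game-lookup endpoint, which this tool presumably wraps. The sketch below calls that public endpoint directly for comparison; the field names come from CheapShark's API docs, not from this listing, and the ID is only an example:

```typescript
// Direct call to the CheapShark endpoint this tool presumably wraps.
const res = await fetch("https://www.cheapshark.com/api/1.0/games?id=612");
const game = await res.json();

// Typical CheapShark fields (assumed, not guaranteed by this listing):
// game.info.title, game.cheapestPriceEver.{price,date},
// game.deals[] with storeID, dealID, price, retailPrice, savings.
console.log(game.cheapestPriceEver);
```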
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes what data is returned (price details, history, deals) but does not cover important traits such as rate limits, authentication needs, error handling, or whether this is a read-only operation. The description adds value by specifying the scope of data but misses key behavioral aspects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose without unnecessary words. It is front-loaded with the main action ('Get full price details') and lists specific data points clearly, making it easy to understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is adequate but has gaps. It explains what data is returned but does not address behavioral aspects like rate limits or error handling. Without annotations or output schema, the description should provide more context on how the tool behaves, but it partially compensates by detailing the data scope.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'id' parameter documented as 'CheapShark game ID (obtained from search_games)'. The description does not add any additional meaning beyond what the schema provides, as it does not mention parameters at all. Baseline 3 is appropriate since the schema handles parameter documentation effectively.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Get full price details') and resources ('for a game'), distinguishing it from siblings like 'search_games' (which finds games) and 'search_deals' (which finds deals). It explicitly lists the types of details returned: price history, cheapest price ever, and current deals across stores.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying that the ID should be 'obtained from search_games' in the schema, but it does not explicitly state when to use this tool versus alternatives like 'search_deals' for deals or 'list_stores' for store information. It provides some context but lacks explicit guidance on exclusions or comparisons.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_stores (Grade: A)
List all game stores tracked by CheapShark. Returns store names and IDs for use with search_deals.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
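Because the tool takes no parameters, a call is just the tool name. A minimal sketch, assuming an MCP client object (`client`) like the TypeScript SDK's:

```typescript
// No arguments are needed; the result lists store names and IDs.
const stores = await client.callTool({ name: "list_stores", arguments: {} });

// The store IDs returned here are what search_deals expects in store_id.
console.log(stores);
```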
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is a list operation that returns data, implying it's read-only and non-destructive. However, it doesn't mention potential limitations like rate limits, authentication requirements, or whether the list is cached/real-time.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each earn their place: the first states the action and resource, the second explains the return value and its purpose. No wasted words, front-loaded with the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter list tool with no annotations and no output schema, the description provides adequate context about what it does and why. It could be more complete by mentioning the format of the returned data or any behavioral constraints, but it covers the essential purpose and usage linkage well.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters with 100% schema description coverage, so the schema already fully documents the absence of parameters. The description appropriately doesn't add parameter information beyond what the schema provides, maintaining focus on the tool's purpose and output.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List all game stores') and resource ('tracked by CheapShark'), distinguishing it from siblings like search_deals or search_games. It explicitly identifies what gets returned ('store names and IDs') and their purpose ('for use with search_deals').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool ('for use with search_deals'), establishing its role as a prerequisite for another sibling tool. However, it doesn't explicitly state when NOT to use it or mention alternatives like get_game_details for different purposes.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_deals (Grade: B)
Search for game deals with optional filters. Returns deal title, store, sale price, normal price, savings percentage, Metacritic score, and deal rating.
| Name | Required | Description | Default |
|---|---|---|---|
| title | No | Filter deals by game title (partial match supported) | |
| sort_by | No | Sort order: "Deal Rating", "Price", "Metacritic", or "Reviews" | "Deal Rating" |
| store_id | No | Filter by store ID (use list_stores to get IDs) | |
| page_size | No | Number of results to return (max: 60) | 10 |
| lower_price | No | Minimum price filter | |
| upper_price | No | Maximum price filter (e.g., 5 for deals under $5) | |
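Combining the documented filters, a bargain-hunting call might look like the sketch below. It assumes an MCP client object (`client`) like the TypeScript SDK's; the argument types are illustrative and should be checked against the tool's input schema:

```typescript
// Find up to 20 deals under $5 at one store, best-rated first.
const deals = await client.callTool({
  name: "search_deals",
  arguments: {
    store_id: "1",          // a store ID returned by list_stores
    upper_price: 5,         // deals under $5
    sort_by: "Deal Rating", // the documented default, shown explicitly
    page_size: 20,          // up to the documented max of 60
  },
});
```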
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the return fields (deal title, store, prices, etc.) which is helpful, but doesn't describe pagination behavior (though page_size parameter hints at it), rate limits, authentication requirements, or error conditions. For a search tool with 6 parameters, this leaves significant behavioral aspects undocumented.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise: two sentences that efficiently convey the core functionality and return format. The first states the purpose; the second lists the return fields. Every word earns its place, with zero redundancy or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a search tool with 6 well-documented parameters but no annotations and no output schema, the description provides adequate but incomplete context. It covers what the tool does and what it returns, but lacks behavioral details (pagination, errors, limits) and sibling tool differentiation. The absence of an output schema means the description's return field listing is valuable, but overall completeness is just adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so all parameters are well-documented in the schema itself. The description adds minimal value beyond the schema: it mentions 'optional filters', which aligns with the schema's zero required parameters, but doesn't provide additional context about parameter interactions or usage patterns. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search for game deals with optional filters.' It specifies the resource (game deals) and action (search). However, it doesn't explicitly differentiate from sibling tools like 'search_games' - both involve searching, but one is for deals and the other for games. The distinction is implied but not stated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. There are three sibling tools (get_game_details, list_stores, search_games), but the description doesn't mention any of them or explain when this search_deals tool is appropriate versus searching for games directly. No context about prerequisites or exclusions is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_games (Grade: A)
Search for games by title. Returns each game with its cheapest current price and a deal ID to get more details.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return | 10 |
| query | Yes | Game title to search for | |
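For comparison, this tool likely maps onto CheapShark's public /games search endpoint. The sketch below calls that endpoint directly; the endpoint and field names come from CheapShark's API docs rather than this listing:

```typescript
// Direct CheapShark search the tool presumably wraps.
const res = await fetch(
  "https://www.cheapshark.com/api/1.0/games?title=hollow%20knight&limit=10",
);
const games = await res.json();

// Typical fields per result (assumed): gameID, external (the title),
// cheapest (current lowest price), and cheapestDealID.
console.log(games[0]);
```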
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the return data (price and deal ID) but lacks details on error handling, rate limits, authentication needs, pagination, or whether the search is case-sensitive. For a search tool with zero annotation coverage, this leaves significant gaps in understanding its operational behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the core purpose and followed by return details. Every word earns its place with no redundancy or fluff, making it highly efficient and easy to parse for an AI agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search function with two parameters), no annotations, and no output schema, the description is adequate but incomplete. It covers the purpose and return data but lacks behavioral context (e.g., error cases, performance limits) and detailed output structure, which would be needed for full agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the input schema fully documents both parameters ('query' for game title and 'limit' for result count). The description adds no additional parameter semantics beyond what the schema provides, such as search syntax or format examples, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search for games by title') and resource ('games'), distinguishing it from sibling tools like 'get_game_details' (detailed view), 'list_stores' (store listing), and 'search_deals' (deal-focused search). It explicitly mentions the return data (cheapest price and deal ID), making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for finding games by title, but it does not explicitly state when to use this tool versus alternatives like 'search_deals' or 'get_game_details'. There is no guidance on prerequisites, exclusions, or comparative contexts, leaving the agent to infer usage from the purpose alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
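Once the file is published, a quick reachability check is easy to script. The sketch below simply fetches the well-known path on a placeholder domain and prints the maintainer emails:

```typescript
// Placeholder domain: substitute the domain that hosts your MCP server.
const res = await fetch("https://example.com/.well-known/glama.json");
const manifest = await res.json();

// Should include the maintainer email that matches your Glama account.
console.log(manifest.maintainers?.map((m: { email: string }) => m.email));
```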
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!