
Get Most Anticipated Games

get_most_anticipated_games

Fetch upcoming video games with high anticipation by retrieving titles sorted by hype count, filtered for future or TBA releases from the IGDB database.

Instructions

Fetch upcoming games sorted by hype count, filtered for future or TBA releases

Input Schema

Name       Required  Description                                Default
fields     No        Comma-separated list of fields to return   id,slug,name,hypes,first_release_date,platforms.name,genres.name,status
limit      No        Maximum number of results to return        25
min_hypes  No        Minimum number of hypes required           25
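
Put together, a tools/call payload for this tool might look like the following sketch. The tool and argument names come from the schema above; the argument values and the surrounding JSON-RPC envelope are illustrative assumptions, and the server plumbing is omitted:

```python
import json

# Hypothetical arguments; all parameter names come from the input schema.
arguments = {
    "fields": "id,name,hypes,first_release_date",
    "limit": 10,
    "min_hypes": 50,
}

# A minimal JSON-RPC tools/call envelope (shape assumed, not from the source).
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_most_anticipated_games",
        "arguments": arguments,
    },
}

print(json.dumps(payload, indent=2))
```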

Output Schema

Name    Required
result  Yes

Implementation Reference

  • Registration of the 'get_most_anticipated_games' tool using the @mcp.tool decorator from FastMCP, which defines the tool's name, title, and description.
    @mcp.tool(
        name="get_most_anticipated_games",
        title="Get Most Anticipated Games",
        description="Fetch upcoming games sorted by hype count, filtered for future or TBA releases"
    )
  • The core handler function implements the tool logic: it retrieves the IGDB client, builds an Apicalypse query that filters games by minimum hypes and by future release date or TBA status, sorts by hypes descending, and executes the API request.
    # Imports required by this excerpt (module-level in the source file; assumed, not shown in the reference)
    import time
    from typing import Annotated, Any, Dict, List

    from fastmcp import Context
    from pydantic import Field

    async def get_most_anticipated_games(
        ctx: Context,
        fields: Annotated[
            str,
            Field(description="Comma-separated list of fields to return"),
        ] = "id,slug,name,hypes,first_release_date,platforms.name,genres.name,status",
        limit: Annotated[
            int, Field(description="Maximum number of results to return", ge=1, le=500)
        ] = 25,
        min_hypes: Annotated[
            int, Field(description="Minimum number of hypes required", ge=0)
        ] = 25,
    ) -> List[Dict[str, Any]]:
        """
        Get the most anticipated upcoming games based on hype count.
        Automatically filters for future or TBA releases.
    
        Args:
            ctx: Context for accessing session configuration
            fields: Comma-separated list of fields to return
            limit: Maximum number of results to return (default: 25, max: 500)
            min_hypes: Minimum number of hypes required (default: 25)
    
        Returns:
            List of most anticipated games sorted by hype count
        """
        igdb_client = get_igdb_client(ctx)
    
        # Get current timestamp
        current_timestamp = int(time.time())
    
        # Build query: games with hypes that are either future releases or TBA
        query = (
            f"fields {fields}; "
            f"where hypes >= {min_hypes} & "
            f"(status = null | status != 0) & "
            f"(first_release_date > {current_timestamp} | first_release_date = null); "
            f"sort hypes desc; "
            f"limit {limit};"
        )
    
        return await igdb_client.make_request("games", query)
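
With the default parameter values, the f-string above renders to a single Apicalypse query string. This standalone sketch reproduces the construction, using a fixed timestamp in place of int(time.time()):

```python
# Default parameter values from the handler signature.
fields = "id,slug,name,hypes,first_release_date,platforms.name,genres.name,status"
limit = 25
min_hypes = 25
current_timestamp = 1700000000  # fixed stand-in for int(time.time())

# Same f-string construction as the handler.
query = (
    f"fields {fields}; "
    f"where hypes >= {min_hypes} & "
    f"(status = null | status != 0) & "
    f"(first_release_date > {current_timestamp} | first_release_date = null); "
    f"sort hypes desc; "
    f"limit {limit};"
)

print(query)
```

Note that unset parameters in IGDB are matched with "= null", which is why TBA titles (no first_release_date) survive the filter.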
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions sorting and filtering behavior, but lacks critical details: it doesn't specify whether this is a read-only operation, whether it requires authentication, rate limits, pagination, or what happens with invalid parameters. For a tool with no annotations, this leaves significant behavioral gaps unaddressed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. Every element ('Fetch upcoming games', 'sorted by hype count', 'filtered for future or TBA releases') contributes directly to understanding the tool's function. It's appropriately sized for its complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 100% schema coverage and the presence of an output schema, the description doesn't need to explain parameters or return values. However, with no annotations and a tool that involves filtering and sorting, it should provide more behavioral context (e.g., read-only nature, error handling). It's minimally adequate but has clear gaps in transparency for a tool with this functionality.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all three parameters. The description adds no parameter-specific information beyond what's in the schema (e.g., it doesn't explain 'hypes' or 'TBA' in more detail). The baseline score of 3 is appropriate since the schema does the heavy lifting, but the description doesn't compensate with additional semantic context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Fetch'), resource ('upcoming games'), and key sorting/filtering criteria ('sorted by hype count, filtered for future or TBA releases'). It distinguishes from siblings like 'get_game_details' (specific game) and 'search_games' (general search), though it doesn't explicitly name alternatives. The purpose is specific but could be slightly more distinct from 'custom_query'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving highly anticipated games based on hype, but provides no explicit guidance on when to use this tool versus alternatives like 'search_games' or 'custom_query'. It mentions filtering criteria but doesn't state prerequisites, exclusions, or comparative scenarios. Usage is contextually implied rather than explicitly defined.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/bielacki/igdb-mcp-server'
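
The same request can be issued from Python. This sketch only constructs the request object against the URL shown above; no network call is performed here:

```python
from urllib.request import Request

# Directory API endpoint from the curl example above.
url = "https://glama.ai/api/mcp/v1/servers/bielacki/igdb-mcp-server"

# Build (but do not send) the GET request; pass it to urlopen() to execute.
req = Request(url, method="GET")
print(req.full_url)
```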

If you have feedback or need assistance with the MCP directory API, please join our Discord server.