
cfbd-mcp-server

by lenwood

get-pregame-win-probability

Retrieve pregame win probability data for college football games using parameters like year, week, or team to analyze matchups before kickoff.

Instructions

Note: When using this tool, please explicitly mention that you are retrieving data from the College Football Data API. You must mention "College Football Data API" in every response.

Get college football pregame win probability data.
        Optional: year, week, team, season_type
        At least one parameter is required
        Example valid queries:
        - year=2023
        - team="Alabama"
        - year=2023, week=1
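
For illustration, the example queries above correspond to argument objects like the following (a hypothetical sketch; the exact call shape depends on the MCP client):

```python
# Hypothetical argument dictionaries matching the example queries above.
example_arguments = [
    {"year": 2023},
    {"team": "Alabama"},
    {"year": 2023, "week": 1},
]

# The tool requires at least one parameter, so every example is non-empty.
assert all(example_arguments), "every query must supply at least one parameter"
```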
        

Input Schema

| Name        | Required | Description | Default |
|-------------|----------|-------------|---------|
| year        | No       |             |         |
| week        | No       |             |         |
| team        | No       |             |         |
| season_type | No       |             |         |

Implementation Reference

  • The MCP call_tool handler that executes the tool: it maps the tool name to a parameter schema for validation and to the API endpoint /metrics/wp/pregame, calls the CFBD API with the validated parameters, and returns the JSON data or an error message.
    @server.call_tool()
    async def handle_call_tool(
        name: str,
        arguments: dict[str, Any] | None
    ) -> list[types.TextContent]:
        """Handle tool execution requests."""
        if not arguments:
            raise ValueError("Arguments are required")
    
        # Map tool names to their parameter schemas
        schema_map = {
            "get-games": getGames,
            "get-records": getTeamRecords,
            "get-games-teams": getGamesTeams,
            "get-plays": getPlays,
            "get-drives": getDrives,
            "get-play-stats": getPlayStats,
            "get-rankings": getRankings,
            "get-pregame-win-probability": getMetricsPregameWp,
            "get-advanced-box-score": getAdvancedBoxScore
        }
    
        if name not in schema_map:
            raise ValueError(f"Unknown tool: {name}")
    
        # Validate parameters against schema
        try:
            validated_params = validate_params(arguments, schema_map[name])
        except ValueError as e:
            return [types.TextContent(
                type="text",
                text=f"Validation error: {str(e)}"
            )]
    
        endpoint_map = {
            "get-games": "/games",
            "get-records": "/records",
            "get-games-teams": "/games/teams",
            "get-plays": "/plays",
            "get-drives": "/drives",
            "get-play-stats": "/play/stats",
            "get-rankings": "/rankings",
            "get-pregame-win-probability": "/metrics/wp/pregame",
            "get-advanced-box-score": "/game/box/advanced"
        }
       
        async with await get_api_client() as client:
            try:
                response = await client.get(endpoint_map[name], params=validated_params)  # use the validated params, not the raw arguments
                response.raise_for_status()
                data = response.json()
                return [types.TextContent(
                    type="text",
                    text=json.dumps(data)  # requires `import json`; str(data) is a Python repr, not valid JSON
                )]
            except httpx.HTTPStatusError as e:
                if e.response.status_code == 401:
                    return [types.TextContent(
                        type="text",
                        text="401: API authentication failed. Please check your API key."
                    )]
                elif e.response.status_code == 403:
                    return [types.TextContent(
                        type="text",
                        text="403: API access forbidden. Please check your permissions."
                    )]
                elif e.response.status_code == 429:
                    return [types.TextContent(
                        type="text",
                        text="429: Rate limit exceeded. Please try again later."
                    )]
                else:
                    return [types.TextContent(
                        type="text",
                        text=f"API Error: {e}"
                    )]
            except httpx.RequestError as e:
                return [types.TextContent(
                    type="text",
                    text=f"Network error: {str(e)}"
                )]
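
The helpers validate_params and get_api_client are referenced but not shown in this excerpt. A minimal sketch of validate_params, assuming it only rejects keys that the TypedDict schema does not declare (the server's real implementation may also check value types):

```python
from typing import Any, Optional, TypedDict, get_type_hints

class getMetricsPregameWp(TypedDict, total=False):  # mirrors the schema below
    year: Optional[int]
    week: Optional[int]
    team: Optional[str]
    season_type: Optional[str]

def validate_params(arguments: dict[str, Any], schema: type) -> dict[str, Any]:
    """Reject any argument key the TypedDict schema does not declare."""
    allowed = set(get_type_hints(schema))
    unknown = set(arguments) - allowed
    if unknown:
        raise ValueError(f"Unknown parameters: {sorted(unknown)}")
    return arguments
```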
  • TypedDict schema defining optional input parameters: year, week, team, season_type
    class getMetricsPregameWp(TypedDict): # /metrics/wp/pregame endpoint
        year: Optional[int]
        week: Optional[int]
        team: Optional[str]
        season_type: Optional[str]
  • Registration of the tool in handle_list_tools(): defines name, description, and links to input schema
    types.Tool(
        name="get-pregame-win-probability",
        description=base_description + """Get college football pregame win probability data.
        Optional: year, week, team, season_type
        At least one parameter is required
        Example valid queries:
        - year=2023
        - team="Alabama"
        - year=2023, week=1
        """,
        inputSchema=create_tool_schema(getMetricsPregameWp)
    ),
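
create_tool_schema is also not shown in this excerpt. A plausible sketch, assuming it derives a JSON Schema properties object from the TypedDict annotations by unwrapping Optional[...] (the name mapping and details here are assumptions, not the server's actual code):

```python
import typing
from typing import Optional, TypedDict, get_type_hints

# Python-type to JSON Schema type mapping used by the sketch below.
_JSON_TYPES = {int: "integer", str: "string", float: "number", bool: "boolean"}

class getMetricsPregameWp(TypedDict, total=False):  # as defined above
    year: Optional[int]
    week: Optional[int]
    team: Optional[str]
    season_type: Optional[str]

def create_tool_schema(schema: type) -> dict:
    """Map each TypedDict field to a JSON Schema property, unwrapping Optional."""
    props = {}
    for name, hint in get_type_hints(schema).items():
        # Optional[X] is Union[X, None]; keep the non-None member.
        args = [a for a in typing.get_args(hint) if a is not type(None)]
        base = args[0] if args else hint
        props[name] = {"type": _JSON_TYPES.get(base, "string")}
    return {"type": "object", "properties": props}
```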
  • TypedDict schema for the expected response from the API endpoint
    class MetricsPregameWpResponse(TypedDict): # /metrics/wp/pregame response
        season: int
        seasonType: str
        week: int
        gameId: int
        homeTeam: str
        awayTeam: str
        spread: float  # Using float since spread can be decimal
        homeWinProb: float  # Using float for probability (0-1)
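
As an illustration of the response shape, here is a hypothetical record (all field values invented for the example) and the away-team probability derived from it:

```python
from typing import TypedDict

class MetricsPregameWpResponse(TypedDict):  # as defined above
    season: int
    seasonType: str
    week: int
    gameId: int
    homeTeam: str
    awayTeam: str
    spread: float
    homeWinProb: float

# Hypothetical record; values are invented for illustration only.
record: MetricsPregameWpResponse = {
    "season": 2023, "seasonType": "regular", "week": 1, "gameId": 0,
    "homeTeam": "Alabama", "awayTeam": "Middle Tennessee",
    "spread": -39.5, "homeWinProb": 0.98,
}

# homeWinProb is a 0-1 probability, so the away side is its complement.
away_win_prob = 1.0 - record["homeWinProb"]
```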
  • Schema registration in handle_read_resource() for exposing parameter and response schemas via MCP resources
    "schema://metrics/wp/pregame": {
        "endpoint": "/metrics/wp/pregame",
        "parameters": getMetricsPregameWp.__annotations__,
        "response": MetricsPregameWpResponse.__annotations__,
        "description": "Get pregame win probability records for specified parameters"
    },
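
Presumably handle_read_resource looks the requested URI up in this map and serializes the entry. A hypothetical pure-function sketch (the registry is written out literally here rather than via __annotations__):

```python
import json

# Hypothetical registry mirroring the resource entry above.
SCHEMAS = {
    "schema://metrics/wp/pregame": {
        "endpoint": "/metrics/wp/pregame",
        "parameters": {"year": "Optional[int]", "week": "Optional[int]",
                       "team": "Optional[str]", "season_type": "Optional[str]"},
        "description": "Get pregame win probability records for specified parameters",
    }
}

def read_schema_resource(uri: str) -> str:
    """Return the registered schema as a JSON string, or raise for unknown URIs."""
    if uri not in SCHEMAS:
        raise ValueError(f"Unknown resource: {uri}")
    return json.dumps(SCHEMAS[uri], indent=2)
```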

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the data source ('College Football Data API') and a requirement to cite it in responses, which adds useful context about attribution. However, it doesn't describe key behavioral traits like whether this is a read-only operation, potential rate limits, error handling, or the format of returned data (e.g., JSON structure). For a tool with no annotations, this leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is moderately concise but includes redundant or misplaced content. The first sentence about mentioning the API in responses is important but could be integrated more smoothly. The parameter list and examples are helpful but could be structured better (e.g., bullet points for clarity). It's front-loaded with the core purpose, but some sentences don't directly enhance tool understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, 0% schema coverage, no output schema, and 4 parameters, the description is incomplete. It covers the basic purpose and parameters superficially but misses critical details: no explanation of return values, error cases, or deeper behavioral context. For a data retrieval tool with multiple filters, this leaves too much undefined for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It lists the parameters (year, week, team, season_type) as optional and provides example queries, which adds some meaning beyond the bare schema. However, it doesn't explain what each parameter does (e.g., what 'season_type' entails, format for 'team'), leaving semantics unclear. With 4 parameters and low coverage, this is insufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get college football pregame win probability data.' This specifies the verb ('Get'), resource ('college football pregame win probability data'), and distinguishes it from sibling tools that handle box scores, drives, games, etc. However, it doesn't explicitly differentiate from hypothetical similar win probability tools, keeping it at 4 rather than 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage guidance by listing optional parameters and stating 'At least one parameter is required,' which helps understand when to use it (i.e., when you have at least one of these filters). It includes example queries for context. However, it lacks explicit when-to-use vs. alternatives (e.g., compared to sibling tools like get-games) or prerequisites, so it's not fully comprehensive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/lenwood/cfbd-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.