
cfbd-mcp-server

by lenwood

get-rankings

Retrieve college football rankings data from the College Football Data API by specifying year, week, and season type parameters.

Instructions

Note: When using this tool, please explicitly mention that you are retrieving data from the College Football Data API. You must mention "College Football Data API" in every response.

Get college football rankings data.
        Required: year
        Optional: week, season_type
        Example valid queries:
        - year=2023
        - year=2023, week=1
        - year=2023, season_type="regular"
        
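In Python terms, those example queries map onto argument dictionaries like the ones below (a sketch; only `year` is required, and `"regular"` is the only `season_type` value shown on this page):

```python
# Argument dictionaries matching the example queries above.
args_year_only = {"year": 2023}
args_with_week = {"year": 2023, "week": 1}
args_with_type = {"year": 2023, "season_type": "regular"}

# year is the only required key; a minimal client-side check:
for args in (args_year_only, args_with_week, args_with_type):
    assert "year" in args, "year is required for get-rankings"
```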

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| year | Yes | | |
| week | No | | |
| season_type | No | | |

Implementation Reference

  • Registration of the get-rankings tool in the list_tools handler, specifying input schema from getRankings TypedDict
    types.Tool(
        name="get-rankings",
        description=base_description + """Get college football rankings data.
        Required: year
        Optional: week, season_type
        Example valid queries:
        - year=2023
        - year=2023, week=1
        - year=2023, season_type="regular"
        """,
    inputSchema=create_tool_schema(getRankings)
    )
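The registration above relies on a `create_tool_schema` helper whose implementation is not shown on this page. A minimal sketch, assuming it derives a JSON Schema from the TypedDict's annotations and treats `Optional[...]` fields as non-required (the real helper may differ):

```python
from typing import Optional, TypedDict, Union, get_args, get_origin, get_type_hints

# Hypothetical mapping from Python annotations to JSON Schema type names.
_JSON_TYPES = {int: "integer", str: "string", float: "number", bool: "boolean"}

def create_tool_schema(td: type) -> dict:
    """Sketch: build a JSON Schema object from a TypedDict's annotations."""
    properties, required = {}, []
    for name, tp in get_type_hints(td).items():
        optional = get_origin(tp) is Union and type(None) in get_args(tp)
        if optional:
            # Unwrap Optional[X] -> X before mapping the type.
            tp = next(a for a in get_args(tp) if a is not type(None))
        else:
            required.append(name)
        properties[name] = {"type": _JSON_TYPES.get(tp, "string")}
    return {"type": "object", "properties": properties, "required": required}

class getRankings(TypedDict):  # /rankings endpoint
    year: int
    week: Optional[int]
    season_type: Optional[str]

schema = create_tool_schema(getRankings)
# -> year required as "integer"; week and season_type optional
```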
  • TypedDict defining input parameters for get-rankings tool: required year (int), optional week (int) and season_type (str)
    class getRankings(TypedDict): # /rankings endpoint
        year: int
        week: Optional[int]
        season_type: Optional[str]
  • Tool name to schema mapping used for input validation in call_tool handler; get-rankings maps to getRankings TypedDict
    schema_map = {
        "get-games": getGames,
        "get-records": getTeamRecords,
        "get-games-teams": getGamesTeams,
        "get-plays": getPlays,
        "get-drives": getDrives,
        "get-play-stats": getPlayStats,
        "get-rankings": getRankings,
        "get-pregame-win-probability": getMetricsPregameWp,
        "get-advanced-box-score": getAdvancedBoxScore
    }
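The page does not show the validation logic itself, but a hedged sketch of how the call_tool handler might check incoming arguments against the mapped TypedDict (`validate_arguments` is a hypothetical helper, abridged to the get-rankings entry):

```python
from typing import Optional, TypedDict, Union, get_args, get_origin, get_type_hints

class getRankings(TypedDict):  # /rankings endpoint
    year: int
    week: Optional[int]
    season_type: Optional[str]

schema_map = {"get-rankings": getRankings}  # abridged from the full map above

def validate_arguments(name: str, arguments: dict) -> None:
    """Hypothetical helper: reject unknown tools, unknown keys, and missing required keys."""
    if name not in schema_map:
        raise ValueError(f"Unknown tool: {name}")
    hints = get_type_hints(schema_map[name])
    unknown = set(arguments) - set(hints)
    if unknown:
        raise ValueError(f"Unknown parameters: {sorted(unknown)}")
    for key, tp in hints.items():
        optional = get_origin(tp) is Union and type(None) in get_args(tp)
        if not optional and key not in arguments:
            raise ValueError(f"Missing required parameter: {key}")

validate_arguments("get-rankings", {"year": 2023, "week": 1})  # passes silently
```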
  • Tool name to CFBD API endpoint mapping; get-rankings maps to /rankings
    endpoint_map = {
        "get-games": "/games",
        "get-records": "/records",
        "get-games-teams": "/games/teams",
        "get-plays": "/plays",
        "get-drives": "/drives",
        "get-play-stats": "/play/stats",
        "get-rankings": "/rankings",
        "get-pregame-win-probability": "/metrics/wp/pregame",
        "get-advanced-box-score": "/game/box/advanced"
    }
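Combining the endpoint map with the validated arguments, the request URL for a get-rankings call can be formed as sketched below. The base URL `https://api.collegefootballdata.com` is the public CFBD API host; note the real server uses an authenticated httpx client, with the API key sent in an `Authorization: Bearer` header rather than in the URL:

```python
from urllib.parse import urlencode

BASE_URL = "https://api.collegefootballdata.com"  # public CFBD API base

endpoint_map = {"get-rankings": "/rankings"}  # abridged from the full map above

def build_request_url(name: str, arguments: dict) -> str:
    """Sketch: resolve a tool name to its CFBD endpoint and append query params."""
    query = urlencode({k: v for k, v in arguments.items() if v is not None})
    return f"{BASE_URL}{endpoint_map[name]}" + (f"?{query}" if query else "")

url = build_request_url("get-rankings", {"year": 2023, "week": 1})
# -> https://api.collegefootballdata.com/rankings?year=2023&week=1
```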
  • Core execution logic in call_tool: performs authenticated GET request to mapped endpoint with validated parameters and returns JSON response as text
    response = await client.get(endpoint_map[name], params=arguments)
    response.raise_for_status()
    data = response.json()
    return [types.TextContent(
        type="text",
        text=str(data)
    )]
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the API source and attribution requirement, which is useful context, but doesn't describe important behavioral aspects like rate limits, authentication needs, response format, error conditions, or whether this is a read-only operation. The description focuses more on parameter usage than tool behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized but not optimally structured. The first paragraph contains API attribution requirements that belong elsewhere, while the actual tool description starts in the second paragraph. The example queries are helpful but could be more efficiently formatted. Overall, it's functional but could be more front-loaded with the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 3-parameter tool with no annotations and no output schema, the description provides adequate parameter documentation but lacks important context about the tool's behavior and output. It doesn't describe what the rankings data looks like, how it's structured, or any limitations. The API attribution requirement is included, but other behavioral aspects are missing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds significant value beyond the input schema, which has 0% description coverage. It clearly identifies which parameters are required vs. optional, provides example queries showing valid parameter combinations, and gives context about what each parameter represents (year, week, season_type). This compensates well for the schema's lack of documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get college football rankings data.' This specifies both the verb ('Get') and the resource ('college football rankings data'), making it immediately understandable. However, it doesn't distinguish this tool from its siblings (like get-games or get-plays), which all retrieve different types of college football data from the same API.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage guidance through the example queries and parameter notes, showing how to structure requests. However, it lacks explicit guidance on when to use this tool versus alternatives (like get-games for game data vs. get-rankings for rankings data). The first paragraph contains API attribution requirements rather than usage guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
