
get_tire_strategy

Analyze tire compounds, life, and stint data for Formula 1 sessions to understand race strategy and tire management decisions.

Instructions

Get tire strategy and compound usage for a session.

Analyzes tire compounds used throughout a session, including compound types, tire life, and stint information. Essential for understanding race strategy and tire management.

Args:

  • year: The season year (2018 onwards)
  • gp: The Grand Prix name or round number
  • session: Session type - 'FP1', 'FP2', 'FP3', 'Q', 'S', 'R'
  • driver: Optional driver identifier (3-letter code or number). If None, returns data for all drivers

Returns:

  TireStrategyResponse: Tire data per lap in JSON-serializable format

Examples:

>>> # Get tire strategy for all drivers in 2024 Monza race
>>> strategy = get_tire_strategy(2024, "Monza", "R")

>>> # Get Verstappen's tire strategy
>>> ver_strategy = get_tire_strategy(2024, "Monza", "R", "VER")

Input Schema

Name    | Required | Description | Default
year    | Yes      |             |
gp      | Yes      |             |
session | Yes      |             |
driver  | No       |             |

Output Schema

Name         | Required | Description                | Default
driver       | No       | Driver filter (if applied) |
tire_data    | Yes      | Tire data per lap          |
event_name   | Yes      | Grand Prix name            |
total_laps   | Yes      | Total number of laps       |
session_name | Yes      | Session name               |
Implementation Reference

  • The core handler function that implements the get_tire_strategy tool. It fetches session data using FastF1Client, extracts tire-related lap data, converts it to TireStint models, and returns a structured TireStrategyResponse.
    from typing import Optional, Union

    import pandas as pd

    # fastf1_client, TireStint, and TireStrategyResponse are defined elsewhere in the module
    def get_tire_strategy(year: int, gp: Union[str, int], session: str, driver: Optional[Union[str, int]] = None) -> TireStrategyResponse:
        """
        Get tire strategy and compound usage for a session.
    
        Analyzes tire compounds used throughout a session, including compound types,
        tire life, and stint information. Essential for understanding race strategy
        and tire management.
    
        Args:
            year: The season year (2018 onwards)
            gp: The Grand Prix name or round number
            session: Session type - 'FP1', 'FP2', 'FP3', 'Q', 'S', 'R'
            driver: Optional driver identifier (3-letter code or number).
                   If None, returns data for all drivers
    
        Returns:
            TireStrategyResponse: Tire data per lap in JSON-serializable format
    
        Examples:
            >>> # Get tire strategy for all drivers in 2024 Monza race
            >>> strategy = get_tire_strategy(2024, "Monza", "R")
    
            >>> # Get Verstappen's tire strategy
            >>> ver_strategy = get_tire_strategy(2024, "Monza", "R", "VER")
        """
        session_obj = fastf1_client.get_session(year, gp, session)
        session_obj.load(laps=True, telemetry=False, weather=False, messages=False)
    
        event = session_obj.event
    
        if driver:
            laps = session_obj.laps.pick_drivers(driver)
        else:
            laps = session_obj.laps
    
        tire_data = laps[['Driver', 'LapNumber', 'Compound', 'TyreLife', 'FreshTyre']]
    
        # Convert to Pydantic models
        tire_stints = []
        for idx, row in tire_data.iterrows():
            stint = TireStint(
                driver=str(row['Driver']) if pd.notna(row.get('Driver')) else "",
                lap_number=int(row['LapNumber']) if pd.notna(row.get('LapNumber')) else 0,
                compound=str(row['Compound']) if pd.notna(row.get('Compound')) else None,
                tyre_life=float(row['TyreLife']) if pd.notna(row.get('TyreLife')) else None,
                fresh_tyre=bool(row['FreshTyre']) if pd.notna(row.get('FreshTyre')) else None,
            )
            tire_stints.append(stint)
    
        return TireStrategyResponse(
            session_name=session_obj.name,
            event_name=event['EventName'],
            driver=str(driver) if driver else None,
            tire_data=tire_stints,
            total_laps=len(tire_stints)
        )
  • Pydantic models defining the response schema for the tool: TireStint for individual tire stints and TireStrategyResponse for the overall strategy data.
    from typing import Optional

    from pydantic import BaseModel, Field


    class TireStint(BaseModel):
        """Tire data for a single lap."""
    
        driver: str = Field(description="Driver abbreviation")
        lap_number: int = Field(description="Lap number")
        compound: Optional[str] = Field(None, description="Tire compound (SOFT, MEDIUM, HARD, INTERMEDIATE, WET)")
        tyre_life: Optional[float] = Field(None, description="Age of tire in laps")
        fresh_tyre: Optional[bool] = Field(None, description="Whether it's a new tire")
    
    
    class TireStrategyResponse(BaseModel):
        """Tire strategy response."""
    
        session_name: str = Field(description="Session name")
        event_name: str = Field(description="Grand Prix name")
        driver: Optional[str] = Field(None, description="Driver filter (if applied)")
        tire_data: list[TireStint] = Field(description="Tire data per lap")
        total_laps: int = Field(description="Total number of laps")
  • server.py:153-153 (registration)
    MCP tool registration decorator applied to the get_tire_strategy function, making it available as an MCP tool.
    mcp.tool()(get_tire_strategy)
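
Because the handler returns one row per lap rather than aggregated stints, a client that wants stint boundaries can derive them itself. A minimal sketch, with illustrative data and a hypothetical helper name (neither is part of the server):

```python
from itertools import groupby


def laps_to_stints(tire_data):
    """Collapse per-lap tire rows into per-stint summaries.

    Assumes rows are dicts shaped like TireStint, sorted by driver and
    lap_number, so consecutive rows with the same compound form a stint.
    """
    stints = []
    for (drv, comp), rows in groupby(tire_data, key=lambda r: (r["driver"], r["compound"])):
        rows = list(rows)
        stints.append({
            "driver": drv,
            "compound": comp,
            "first_lap": rows[0]["lap_number"],
            "last_lap": rows[-1]["lap_number"],
        })
    return stints


# Illustrative per-lap rows (not real session output)
laps = [
    {"driver": "VER", "lap_number": 1, "compound": "MEDIUM"},
    {"driver": "VER", "lap_number": 2, "compound": "MEDIUM"},
    {"driver": "VER", "lap_number": 3, "compound": "HARD"},
]
print(laps_to_stints(laps))
```

Note that itertools.groupby only merges adjacent rows, which is exactly the behavior wanted here: a driver who returns to a compound later in the race gets a new stint entry rather than being merged into the earlier one.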
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool analyzes data (implying read-only behavior) and returns JSON-serializable format, but it doesn't mention potential limitations like data availability constraints (e.g., '2018 onwards'), rate limits, authentication needs, or error conditions. The description adds some context but lacks comprehensive behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear purpose statement, analysis details, essential context, and organized sections for Args, Returns, and Examples. It's appropriately sized and front-loaded, though the 'Essential for understanding...' sentence could be considered slightly redundant given the initial clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, no annotations, but with output schema), the description is fairly complete. It covers purpose, parameter semantics, return format, and provides examples. The output schema existence means the description doesn't need to detail return values, but it could benefit from more behavioral context (e.g., data sources, error handling).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description must compensate. It provides detailed semantics for all 4 parameters: 'year' (season year from 2018 onwards), 'gp' (Grand Prix name or round number), 'session' (session types like 'FP1', 'R'), and 'driver' (optional identifier, 3-letter code or number, returns all drivers if None). This adds significant meaning beyond the bare schema, though it could specify format examples for 'gp' and 'driver' more explicitly.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'gets tire strategy and compound usage for a session' and specifies it analyzes 'tire compounds used throughout a session, including compound types, tire life, and stint information.' This provides a specific verb ('get/analyzes') and resource ('tire strategy and compound usage'), though it doesn't explicitly differentiate from sibling tools like 'get_stints_live' or 'get_laps'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by stating it's 'essential for understanding race strategy and tire management,' but it doesn't provide explicit guidance on when to use this tool versus alternatives like 'get_stints_live' or 'get_laps.' The examples show specific use cases, but no direct comparisons or exclusions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
