Skip to main content
Glama

get_analysis

Analyze Formula 1 race data to assess driver performance metrics including pace, tire degradation, stint summaries, and consistency across specified sessions and seasons.

Instructions

Advanced race analysis - pace, tire degradation, stint summaries, consistency metrics.

Args: year: Season year (2018+) gp: Grand Prix name or round session: 'FP1', 'FP2', 'FP3', 'Q', 'S', 'R' analysis_type: 'race_pace', 'tire_degradation', 'stint_summary', 'consistency' driver: Driver code/number (optional, all drivers if None)

Returns: AnalysisResponse with pace data, degradation, stints, or consistency stats

Examples: get_analysis(2024, "Monaco", "R", "race_pace") → Pace analysis for all drivers get_analysis(2024, "Monza", "R", "tire_degradation", driver="VER") → VER's tire wear

Input Schema

TableJSON Schema
NameRequiredDescriptionDefault
yearYes
gpYes
sessionYes
analysis_typeYes
driverNo

Output Schema

TableJSON Schema
NameRequiredDescriptionDefault
yearYesSeason year
race_paceNoRace pace data
event_nameYesEvent name
consistencyNoConsistency data
session_nameYesSession name
analysis_typeYesType: 'race_pace', 'tire_degradation', 'stint_summary', 'consistency'
driver_filterNoDriver filter (if any)
total_recordsYesTotal number of records
stint_summariesNoStint summary data
tire_degradationNoTire degradation data

Implementation Reference

  • The core handler function implementing get_analysis tool. Loads FastF1 session data and computes race pace (avg/median/fastest laps), tire degradation per stint, stint summaries, and lap time consistency (std dev, CV) for specified or all drivers.
    def get_analysis(
        year: int,
        gp: Union[str, int],
        session: str,
        analysis_type: Literal["race_pace", "tire_degradation", "stint_summary", "consistency"],
        driver: Optional[Union[str, int]] = None,
    ) -> AnalysisResponse:
        """
        Advanced race analysis - pace, tire degradation, stint summaries, consistency metrics.
    
        Args:
            year: Season year (2018+)
            gp: Grand Prix name or round
            session: 'FP1', 'FP2', 'FP3', 'Q', 'S', 'R'
            analysis_type: 'race_pace', 'tire_degradation', 'stint_summary', 'consistency'
            driver: Driver code/number (optional, all drivers if None)
    
        Returns:
            AnalysisResponse with pace data, degradation, stints, or consistency stats
    
        Examples:
            get_analysis(2024, "Monaco", "R", "race_pace") → Pace analysis for all drivers
            get_analysis(2024, "Monza", "R", "tire_degradation", driver="VER") → VER's tire wear
        """
        # Load session with lap data
        session_obj = fastf1_client.get_session(year, gp, session)
        session_obj.load(laps=True, telemetry=False, weather=False, messages=False)
    
        event = session_obj.event
    
        # Get laps based on driver filter
        if driver:
            laps = session_obj.laps.pick_drivers(driver)
        else:
            laps = session_obj.laps
    
        if analysis_type == "race_pace":
            # Calculate race pace (excluding pit laps and inaccurate laps)
            race_pace_list = []
    
            if driver:
                drivers_to_analyze = [driver]
            else:
                drivers_to_analyze = laps['Driver'].unique()
    
            for drv in drivers_to_analyze:
                try:
                    driver_laps = laps.pick_drivers(drv)
    
                    # Filter for clean laps: no pit stops, accurate timing, not deleted
                    clean_laps = driver_laps[
                        (pd.isna(driver_laps['PitInTime'])) &
                        (pd.isna(driver_laps['PitOutTime'])) &
                        (driver_laps['IsAccurate'] == True) &
                        (driver_laps['Deleted'] == False)
                    ]
    
                    if len(clean_laps) > 0:
                        # Convert lap times to seconds for calculation
                        lap_times_seconds = clean_laps['LapTime'].dt.total_seconds()
    
                        avg_time = lap_times_seconds.mean()
                        median_time = lap_times_seconds.median()
                        fastest_time = lap_times_seconds.min()
    
                        race_pace_list.append(
                            RacePaceData(
                                driver=str(clean_laps.iloc[0]['Driver']),
                                driver_number=str(clean_laps.iloc[0]['DriverNumber']),
                                average_lap_time=str(pd.Timedelta(seconds=avg_time)),
                                median_lap_time=str(pd.Timedelta(seconds=median_time)),
                                fastest_lap_time=str(pd.Timedelta(seconds=fastest_time)),
                                total_laps=len(driver_laps),
                                clean_laps=len(clean_laps),
                            )
                        )
                except Exception:
                    continue
    
            return AnalysisResponse(
                session_name=session_obj.name,
                event_name=event['EventName'],
                year=year,
                analysis_type=analysis_type,
                race_pace=race_pace_list,
                total_records=len(race_pace_list),
                driver_filter=str(driver) if driver else None,
            )
    
        elif analysis_type == "tire_degradation":
            # Analyze tire degradation per stint
            degradation_list = []
    
            if driver:
                drivers_to_analyze = [driver]
            else:
                drivers_to_analyze = laps['Driver'].unique()
    
            for drv in drivers_to_analyze:
                try:
                    driver_laps = laps.pick_drivers(drv)
    
                    # Group by stint
                    stints = driver_laps['Stint'].unique()
    
                    for stint in stints:
                        if pd.notna(stint):
                            stint_laps = driver_laps[driver_laps['Stint'] == stint]
    
                            # Filter clean laps for analysis
                            clean_stint_laps = stint_laps[
                                (pd.isna(stint_laps['PitInTime'])) &
                                (pd.isna(stint_laps['PitOutTime'])) &
                                (stint_laps['IsAccurate'] == True) &
                                (stint_laps['Deleted'] == False)
                            ]
    
                            if len(clean_stint_laps) >= 2:
                                first_lap = clean_stint_laps.iloc[0]
                                last_lap = clean_stint_laps.iloc[-1]
    
                                first_time = first_lap['LapTime'].total_seconds() if pd.notna(first_lap['LapTime']) else None
                                last_time = last_lap['LapTime'].total_seconds() if pd.notna(last_lap['LapTime']) else None
    
                                degradation = None
                                if first_time and last_time:
                                    deg_seconds = last_time - first_time
                                    degradation = str(pd.Timedelta(seconds=deg_seconds))
    
                                avg_time = clean_stint_laps['LapTime'].dt.total_seconds().mean()
    
                                degradation_list.append(
                                    TireDegradationData(
                                        driver=str(first_lap['Driver']),
                                        driver_number=str(first_lap['DriverNumber']),
                                        stint=int(stint),
                                        compound=str(first_lap['Compound']) if pd.notna(first_lap.get('Compound')) else None,
                                        first_lap_time=str(first_lap['LapTime']) if pd.notna(first_lap['LapTime']) else None,
                                        last_lap_time=str(last_lap['LapTime']) if pd.notna(last_lap['LapTime']) else None,
                                        average_lap_time=str(pd.Timedelta(seconds=avg_time)),
                                        degradation=degradation,
                                        stint_length=len(clean_stint_laps),
                                    )
                                )
                except Exception:
                    continue
    
            return AnalysisResponse(
                session_name=session_obj.name,
                event_name=event['EventName'],
                year=year,
                analysis_type=analysis_type,
                tire_degradation=degradation_list,
                total_records=len(degradation_list),
                driver_filter=str(driver) if driver else None,
            )
    
        elif analysis_type == "stint_summary":
            # Summarize each stint
            stint_summaries_list = []
    
            if driver:
                drivers_to_analyze = [driver]
            else:
                drivers_to_analyze = laps['Driver'].unique()
    
            for drv in drivers_to_analyze:
                try:
                    driver_laps = laps.pick_drivers(drv)
    
                    # Group by stint
                    stints = driver_laps['Stint'].unique()
    
                    for stint in stints:
                        if pd.notna(stint):
                            stint_laps = driver_laps[driver_laps['Stint'] == stint]
    
                            # Filter clean laps
                            clean_stint_laps = stint_laps[
                                (pd.isna(stint_laps['PitInTime'])) &
                                (pd.isna(stint_laps['PitOutTime'])) &
                                (stint_laps['IsAccurate'] == True)
                            ]
    
                            if len(clean_stint_laps) > 0:
                                avg_time = clean_stint_laps['LapTime'].dt.total_seconds().mean()
                                fastest_time = clean_stint_laps['LapTime'].dt.total_seconds().min()
    
                                stint_summaries_list.append(
                                    StintSummary(
                                        driver=str(clean_stint_laps.iloc[0]['Driver']),
                                        driver_number=str(clean_stint_laps.iloc[0]['DriverNumber']),
                                        stint=int(stint),
                                        compound=str(clean_stint_laps.iloc[0]['Compound']) if pd.notna(clean_stint_laps.iloc[0].get('Compound')) else None,
                                        stint_length=len(clean_stint_laps),
                                        average_lap_time=str(pd.Timedelta(seconds=avg_time)),
                                        fastest_lap_time=str(pd.Timedelta(seconds=fastest_time)),
                                    )
                                )
                except Exception:
                    continue
    
            return AnalysisResponse(
                session_name=session_obj.name,
                event_name=event['EventName'],
                year=year,
                analysis_type=analysis_type,
                stint_summaries=stint_summaries_list,
                total_records=len(stint_summaries_list),
                driver_filter=str(driver) if driver else None,
            )
    
        elif analysis_type == "consistency":
            # Analyze driver consistency
            consistency_list = []
    
            if driver:
                drivers_to_analyze = [driver]
            else:
                drivers_to_analyze = laps['Driver'].unique()
    
            for drv in drivers_to_analyze:
                try:
                    driver_laps = laps.pick_drivers(drv)
    
                    # Filter clean laps
                    clean_laps = driver_laps[
                        (pd.isna(driver_laps['PitInTime'])) &
                        (pd.isna(driver_laps['PitOutTime'])) &
                        (driver_laps['IsAccurate'] == True) &
                        (driver_laps['Deleted'] == False)
                    ]
    
                    if len(clean_laps) >= 3:  # Need at least 3 laps for meaningful stats
                        lap_times_seconds = clean_laps['LapTime'].dt.total_seconds()
    
                        avg_time = lap_times_seconds.mean()
                        std_dev = lap_times_seconds.std()
                        coefficient_of_variation = (std_dev / avg_time) * 100 if avg_time > 0 else None
    
                        consistency_list.append(
                            ConsistencyData(
                                driver=str(clean_laps.iloc[0]['Driver']),
                                driver_number=str(clean_laps.iloc[0]['DriverNumber']),
                                average_lap_time=str(pd.Timedelta(seconds=avg_time)),
                                std_deviation=float(std_dev),
                                coefficient_of_variation=float(coefficient_of_variation) if coefficient_of_variation else None,
                                total_laps=len(clean_laps),
                            )
                        )
                except Exception:
                    continue
    
            # Sort by coefficient of variation (most consistent first)
            consistency_list.sort(key=lambda x: x.coefficient_of_variation if x.coefficient_of_variation else 999)
    
            return AnalysisResponse(
                session_name=session_obj.name,
                event_name=event['EventName'],
                year=year,
                analysis_type=analysis_type,
                consistency=consistency_list,
                total_records=len(consistency_list),
                driver_filter=str(driver) if driver else None,
            )
  • Pydantic BaseModel schemas defining the structured output for get_analysis tool responses, including main AnalysisResponse and specialized data classes for each analysis type.
    from pydantic import BaseModel, Field
    from typing import Optional
    
    
    class RacePaceData(BaseModel):
        """Race pace analysis data."""
    
        driver: str = Field(..., description="Driver abbreviation")
        driver_number: str = Field(..., description="Driver number")
        average_lap_time: Optional[str] = Field(None, description="Average lap time (excluding pit laps)")
        median_lap_time: Optional[str] = Field(None, description="Median lap time")
        fastest_lap_time: Optional[str] = Field(None, description="Fastest lap time")
        total_laps: int = Field(..., description="Total number of laps")
        clean_laps: int = Field(..., description="Number of clean laps (no pit stops, accurate timing)")
    
    
    class TireDegradationData(BaseModel):
        """Tire degradation analysis data."""
    
        driver: str = Field(..., description="Driver abbreviation")
        driver_number: str = Field(..., description="Driver number")
        stint: int = Field(..., description="Stint number")
        compound: Optional[str] = Field(None, description="Tire compound")
        first_lap_time: Optional[str] = Field(None, description="First lap time on this stint")
        last_lap_time: Optional[str] = Field(None, description="Last lap time on this stint")
        average_lap_time: Optional[str] = Field(None, description="Average lap time for stint")
        degradation: Optional[str] = Field(None, description="Estimated degradation (last - first lap)")
        stint_length: int = Field(..., description="Number of laps in stint")
    
    
    class StintSummary(BaseModel):
        """Summary of a tire stint."""
    
        driver: str = Field(..., description="Driver abbreviation")
        driver_number: str = Field(..., description="Driver number")
        stint: int = Field(..., description="Stint number")
        compound: Optional[str] = Field(None, description="Tire compound")
        stint_length: int = Field(..., description="Number of laps in stint")
        average_lap_time: Optional[str] = Field(None, description="Average lap time")
        fastest_lap_time: Optional[str] = Field(None, description="Fastest lap in stint")
    
    
    class ConsistencyData(BaseModel):
        """Driver consistency analysis."""
    
        driver: str = Field(..., description="Driver abbreviation")
        driver_number: str = Field(..., description="Driver number")
        average_lap_time: Optional[str] = Field(None, description="Average lap time")
        std_deviation: Optional[float] = Field(None, description="Standard deviation of lap times (seconds)")
        coefficient_of_variation: Optional[float] = Field(None, description="Consistency score (lower is better)")
        total_laps: int = Field(..., description="Total laps analyzed")
    
    
    class AnalysisResponse(BaseModel):
        """Response containing advanced race analysis."""
    
        session_name: str = Field(..., description="Session name")
        event_name: str = Field(..., description="Event name")
        year: int = Field(..., description="Season year")
        analysis_type: str = Field(..., description="Type: 'race_pace', 'tire_degradation', 'stint_summary', 'consistency'")
    
        # Optional data based on type
        race_pace: Optional[list[RacePaceData]] = Field(None, description="Race pace data")
        tire_degradation: Optional[list[TireDegradationData]] = Field(None, description="Tire degradation data")
        stint_summaries: Optional[list[StintSummary]] = Field(None, description="Stint summary data")
        consistency: Optional[list[ConsistencyData]] = Field(None, description="Consistency data")
    
        # Metadata
        total_records: int = Field(..., description="Total number of records")
        driver_filter: Optional[str] = Field(None, description="Driver filter (if any)")
  • server.py:160-160 (registration)
    Registers the get_analysis handler as an MCP tool using the FastMCP decorator.
    mcp.tool()(get_analysis)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It mentions the tool returns 'AnalysisResponse' but doesn't describe format, pagination, rate limits, authentication needs, or error conditions. The examples help but don't fully compensate for missing behavioral context about what 'advanced analysis' entails operationally.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections (description, Args, Returns, Examples). The description is front-loaded with key information. Some redundancy exists between the initial description line and the Args section, but overall efficient with each sentence adding value. Could be slightly more concise in the opening line.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (5 parameters, 4 required), no annotations, but with output schema present, the description provides good coverage. The parameter semantics are well-explained, and examples illustrate usage. Missing behavioral context about rate limits or authentication lowers the score, but overall adequate for the tool's analytical purpose.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by providing detailed parameter explanations: year constraints (2018+), gp format (name or round), session enum values, analysis_type enum with meanings, and driver optionality. The Args section adds significant value beyond the bare schema, explaining what each parameter means and how to use them.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs 'Advanced race analysis' with specific analysis types (pace, tire degradation, stint summaries, consistency metrics). It distinguishes from siblings like get_laps or get_session_results by focusing on analytical metrics rather than raw data. However, it doesn't explicitly differentiate from compare_driver_telemetry which might also involve analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through examples showing when to use specific analysis types, but lacks explicit guidance on when to choose this tool over alternatives like get_tire_strategy or compare_driver_telemetry. No 'when-not' scenarios or prerequisites are mentioned, leaving the agent to infer appropriate contexts from the parameter descriptions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

Install Server

Other Tools

Latest Blog Posts

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/praneethravuri/pitstop'

If you have feedback or need assistance with the MCP directory API, please join our Discord server