
get_live_pit_stops

Retrieve live pit stop timing and analysis for Formula 1 races, including crew performance metrics and driver-specific data from OpenF1.

Instructions

Get pit stop analysis with crew timing from OpenF1.

Args:
    year: Season year (2023+, OpenF1 data availability)
    country: Country name (e.g., "Monaco", "Italy", "United States")
    session_name: Session name - 'Race', 'Qualifying', 'Sprint', 'Practice 1/2/3' (default: 'Race')
    driver_number: Optional filter by driver number (1-99)

Returns: PitStopsResponse with pit stop durations and statistics

Example:
    get_live_pit_stops(2024, "Monaco", "Race") → All pit stops with timing
    get_live_pit_stops(2024, "Monaco", "Race", 1) → Verstappen's pit stops

Input Schema

Name           | Required | Description | Default
year           | Yes      |             |
country        | Yes      |             |
session_name   | No       |             | Race
driver_number  | No       |             |

Output Schema

Name             | Required | Description                | Default
year             | No       | Year                       |
country          | No       | Country name               |
pit_stops        | Yes      | List of pit stops          |
fastest_stop     | No       | Fastest pit stop duration  |
session_name     | No       | Session name               |
slowest_stop     | No       | Slowest pit stop duration  |
total_pit_stops  | Yes      | Total number of pit stops  |
average_duration | No       | Average pit stop duration  |
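
For orientation, a populated response might look like the following sketch, built from the schema above and the PitStopData / PitStopsResponse models shown under Implementation Reference below; all values are illustrative, not real timing data:

    PitStopsResponse(
        session_name="Race",
        year=2024,
        country="Monaco",
        pit_stops=[
            PitStopData(
                date="2024-05-26T14:31:05+00:00",  # illustrative timestamp
                driver_number=1,
                lap_number=24,
                pit_duration=22.4,                 # seconds spent in the pit lane
                session_key=9523,                  # illustrative identifiers
                meeting_key=1242,
            ),
        ],
        total_pit_stops=1,
        fastest_stop=22.4,
        slowest_stop=22.4,
        average_duration=22.4,
    )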

Implementation Reference

  • The main tool handler function: it fetches pit stop data using OpenF1Client, converts it to Pydantic models, calculates statistics (fastest, slowest, and average pit stop durations), and returns a PitStopsResponse. A usage sketch follows this reference list.
    from typing import Optional

    # PitStopData, PitStopsResponse, and openf1_client are defined elsewhere in
    # the package (models and OpenF1 client modules not shown in this excerpt).
    def get_live_pit_stops(
        year: int,
        country: str,
        session_name: str = "Race",
        driver_number: Optional[int] = None
    ) -> PitStopsResponse:
        """
        Get pit stop analysis with crew timing from OpenF1.
    
        Args:
            year: Season year (2023+, OpenF1 data availability)
            country: Country name (e.g., "Monaco", "Italy", "United States")
            session_name: Session name - 'Race', 'Qualifying', 'Sprint', 'Practice 1/2/3' (default: 'Race')
            driver_number: Optional filter by driver number (1-99)
    
        Returns:
            PitStopsResponse with pit stop durations and statistics
    
        Example:
            get_live_pit_stops(2024, "Monaco", "Race") → All pit stops with timing
            get_live_pit_stops(2024, "Monaco", "Race", 1) → Verstappen's pit stops
        """
        # Verify a meeting exists for this year/country
        meetings = openf1_client.get_meetings(year=year, country_name=country)
        if not meetings:
            return PitStopsResponse(
                session_name=session_name,
                year=year,
                country=country,
                pit_stops=[],
                total_pit_stops=0
            )
    
        # Get sessions for this meeting
        sessions = openf1_client.get_sessions(year=year, country_name=country, session_name=session_name)
        if not sessions:
            return PitStopsResponse(
                session_name=session_name,
                year=year,
                country=country,
                pit_stops=[],
                total_pit_stops=0
            )
    
        session = sessions[0]
        session_key = session['session_key']
    
        # Get pit stop data
        pit_data = openf1_client.get_pit_stops(
            session_key=session_key,
            driver_number=driver_number
        )
    
        # Convert to Pydantic models
        pit_stops = [
            PitStopData(
                date=stop['date'],
                driver_number=stop['driver_number'],
                lap_number=stop['lap_number'],
                pit_duration=stop['pit_duration'],
                session_key=stop['session_key'],
                meeting_key=stop['meeting_key']
            )
            for stop in pit_data
        ]
    
        # Calculate statistics
        fastest_stop = None
        slowest_stop = None
        average_duration = None
    
        if pit_stops:
            durations = [stop.pit_duration for stop in pit_stops]
            fastest_stop = min(durations)
            slowest_stop = max(durations)
            average_duration = sum(durations) / len(durations)
    
        return PitStopsResponse(
            session_name=session_name,
            year=year,
            country=country,
            pit_stops=pit_stops,
            total_pit_stops=len(pit_stops),
            fastest_stop=fastest_stop,
            slowest_stop=slowest_stop,
            average_duration=average_duration
        )
  • Pydantic models PitStopData (individual pit stop details) and PitStopsResponse (aggregated response with a list of pit stops and statistics), used for input/output validation in get_live_pit_stops. A validation sketch follows this reference list.
    class PitStopData(BaseModel):
        """Pit stop data."""
        date: str = Field(..., description="Timestamp of pit stop")
        driver_number: int = Field(..., description="Driver number (1-99)")
        lap_number: int = Field(..., description="Lap number of pit stop")
        pit_duration: float = Field(..., description="Duration of pit stop in seconds")
        session_key: int = Field(..., description="Session identifier")
        meeting_key: int = Field(..., description="Meeting identifier")
    
    
    class PitStopsResponse(BaseModel):
        """Response for pit stop data."""
        session_name: Optional[str] = Field(None, description="Session name")
        year: Optional[int] = Field(None, description="Year")
        country: Optional[str] = Field(None, description="Country name")
        pit_stops: list[PitStopData] = Field(..., description="List of pit stops")
        total_pit_stops: int = Field(..., description="Total number of pit stops")
        fastest_stop: Optional[float] = Field(None, description="Fastest pit stop duration")
        slowest_stop: Optional[float] = Field(None, description="Slowest pit stop duration")
        average_duration: Optional[float] = Field(None, description="Average pit stop duration")
  • server.py:169 (registration)
    MCP server registration of the get_live_pit_stops tool using the mcp.tool() decorator; a server-setup sketch follows this reference list.
    mcp.tool()(get_live_pit_stops)
  • Re-export of get_live_pit_stops from pit_stops.py module in tools/live package for easier imports.
    from .pit_stops import get_live_pit_stops
    from .intervals import get_live_intervals
    from .meetings import get_meeting_info
    from .stints import get_stints_live
    
    __all__ = [
        "get_driver_radio",
        "get_live_pit_stops",
  • Re-export of get_live_pit_stops from tools.live subpackage in main tools package.
    from .live import get_driver_radio, get_live_pit_stops, get_live_intervals, get_meeting_info, get_stints_live
    
    __all__ = [
        # Session
        "get_session_details",
        "get_session_results",
        "get_laps",
        "get_session_drivers",
        "get_tire_strategy",
        "get_qualifying_sessions",
        "get_track_evolution",
        # Telemetry
        "get_lap_telemetry",
        "compare_driver_telemetry",
        # Weather
        "get_session_weather",
        # Control
        "get_race_control_messages",
        # Standings
        "get_standings",
        # Media
        "get_f1_news",
        # Schedule
        "get_schedule",
        # Reference
        "get_reference_data",
        # Track
        "get_circuit",
        # Analysis
        "get_analysis",
        # Live (OpenF1)
        "get_driver_radio",
        "get_live_pit_stops",
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that the tool returns 'PitStopsResponse with pit stop durations and statistics' and mentions data availability constraints ('2023+, OpenF1 data availability'), but gives no details on rate limits, authentication needs, error conditions, or pagination behavior for a data-fetching tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and well-structured with clear sections (purpose, Args, Returns, Example). Every sentence adds value, though the example section could be slightly more concise. The information is front-loaded with the core purpose stated first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool has an output schema (the Returns line names 'PitStopsResponse'), the description doesn't need to detail return values. It covers purpose, parameters, and usage examples adequately for a data retrieval tool. However, without annotations, it could better address behavioral aspects such as data freshness or API limitations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate fully. It successfully adds meaning for all 4 parameters: explains 'year' constraints, provides 'country' examples, clarifies 'session_name' options with default, and describes 'driver_number' filtering purpose. The Args section comprehensively documents parameter semantics beyond basic schema titles.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get pit stop analysis with crew timing from OpenF1.' It specifies the verb ('Get'), resource ('pit stop analysis'), and data source ('OpenF1'), distinguishing it from siblings like get_laps or get_stints_live which focus on different race data aspects.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage through examples showing when to use optional parameters, but it doesn't explicitly state when NOT to use this tool or name alternatives among siblings. The examples illustrate filtering by driver number vs. getting all stops, offering practical guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

