
fastf1-mcp-server

compare_telemetry

Compare Formula 1 telemetry data between two drivers in the same session to analyze performance differences in speed, lap times, and sector performance.

Instructions

Compare telemetry between two drivers in the same session.

Data source: FastF1 Live Timing
Coverage: 2018-present

Args:
  year: Season year (2018+)
  event: Race name or round number
  session: Session type (R, Q, S, FP1, FP2, FP3)
  driver1: First driver code (e.g., "VER")
  driver2: Second driver code (e.g., "LEC")
  lap: Lap number or "fastest"; applied independently to each driver
  sample_size: Telemetry points per driver (default 200, max 500)

Returns:
  {
    "driver1": {"code": "VER", "lapNumber": 18, "lapTime": "1:10.123"},
    "driver2": {"code": "LEC", "lapNumber": 20, "lapTime": "1:10.456"},
    "comparison": [
      {"distance": 0.0, "speed1": 280.0, "speed2": 275.0, "speedDelta": 5.0, "timeDelta": 0.0},
      ...
    ],
    "summary": {
      "lapTimeDeltaSec": 0.333,
      "maxSpeedDelta": 8.2,
      "sectors": {
        "S1": {"driver1": "0:00:28.123", "driver2": "0:00:28.456", "deltaSec": -0.333},
        "S2": {...},
        "S3": {...}
      }
    }
  }

Example: compare_telemetry(2024, "Monaco", "Q", "VER", "LEC")
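
Given the defaults above, a hypothetical call that overrides them might look like the following (the lap number and sample size are illustrative values, not from the source):

compare_telemetry(2024, "Monaco", "R", "VER", "LEC", lap=30, sample_size=400)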

Note: timeDelta is the cumulative time gap at each distance point, computed from speed integration. Positive = driver1 is ahead. Comparison is aligned to driver1's distance axis.
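
To make the note concrete, here is a minimal sketch of the same align-and-integrate approach written directly against the FastF1 Python library. It is an illustration of the described method, not the server's implementation, and it assumes the fastest qualifying laps from the example above, resampled to the default 200 points:

# Illustrative only: reproduces the comparison described in the note
# with FastF1 directly; this is not the server's actual code.
import fastf1
import numpy as np

session = fastf1.get_session(2024, "Monaco", "Q")
session.load()

tel1 = session.laps.pick_drivers("VER").pick_fastest().get_telemetry()
tel2 = session.laps.pick_drivers("LEC").pick_fastest().get_telemetry()

# Resample both drivers onto driver1's distance axis (200 points,
# mirroring the default sample_size).
dist = np.linspace(tel1["Distance"].min(), tel1["Distance"].max(), 200)
v1 = np.interp(dist, tel1["Distance"], tel1["Speed"]) / 3.6  # km/h -> m/s
v2 = np.interp(dist, tel2["Distance"], tel2["Speed"]) / 3.6

# Integrate dt = d(distance) / speed per segment; the cumulative
# difference of the two integrals is the running time gap
# (positive = driver1 ahead, matching the note).
dd = np.diff(dist, prepend=dist[0])
time_delta = np.cumsum(dd / v2 - dd / v1)
speed_delta = (v1 - v2) * 3.6  # back to km/h for speedDelta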

Input Schema

Name         Required  Description  Default
year         Yes
event        Yes
session      Yes
driver1      Yes
driver2      Yes
lap          No                     fastest
sample_size  No

Output Schema

No arguments

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: data source (FastF1 Live Timing), coverage (2018-present), default values (lap='fastest', sample_size=200), max limits (sample_size max 500), and how the comparison is computed (aligned to driver1's distance axis with timeDelta explained). It lacks details on error conditions or rate limits, but covers most essential operational aspects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and efficiently organized: purpose statement, data source/coverage, parameter explanations, return structure, example, and technical notes. Every section adds value with zero wasted text. The front-loaded purpose statement immediately communicates the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, telemetry comparison logic) and the presence of an output schema, the description provides excellent contextual completeness. It explains the comparison methodology, data alignment, timeDelta calculation, and includes a detailed return example. The output schema handles the return structure documentation, allowing the description to focus on operational context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must fully compensate. It provides detailed semantic explanations for all 7 parameters: year (season year 2018+), event (race name or round number), session (session type with examples), driver1/driver2 (driver codes with examples), lap (lap number or 'fastest' applied independently), and sample_size (telemetry points per driver with default and max). This goes well beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Compare telemetry between two drivers in the same session.' It specifies the verb ('compare'), resource ('telemetry'), and scope ('two drivers in the same session'), distinguishing it from sibling tools like get_lap_telemetry (single driver) or get_fastest_laps (no telemetry comparison).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: comparing two drivers' telemetry from the same session, with data from FastF1 Live Timing for 2018 onward. However, it does not explicitly state when NOT to use it or name alternatives (e.g., get_lap_telemetry for single-driver data), which prevents a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Surya96t/fastf1-mcp'
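
The same lookup can be done from Python; the sketch below assumes the endpoint shown above returns JSON to a plain unauthenticated GET (the response shape is not documented here):

# Fetch this server's directory entry; illustrative only.
import json
import urllib.request

url = "https://glama.ai/api/mcp/v1/servers/Surya96t/fastf1-mcp"
with urllib.request.urlopen(url) as resp:
    print(json.load(resp))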

If you have feedback or need assistance with the MCP directory API, please join our Discord server.