f1-2026
Server Details
Formula 1 2026 schedule MCP — 24 races, drivers, constructors, sessions, circuits, race times.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
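To make the transport concrete, here is a minimal sketch of calling this server over Streamable HTTP with the TypeScript MCP SDK. The gateway URL is a placeholder (the real endpoint comes from your Glama connector settings), and the same parameterless pattern applies to whensport_f1_getNextRace and whensport_f1_getTeams.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder URL: substitute the endpoint shown on your Glama connector page.
const transport = new StreamableHTTPClientTransport(
  new URL("https://example.glama.ai/mcp") // hypothetical
);

const client = new Client({ name: "f1-2026-example", version: "0.1.0" });
await client.connect(transport);

// Parameterless, read-only call: fetch the 2026 driver roster
// (name, team, number, country per the tool description).
const drivers = await client.callTool({
  name: "whensport_f1_getDrivers",
  arguments: {},
});
console.log(drivers.content);
```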
Tool Definition Quality
Average 4.3/5 across 5 of 5 tools scored. Lowest: 3.7/5.
Each tool targets a distinct data entity: drivers, teams, race schedule, next race, and a specific race by slug/round. There is no ambiguity or overlap in their purposes.
All tools follow the consistent 'whensport_f1_get[Noun]' pattern: a snake_case namespace prefix with a camelCase verb+noun suffix. The naming is predictable and easy to understand.
With 5 tools, the server is efficiently scoped for an F1 information provider. Each tool covers a core concept (drivers, teams, races) without unnecessary bloat or deficiency.
The tool set covers the essential F1 data: drivers, teams, race calendar, and next race. However, it lacks some details like qualifying results or driver statistics, and the description notes that race results are still being ingested, which may create minor gaps.
Available Tools
5 tools

whensport_f1_getDrivers: Get the Formula 1 driver roster (Read-only)
Get the F1 driver list (name, team, number, country).
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, and the description adds value by specifying the output fields. No contradictions or missing behavioral details for this simple read-only tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One concise sentence with no extraneous content. Front-loaded with the core action and details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with no parameters, the description is complete. It lists the fields returned, and the sibling tools provide context for differentiation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so the baseline score of 4 applies. The description adds no parameter info, but none is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves the F1 driver list and specifies the fields returned (name, team, number, country). It is distinct from sibling tools for races and teams.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives, but the purpose and distinct sibling names imply context. A higher score would require explicit when/when-not statements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
whensport_f1_getNextRace: Get the next upcoming Formula 1 race (Read-only)
Get the next upcoming F1 race relative to today.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true and openWorldHint=false, indicating a safe read. The description adds 'relative to today' but no further behavioral traits like caching or date sensitivity. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence of nine words, front-loaded with the action and resource, with no redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a parameterless tool, the description is minimally adequate. However, without an output schema, the agent may need to guess the return format (e.g., a single race object with fields). Sibling tools provide some context, but the description could be more complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters and 100% coverage, so the description need not add parameter info. The baseline is 4, and it meets that.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'next upcoming F1 race', with the qualifier 'relative to today' to distinguish from other race-related tools like whensport_f1_getRace or whensport_f1_getRaces.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context (next upcoming race) but does not explicitly state when not to use this tool or mention alternatives such as whensport_f1_getRaces for a list of races.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
whensport_f1_getRace: Get a single Formula 1 race by slug or round (Read-only)
Get a single F1 race by slug (e.g. 'miami', 'monaco', 'great-britain', 'abu-dhabi') or by round number. Slugs are country/host names — Silverstone's race is 'great-britain', not 'silverstone' (silverstone is the venueSlug). Cancelled races are also queryable: 'bahrain' and 'saudi-arabia' return status="cancelled" with cancellationReason set.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | No | Race slug — country/host name, bare, no year suffix (e.g. monaco, great-britain, abu-dhabi). Cancelled-race slugs (bahrain, saudi-arabia) also resolve. | |
| round | No | Race round number 1-22 (rounds are renumbered after the 2026 cancellations; cancelled races have no current round but expose originalRound for reference). | |
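As a quick sketch of the two lookup shapes, reusing the client from the connection example above (the round value is purely illustrative):

```typescript
// By slug: the host/country name, not the venue ("great-britain", not "silverstone").
const britain = await client.callTool({
  name: "whensport_f1_getRace",
  arguments: { slug: "great-britain" },
});

// By round: any renumbered round in 1-22 (1 here is only an illustration).
const opener = await client.callTool({
  name: "whensport_f1_getRace",
  arguments: { round: 1 },
});

// Cancelled races still resolve, carrying status="cancelled" and a cancellationReason.
const bahrain = await client.callTool({
  name: "whensport_f1_getRace",
  arguments: { slug: "bahrain" },
});
```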
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations set readOnlyHint=true, confirming read-only. Description adds that cancelled races return status='cancelled' with cancellationReason, and explains slug conventions. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each providing essential information. Front-loaded with key purpose and parameters. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately covers the tool's behavior and edge cases (cancelled races). However, without an output schema, the description could mention return fields or error handling to be fully self-contained.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but description adds significant value: examples of slug values, clarification that slugs are country/host names, and details about round renumbering and cancelled race originalRound. This enriches parameter understanding beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it gets a single F1 race by slug or round. The verb 'Get' and resource 'single F1 race' are specific, and it distinguishes from sibling tools like 'getRaces' (list) and 'getNextRace'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explains when to use slug vs round, and notes slug naming (country/host, not venue). Mentions cancelled races are queryable. However, it does not explicitly state when to prefer this over siblings like 'getRaces'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
whensport_f1_getRaces: Get the Formula 1 race calendar (Read-only)
Get the F1 race calendar — every grand prix with date, circuit, round, sprint flag, and local kick-off in IANA timezone. Cancelled races (e.g. Bahrain, Saudi Arabia) are included with status="cancelled" and a cancellationReason; their date/round fields are empty since the events did not take place. Use upcomingOnly to filter to forthcoming active races. Note: this MCP is schedule-focused; result (podium/winner) on finished races is populated as ingestion catches up — consumers should treat null as "not yet ingested" and consult fia.com / formula1.com for confirmed results.
| Name | Required | Description | Default |
|---|---|---|---|
| upcomingOnly | No | If true, return only races that have not yet happened. Cancelled races are excluded from this filter. | false |
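Here is a sketch of the upcomingOnly filter and the null-result caveat, again reusing the client from the connection example. The JSON-text result shape is an assumption, since no output schema is published.

```typescript
const upcoming = await client.callTool({
  name: "whensport_f1_getRaces",
  arguments: { upcomingOnly: true }, // cancelled races are excluded by this filter
});

// Assumption: the calendar arrives as a JSON string in a text content block.
const block = upcoming.content.find((c) => c.type === "text");
if (block?.type === "text") {
  for (const race of JSON.parse(block.text)) {
    // Per the description, treat a null result as "not yet ingested",
    // not "no result"; confirm on fia.com / formula1.com when it matters.
    if (race.result == null) continue;
  }
}
```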
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond annotations (readOnlyHint, openWorldHint), description discloses that cancelled races are included with empty date/round fields, and that results may be null until ingestion catches up. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, each earning its place. The first states what the tool returns; the rest explain edge cases (cancelled races), the upcomingOnly filter, and data freshness. Efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given full schema coverage and no output schema, the description sufficiently explains the data model (cancelled races, null results). It also hints at external result verification, covering the tool's context well.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema already describes the `upcomingOnly` parameter (100% coverage). The description adds that cancelled races are excluded from this filter, going beyond the schema. The baseline is 3, plus extra context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves the F1 race calendar with specific details (date, circuit, round, sprint flag, local kick-off). The title and description distinguish it from sibling tools like getDrivers, getNextRace, getRace, and getTeams.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly mentions using the `upcomingOnly` parameter to filter to forthcoming races. Notes that cancelled races are included with status and reason. Could be clearer about when to use alternatives like getNextRace, but provides guidance on result data freshness.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
whensport_f1_getTeams: Get the Formula 1 constructor list (Read-only)
Get the F1 team list (constructors).
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, so the safety profile is clear. The description adds no additional behavioral details (e.g., no mention of return format or lack of filters). The description is adequate but not enriched beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence with no unnecessary words. It efficiently conveys the tool's purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description should specify what data is returned. It mentions 'team list (constructors)' but does not describe fields or structure. For a simple list, it is adequate but not fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters; schema coverage is 100%. The description does not need to explain parameters. The baseline for 0-param tools is 4, and the description is sufficient.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and resource ('F1 team list (constructors)'). It distinguishes itself from sibling tools (drivers, races) by specifying teams.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool vs alternatives (e.g., getDrivers, getRaces). There is no mention of context, prerequisites, or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is reported as unhealthy when Glama is unable to connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.