Server Details
F1 MCP — Formula 1 data via the Ergast API
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | pipeworx-io/mcp-f1 |
| GitHub Stars | 0 |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 9 of 9 tools scored. Lowest: 2.9/5.
The F1-specific tools (get_current_standings, get_driver, get_race_results, get_schedule) are clearly distinct. However, the general utility tools like ask_pipeworx and discover_tools overlap with the F1 tools since ask_pipeworx can also answer F1 questions, creating ambiguity about which tool to use for F1 queries.
The F1 tools follow a consistent verb_noun pattern (get_current_standings, get_driver, get_race_results, get_schedule). The memory tools (forget, recall, remember) also follow a verb pattern. However, ask_pipeworx and discover_tools break the pattern slightly (ask_, discover_ instead of get_ or similar).
9 tools is a reasonable number for a server that combines F1 data access with general memory and tool discovery. It feels slightly mixed but not excessive. The F1 subset (4 tools) is well-scoped, and the additional utilities add value without overwhelming.
For F1, the tools cover standings, driver info, race results, and schedule, but miss qualifying results, constructor standings, and season archives. The memory tools (remember/recall/forget) are basic but functional. discover_tools and ask_pipeworx suggest a larger catalog that is not fully exposed here, so completeness is partial.
Available Tools
9 tools

ask_pipeworx (Grade A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
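As an illustration, here is a minimal TypeScript sketch of calling this tool over the server's Streamable HTTP transport with the official MCP SDK. The endpoint URL is a placeholder, not the connector's real address:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint — substitute the connector's actual URL.
const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp"));
const client = new Client({ name: "f1-demo-client", version: "1.0.0" });
await client.connect(transport);

// The single required parameter is a natural-language question.
const result = await client.callTool({
  name: "ask_pipeworx",
  arguments: { question: "Who leads the F1 drivers' championship?" },
});
console.log(result.content);
```

The sketches for the other tools below reuse a connected `client` like this one.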
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses key behavioral traits: it returns the result from the best available data source, and it automatically selects the appropriate tool and fills in the arguments. This goes beyond what annotations (none are provided) would reveal. It does not specify what happens if the question cannot be answered or if multiple sources are available, but given the lack of annotations, the description is still quite transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise: three sentences plus three examples. It front-loads the core action and ends with the examples, with no unnecessary words. The examples are somewhat lengthy, but they earn their place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema, no annotations), the description is nearly complete. It explains what the tool does, how to use it, and what to expect. The only minor gap is that it does not explain what happens if the question cannot be answered, but that is acceptable for a query tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter 'question', and the description provides additional context by stating it should be in 'plain English' and giving examples of natural language requests. This adds meaning beyond the schema's generic 'Your question or request in natural language' description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: ask a question in plain English and get an answer from the best available data source. It explicitly distinguishes itself from sibling tools by saying it 'picks the right tool, fills the arguments, and returns the result,' which implies it abstracts away the need to directly use other tools like get_driver or get_race_results. The verb 'ask' and resource 'answer' are specific and action-oriented.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'No need to browse tools or learn schemas — just describe what you need,' which guides the agent to use this tool when the user provides a natural language question without specifying which sibling tool to use. It also provides three concrete examples that illustrate typical usage scenarios, covering data lookups across different domains.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | 20 |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
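A hedged sketch of invoking this tool, reusing a connected `client` from the sketch above; the helper name is hypothetical:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical helper: search the catalog, capping results.
// `limit` is optional per the schema (default 20, max 50).
async function findTools(client: Client, query: string, limit = 20) {
  return client.callTool({
    name: "discover_tools",
    arguments: { query, limit },
  });
}

// e.g. await findTools(client, "find trade data between countries", 5);
```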
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It states the tool searches and returns tools with names and descriptions, but does not disclose behavior such as whether it performs semantic search, how results are ordered, or any limitations (e.g., indexing delays). Adequate but not rich.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences, front-loaded with purpose, including example usage and an explicit directive to call it first. Every sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description explains return value (tools with names and descriptions). With 2 simple parameters and no nested objects, the description is complete enough for an agent to understand what it does and how to use it. Missing details about result ordering or filtering capabilities, but sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description adds value by explaining the 'query' parameter with example natural language descriptions, and clarifies 'limit' default and max. This goes beyond the schema's minimal descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool searches the tool catalog by description, returns relevant tools with names and descriptions, and is to be called first when needing to find the right tools among many. Verb 'search' and resource 'Pipeworx tool catalog' are specific, and it distinguishes from siblings by positioning as the discovery tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' Implies alternative is to not use it when tools are few or already known. Provides clear context for its usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade C)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states deletion but does not disclose whether the action is reversible, requires confirmation, or has side effects. The description is minimal and does not add behavioral context beyond the verb 'Delete'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence with no wasted words, front-loaded with the action. It could be slightly expanded with usage guidance without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool is a simple deletion with one parameter and no output schema or annotations, the description is barely adequate. It does not explain what happens after deletion (e.g., success response, error on missing key). Completeness is low.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (1 parameter, described). The description adds no additional meaning beyond the schema's parameter description. Baseline of 3 is appropriate since the schema already documents the parameter adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the action (Delete) and the resource (a stored memory by key). It is clear and concise, though it could be more specific about what kind of memory (e.g., user-generated or system). Among sibling tools, 'forget' is distinct from 'recall' and 'remember', which are likely retrieval and storage operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide guidance on when to use this tool versus alternatives like 'recall' or 'remember'. It lacks context on prerequisites (e.g., key must exist) or consequences (e.g., irreversible). No mention of when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_current_standings (Grade A)
Check current F1 driver championship standings. Returns position, points, wins, driver name, and constructor for all drivers.
No parameters.
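Since the tool takes no parameters, a call just passes an empty arguments object — a sketch against the same hypothetical connected `client` as above:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Zero-parameter tool: send an empty arguments object.
async function currentStandings(client: Client) {
  return client.callTool({
    name: "get_current_standings",
    arguments: {},
  });
}
```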
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears full burden. It explicitly states it returns current standings with specified fields, which is sufficient. However, it does not disclose if the data is cached, how often it updates, or any potential latency. Still, for a read-only tool, this is reasonably transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence that front-loads the core purpose and lists returned data. Every word is informative with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description fully specifies what the tool does and what it returns. It is complete for its complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has no parameters (schema coverage 100%), so the description does not need to elaborate. It adds value by listing the returned fields (position, points, wins, driver, constructor), which compensates for the lack of output schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns current F1 driver championship standings, specifying the data fields (position, points, wins, driver name, constructor). This verb+resource combination is unambiguous and distinguishes it from sibling tools like get_driver or get_race_results.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for retrieving current standings, but does not explicitly state when to use it versus alternatives like get_driver or get_race_results. However, the context is clear enough for an agent to infer its use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_driver (Grade A)
Look up F1 driver profile by ID. Returns name, car number, nationality, and date of birth.
| Name | Required | Description | Default |
|---|---|---|---|
| driverId | Yes | Ergast driver ID (e.g., "hamilton", "verstappen", "leclerc") | |
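A sketch of a lookup call, leaning on the schema's own example IDs; the helper is hypothetical and assumes a connected `client` as above:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Ergast IDs per the schema examples: "hamilton", "verstappen", "leclerc".
async function driverProfile(client: Client, driverId: string) {
  return client.callTool({
    name: "get_driver",
    arguments: { driverId },
  });
}
```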
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It explains the output (name, number, nationality, date of birth) but does not disclose any side effects, authorization needs, or limitations (e.g., whether it always returns data or can fail). This is adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise: two sentences that front-load the purpose and immediately list the returned fields. Every word adds value, with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one parameter, simple retrieval) and no output schema, the description adequately covers what the tool returns. It could optionally mention that it expects a valid driver ID, but overall it is sufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (the single parameter driverId is well-described with examples). The description adds no further parameter meaning beyond what the schema already provides, so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves profile information for an F1 driver using their Ergast driver ID, listing specific fields (name, number, nationality, date of birth). It distinguishes itself from sibling tools like get_race_results and get_current_standings, which serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (to get driver profile info by ID) and implicitly distinguishes it from sibling tools. However, it does not explicitly state when not to use it or mention alternatives, leaving room for improvement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_race_results (Grade A)
Get race results for a specific F1 grand prix. Provide season year and round number (e.g., 2024, round 5). Returns finishing position, driver, constructor, status, and points.
| Name | Required | Description | Default |
|---|---|---|---|
| round | Yes | Round number within the season (e.g., "1") | |
| season | Yes | Season year (e.g., "2025") | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It accurately describes a read-only operation (no side effects), but doesn't disclose any potential limits, caching behavior, or data freshness. The description is truthful but could add more behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the main purpose, no wasted words. Efficiently conveys the tool's function and output.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 required string params, no output schema), the description is nearly complete. It states what it returns, but could optionally mention the data format or ordering.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and descriptions already explain the two parameters well. The description adds no additional meaning beyond what the schema provides, which meets the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves finishing results for a specific F1 race using season and round, and lists the returned fields (position, driver, constructor, status, points). This specific verb+resource combination distinguishes it from siblings like get_current_standings or get_schedule.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for getting results by season/round but provides no explicit guidance on when to use this versus alternatives (e.g., get_current_standings). No exclusion criteria or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_schedule (Grade A)
Get the F1 season race calendar. Provide year (e.g., 2024). Returns all rounds with race name, circuit, location, and date.
| Name | Required | Description | Default |
|---|---|---|---|
| season | Yes | Season year (e.g., "2025") | |
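Because get_schedule returns every round of a season, it pairs naturally with get_race_results above. A sketch of that chain, assuming a connected `client`; note both schemas take string parameters:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Fetch a season's calendar, then the results of one round.
// Both parameters are strings per the schemas (e.g. "2024", "5").
async function roundResults(client: Client, season: string, round: string) {
  const schedule = await client.callTool({
    name: "get_schedule",
    arguments: { season },
  });
  console.log(schedule.content); // inspect the rounds, pick one, then:

  return client.callTool({
    name: "get_race_results",
    arguments: { season, round },
  });
}
```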
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so description bears full burden. It correctly identifies the tool as a read operation (no destructive hint) and lists the returned data fields, which is transparent. However, it does not mention pagination, response format, or if the data is cached.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence of appropriate length, front-loading the core purpose. Every word contributes meaning, with no filler or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only one parameter, no output schema, and no annotations, the description adequately covers the basics: what it returns and what input is needed. It could be slightly improved by mentioning if the data is for a specific season range or if it includes future races.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (season parameter is described). The description adds no extra meaning beyond the schema: it restates that season is a year string. It does not elaborate on accepted formats (e.g., '2025' vs '25') or range limitations.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns a full race calendar/schedule for an F1 season, listing specific fields (round number, race name, circuit, location, date). It uses a specific verb ('Get') and resource ('full race calendar/schedule'), and distinguishes from siblings like get_race_results and get_current_standings by focusing on schedule data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for retrieving the schedule for a given season, but does not explicitly state when to use this tool versus alternatives like get_race_results (which likely returns results for a specific race) or get_current_standings (standings). It provides no guidance on season selection (e.g., only historical seasons? future?) or constraints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It states the tool retrieves or lists memories but does not disclose behavioral traits like whether memory persists across sessions, or if there are limits on key length or number of memories. Since annotations are absent, more detail would be helpful.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no fluff. First sentence states core function, second provides usage guidance. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description does not need to explain return values. It adequately explains behavior for a simple key-value retrieval. However, it could mention that keys are case-sensitive or note if listing all memories returns keys only or values too. Still, it is complete enough for most uses.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds context that omitting the key lists all memories, which is already implied by the schema (key not required) but clarified. No additional parameter meaning beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a memory by key or lists all memories, which distinguishes it from sibling tools like 'remember' (store) and 'forget' (delete). It explicitly specifies the resource (stored memory) and action (retrieve/list).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells the agent when to use the tool: 'to retrieve context you saved earlier...' and implies when to omit key to list all. It differentiates from siblings: 'remember' and 'forget' are separate tools for different operations. No exclusion is needed as the usage is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
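A sketch of the full memory lifecycle across remember, recall, and forget (all described above), again against a hypothetical connected `client`:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

async function memoryLifecycle(client: Client) {
  // Store a finding under a key.
  await client.callTool({
    name: "remember",
    arguments: { key: "target_ticker", value: "AAPL" },
  });

  // Retrieve it later; omit `key` entirely to list all stored keys.
  const recalled = await client.callTool({
    name: "recall",
    arguments: { key: "target_ticker" },
  });
  console.log(recalled.content);

  // Delete it when no longer needed (no documented undo).
  await client.callTool({
    name: "forget",
    arguments: { key: "target_ticker" },
  });
}
```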
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses memory persistence behavior (authenticated users get persistent memory; anonymous sessions last 24 hours). No annotations are provided, so the description carries the full burden; it covers the key behavioral traits without contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, concise and front-loaded: core purpose first, then usage guidance, then behavioral detail. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity (2 required params, no output schema), the description adequately covers purpose, usage, and behavior. Lacks mention of return value, but return is likely trivial for a store operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. Description does not add additional meaning beyond the schema's parameter descriptions, which already provide examples and context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly specifies verb 'store' and resource 'key-value pair in session memory', distinguishing it from sibling tools like recall (retrieve) and forget (delete).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use (save intermediate findings, user preferences, context) and hints at persistence differences between authenticated and anonymous sessions. Does not explicitly mention when not to use or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming this connector lets you:

- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:

- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:

- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:

- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.