Server Details
GBIF MCP — wraps the Global Biodiversity Information Facility API v1 (free, no auth)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-gbif
- GitHub Stars: 0
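The underlying GBIF API v1 is public and needs no key, so you can probe it directly to see the data the connector exposes. A minimal sketch, assuming the server forwards requests to the standard GBIF endpoints:

```python
# Probe the upstream GBIF API v1 directly (public, no auth). The endpoint and
# fields below are GBIF's own; how exactly the MCP server maps onto them is
# an assumption.
import json
import urllib.parse
import urllib.request

query = urllib.parse.urlencode({"name": "Puma concolor"})
with urllib.request.urlopen(f"https://api.gbif.org/v1/species/match?{query}", timeout=10) as resp:
    match = json.load(resp)

# usageKey is a GBIF taxon key of the kind get_species and get_occurrences expect.
print(match.get("usageKey"), match.get("scientificName"), match.get("rank"))
```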
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 8 of 8 tools scored. Lowest: 2.9/5.
Most tools have distinct purposes, such as ask_pipeworx for general queries, discover_tools for tool discovery, and GBIF-related tools for species data. However, forget and recall/remember are memory operations that could overlap conceptually, though their specific functions (delete vs. retrieve/store) are clear from descriptions.
The naming is mixed with no consistent pattern: ask_pipeworx uses a verb-object style, discover_tools is verb-noun, forget is a single verb, get_occurrences and get_species are verb-noun, recall and remember are single verbs, and search_species is verb-noun. While readable, the lack of a uniform convention reduces predictability.
With 8 tools, the count is reasonable for a server that combines general querying, tool discovery, memory management, and GBIF data access. It's slightly broad in scope but manageable, as each tool serves a specific function without obvious redundancy.
The tool set covers general querying, tool discovery, memory operations, and GBIF species/occurrence data, but there are gaps. For example, in the GBIF domain, tools for updating or deleting data are missing, and the memory tools lack advanced features like bulk operations. The surface is functional but not fully comprehensive for the implied domains.
Available Tools
8 tools

ask_pipeworx (quality grade A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
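At the protocol level, a call to this tool is a standard MCP `tools/call` request; the only schema-defined argument is `question`. A minimal sketch of the request body (transport setup over Streamable HTTP is omitted):

```python
# Sketch of an MCP tools/call request for ask_pipeworx. The envelope is the
# standard MCP JSON-RPC shape; only the "question" argument comes from the
# schema above.
ask_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}
```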
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It explains that Pipeworx 'picks the right tool, fills the arguments, and returns the result,' which covers the automation aspect. However, it lacks details on error handling, rate limits, authentication needs, or what happens if no data source is found. The description adds some context but is incomplete for a tool that abstracts complex operations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core functionality in the first sentence, followed by explanatory details and examples. Every sentence earns its place by clarifying usage and providing concrete instances, with no wasted words. It is appropriately sized for a tool with a simple parameter set.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (abstracting multiple data sources) and lack of annotations or output schema, the description is somewhat incomplete. It explains the high-level behavior but omits details on response formats, error conditions, or limitations. The examples help, but more context on what 'best available data source' means would improve completeness for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'question' parameter documented as 'Your question or request in natural language.' The description reinforces this by stating 'Ask a question in plain English' and providing examples, adding practical meaning beyond the schema. With only one parameter, the baseline is high, and the description effectively complements the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('answer'), and mechanism ('Pipeworx picks the right tool, fills the arguments'), distinguishing it from sibling tools that appear to be more specific (e.g., get_species, search_species).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'No need to browse tools or learn schemas — just describe what you need.' It provides clear alternatives (implicitly suggesting not to use sibling tools for general queries) and includes examples like 'What is the US trade deficit with China?' to illustrate appropriate use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (quality grade A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
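Following the pattern the description recommends, an agent sends discover_tools before anything else; only the `params` block below differs from the ask_pipeworx envelope shown earlier. A sketch of the arguments, with the `limit` semantics taken from the schema:

```python
# Arguments for discover_tools; reuse the JSON-RPC envelope shown for
# ask_pipeworx and swap in this params block.
discover_params = {
    "name": "discover_tools",
    "arguments": {
        "query": "find trade data between countries",  # natural-language description of the task
        "limit": 5,  # optional; the schema says default 20, max 50
    },
}
```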
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the search functionality and return format ('most relevant tools with names and descriptions'), but lacks details on error handling, rate limits, or authentication requirements. It adds some value but leaves behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first explains the core functionality, and the second provides critical usage guidance. Every word serves a purpose, with no redundancy or unnecessary elaboration, making it highly readable and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search functionality with 2 parameters) and lack of annotations or output schema, the description does well by explaining the purpose, usage context, and return format. However, it could benefit from more behavioral details like error cases or result limitations to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description mentions the query parameter implicitly ('by describing what you need') but adds no meaningful semantic context beyond what the schema provides. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('search', 'returns') and identifies the resource ('Pipeworx tool catalog'). It distinguishes from sibling tools by focusing on tool discovery rather than data retrieval, making its role explicit and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Call this FIRST when you have 500+ tools available and need to find the right ones for your task'), including a specific condition (500+ tools) and a clear alternative scenario (using it first). This gives strong contextual direction for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (quality grade C)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
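For illustration, the call arguments are minimal; note that the description does not say whether deletion is permanent or what a missing key returns:

```python
# Arguments for forget. Only the key is required; permanence and missing-key
# behavior are undocumented, so treat the call as destructive.
forget_params = {
    "name": "forget",
    "arguments": {"key": "subject_property"},  # a key previously stored with remember
}
```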
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is a deletion operation, implying it's destructive, but doesn't specify whether deletions are permanent, reversible, or require specific permissions. For a mutation tool with zero annotation coverage, this is a significant gap in safety and operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's function without any wasted words. It's appropriately sized for a simple tool with one parameter and is front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no annotations and no output schema, the description is incomplete. It doesn't explain what happens after deletion (e.g., confirmation, error if key doesn't exist), return values, or error conditions. Given the mutation nature and lack of structured safety hints, more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the single parameter 'key' fully documented in the schema as 'Memory key to delete'. The description adds no additional meaning beyond what the schema provides, such as format examples or constraints. With high schema coverage, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and the resource ('a stored memory by key'), making the purpose immediately understandable. However, it doesn't differentiate this tool from potential siblings like 'recall' or 'remember' which might also manipulate memories, leaving room for improvement in sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'recall' or 'remember'. It doesn't specify prerequisites (e.g., needing an existing memory key) or exclusions, leaving the agent to infer usage context from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_occurrences (quality grade A)
Get georeferenced observation records for a species with coordinates, dates, and sources. Filter by country code (e.g., 'US', 'BR', 'AU') to narrow results geographically.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | GBIF taxon key (integer) | |
| limit | No | Maximum records to return (1-300, default 20) | |
| country | No | ISO 3166-1 alpha-2 country code to filter occurrences (e.g., "US", "DE") | |
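The parameters line up naturally with GBIF's public /v1/occurrence/search endpoint; whether the server forwards them one-to-one is an assumption, but the upstream call itself can be tried directly:

```python
# Sketch of the upstream GBIF occurrence search this tool presumably wraps.
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "taxonKey": 2435099,  # integer GBIF taxon key, e.g. from search_species
    "country": "US",      # ISO 3166-1 alpha-2, as in the "country" parameter
    "limit": 20,          # 1-300 per the schema
})
with urllib.request.urlopen(f"https://api.gbif.org/v1/occurrence/search?{params}", timeout=10) as resp:
    page = json.load(resp)

for rec in page.get("results", []):
    print(rec.get("decimalLatitude"), rec.get("decimalLongitude"), rec.get("eventDate"))
```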
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the tool's behavior as a retrieval operation with optional filtering, but it lacks details on rate limits, authentication needs, error handling, or response format. For a tool with no annotations, this is a moderate gap in behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence and adds a concise optional feature in the second. It uses no wasted words, is appropriately sized for the tool's complexity, and every sentence earns its place by providing essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, no output schema, no annotations), the description is adequate but incomplete. It covers the purpose and basic usage but lacks details on behavioral traits like response structure or limitations. Without annotations or output schema, more context would be helpful for full completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the input schema fully documents the parameters (key, limit, country). The description adds minimal value beyond the schema by mentioning the optional country filter, but it does not provide additional syntax or format details. Baseline 3 is appropriate as the schema handles most of the parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Retrieve') and resource ('georeferenced occurrence records for a taxon'), specifying the data type and scope. It distinguishes from sibling tools like 'get_species' and 'search_species' by focusing on occurrence records rather than species information, making the purpose specific and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use the tool ('Retrieve georeferenced occurrence records for a taxon') and includes an optional filter ('Optionally filter by ISO 3166-1 alpha-2 country code'), but it does not explicitly state when not to use it or name alternatives like the sibling tools, leaving some guidance implicit rather than fully explicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_species (quality grade A)
Get complete taxonomic classification for a species (kingdom through subspecies). Requires taxon key from search_species. Returns all ranks and accepted name status.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | GBIF taxon key (integer) | |
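A sketch of the call arguments; the taxon key shown is a placeholder and should come from search_species:

```python
# get_species takes the integer taxon key returned by search_species.
taxon_key = 2435099  # placeholder; use a real key from search_species
get_species_params = {"name": "get_species", "arguments": {"key": taxon_key}}
```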
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions the tool retrieves 'full taxonomic details,' which implies a read-only operation, but lacks details on rate limits, error handling, or response format. The description adds some behavioral context but is incomplete for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the main purpose and followed by usage guidance. Every sentence earns its place with no wasted words, making it highly efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is mostly complete. It covers purpose and usage well but lacks details on behavioral aspects like response format or error handling, which would be beneficial for full completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the 'key' parameter. The description adds minimal value by specifying 'integer taxon key,' which is already in the schema. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Get full taxonomic details') and resource ('GBIF species'), and distinguishes it from sibling tools by mentioning 'search_species' for finding keys. It precisely communicates what the tool does without being vague or tautological.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('by its integer taxon key') and when to use an alternative ('Use search_species first to find the key'). It clearly differentiates this tool from its sibling, offering complete usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (quality grade A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
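The two modes described above translate into two argument shapes:

```python
# recall has two modes per the schema: fetch one memory by key, or omit the
# key entirely to list every stored key.
recall_one = {"name": "recall", "arguments": {"key": "subject_property"}}
recall_all = {"name": "recall", "arguments": {}}
```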
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It describes the tool's behavior (retrieval/listing of stored memories) and persistence across sessions, but doesn't disclose important behavioral traits like authentication needs, rate limits, error conditions, or what happens if a key doesn't exist. The description adds some context but leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each earn their place. The first sentence explains the dual functionality, and the second provides usage context. No wasted words, and information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description should do more to explain what 'retrieve' actually returns (memory content format, structure of list results). While it covers basic purpose and usage well, for a memory retrieval tool with persistence across sessions, more behavioral detail would be helpful for an AI agent to use it effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the single parameter. The description adds meaningful context by explaining the semantic effect of omitting the key ('list all stored memories') and relating it to the tool's purpose ('retrieve context you saved earlier'). This goes beyond the schema's technical description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory', 'all stored memories'). It distinguishes from siblings by mentioning 'context you saved earlier' which implies a memory system distinct from tools like 'search_species' or 'get_occurrences'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('to retrieve context you saved earlier') and provides clear alternatives: retrieve by key or list all by omitting the key. It also distinguishes usage context ('in the session or in previous sessions'), though it doesn't explicitly say when NOT to use it versus other sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (quality grade A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
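A sketch of the call arguments, reusing the example keys from the schema:

```python
# remember stores free text under a key. Persistence depends on auth:
# authenticated users keep memories, anonymous sessions expire after 24 hours.
remember_params = {
    "name": "remember",
    "arguments": {
        "key": "target_ticker",                        # example key from the schema
        "value": "AAPL; user prefers quarterly data",  # any text
    },
}
```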
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it's a write operation ('store'), specifies persistence differences (authenticated vs. anonymous sessions with 24-hour limit), and implies session-scoped memory. It doesn't cover error cases or rate limits, but adds substantial context beyond basic function.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by usage context and behavioral details. Every sentence adds value (e.g., use cases, persistence rules) with zero wasted words, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter tool with no annotations and no output schema, the description is largely complete: it explains the tool's function, usage context, and key behavioral aspects like persistence. It could improve by mentioning return values or error handling, but it adequately covers the essentials given the tool's moderate complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters (key and value) with examples. The description adds no additional parameter-specific information beyond what's in the schema, such as constraints or usage tips, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('store a key-value pair') and resource ('in your session memory'), distinguishing it from siblings like 'recall' (retrieval) and 'forget' (deletion). It explicitly mentions use cases like saving intermediate findings, user preferences, or context across tool calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool (e.g., for saving data across tool calls) and distinguishes it by function from siblings like 'recall' for retrieval. However, it lacks explicit exclusions or alternatives, such as when not to use it versus other storage mechanisms.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_species (quality grade B)
Search for species by common or scientific name. Returns matched taxa with rank, classification status, and taxonomic hierarchy. Use get_species with the taxon key for full details.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum results to return (1-100, default 20) | |
| query | Yes | Species name or keyword (e.g., "Homo sapiens", "oak") | |
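This likely fronts GBIF's public /v1/species/search endpoint (an assumption, not confirmed by the listing); trying it directly shows the kind of taxon records the tool returns and where the taxon key for get_species comes from:

```python
# Sketch of the upstream GBIF species search this tool presumably wraps.
import json
import urllib.parse
import urllib.request

query = urllib.parse.urlencode({"q": "oak", "limit": 5})  # schema: 1-100, default 20
with urllib.request.urlopen(f"https://api.gbif.org/v1/species/search?{query}", timeout=10) as resp:
    results = json.load(resp)["results"]

for taxon in results:
    # "key" here is the taxon key that get_species expects
    print(taxon.get("key"), taxon.get("scientificName"), taxon.get("rank"))
```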
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions what the tool returns ('matched taxa with rank, status, and classification'), which is helpful, but lacks details on error handling, rate limits, authentication needs, or whether this is a read-only operation. For a search tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose, search method, and return values. It is front-loaded with key information and avoids unnecessary words, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search with two parameters) and lack of annotations or output schema, the description is minimally adequate. It covers the basic purpose and return format but omits important contextual details like error conditions, pagination, or how results are ordered. Without an output schema, the agent must rely on the description's brief mention of return values, which is insufficient for full understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, clearly documenting both parameters ('query' and 'limit') with their purposes and constraints. The description adds minimal value beyond the schema, only implying the search scope ('GBIF species backbone') without providing additional syntax or format details. This meets the baseline score of 3 when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Search') and resource ('GBIF species backbone'), and mentions the search criteria ('by name or keyword'). It doesn't explicitly differentiate from sibling tools like 'get_species', but the search focus is clear. The description avoids tautology by providing meaningful context beyond just the tool name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get_species' or 'get_occurrences'. It mentions the search functionality but doesn't specify scenarios where this tool is preferred or excluded, leaving the agent to infer usage based on the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
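Once the file is live, a quick fetch confirms it is reachable and lists the right maintainer email; the domain below is a placeholder:

```python
# Fetch the claim file and confirm the maintainer email. example.com stands
# in for your server's domain.
import json
import urllib.request

with urllib.request.urlopen("https://example.com/.well-known/glama.json", timeout=10) as resp:
    claim = json.load(resp)

emails = [m.get("email") for m in claim.get("maintainers", [])]
print(emails)  # must include the email tied to your Glama account
```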
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.