Zendesk
Server Details
Zendesk MCP Pack — tickets, users, organizations via OAuth.
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | pipeworx-io/mcp-zendesk |
| GitHub Stars | 0 |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.6/5 across 10 of 10 tools scored. Lowest: 2.4/5.
The Pipeworx tools (ask_pipeworx, discover_tools, forget, recall, remember) are general-purpose and not specific to Zendesk, creating ambiguity about when to use them vs. the Zendesk-specific tools. The Zendesk tools are well-distinguished, but the set as a whole mixes two domains without clear integration.
Zendesk tools use a consistent 'zd_verb_noun' pattern, while Pipeworx tools use short imperative verbs (ask, discover, forget, recall, remember) with no common prefix. This mixed convention reduces overall consistency.
Ten tools is a reasonable count for a server that appears to integrate a general memory/query system with a specific ticketing product (Zendesk). However, the Zendesk subset is minimal (5 tools), which may feel under-scoped for a full ticketing system.
The Zendesk tools cover only basic read operations (get ticket, get user, list tickets/users, search tickets). Critical write operations, such as creating or updating tickets and users, are missing, leaving the Zendesk integration incomplete. The Pipeworx memory tools are self-contained but unrelated.
Available Tools
10 tools

ask_pipeworx (grade A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
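For illustration, a minimal sketch of an MCP `tools/call` request for this tool, assuming the standard JSON-RPC envelope; the question is taken from the description's own examples:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
```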
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that the tool automatically picks the right tool and fills arguments, which is a key behavioral trait. With no annotations provided, the description carries the burden and handles it well by explaining its decision-making approach. It doesn't mention limits like query complexity or data recency, but overall it's transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (three sentences) and front-loaded with the core action. Every sentence adds value: the first states the purpose, the second explains the mechanism, and the third provides examples. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (one required parameter, no output schema, no nested objects), the description is complete. It explains how the tool works, what it does, and provides examples. There are no gaps given the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single parameter 'question', which is already described as 'Your question or request in natural language'. The description adds no additional meaning beyond the schema, so a baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to answer natural language questions by selecting the best data source, filling arguments, and returning results. It explicitly distinguishes itself from sibling tools that require manual tool selection and schema learning, with examples illustrating its use.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear guidance on when to use the tool: for asking questions in plain English without needing to browse tools or learn schemas. It also gives examples of appropriate queries. However, it does not explicitly state when not to use it or mention alternative tools for specific cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (grade A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (max 50) | 20 |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
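A sketch of the same request shape for discovery, reusing one of the schema's example queries; the limit of 10 is an arbitrary choice:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "discover_tools",
    "arguments": {
      "query": "find trade data between countries",
      "limit": 10
    }
  }
}
```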
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It clearly states the tool's behavior: it searches the catalog and returns the most relevant tools with names and descriptions. It does not mention auth requirements, rate limits, or other performance characteristics, but given the search/read nature of the tool, the description is adequate.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (two sentences) with no filler. The first sentence states purpose, the second provides usage context. Every word earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 params, no output schema, no nested objects), the description covers all essential aspects: what it does, when to use it, and the nature of its input. No output schema is needed as the return is described ('tools with names and descriptions').
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so baseline is 3. The description adds value by explaining that the query is a natural language description and gives concrete examples (e.g., 'analyze housing market trends'), which enriches the schema's description. The limit parameter is well-explained in the schema, and the description doesn't add much for it. Overall, the description complements the schema well.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: searching a tool catalog by describing what you need, returning relevant tools with names and descriptions. It distinguishes itself from siblings by being the discovery/search tool among 500+ tools, whereas siblings like ask_pipeworx, remember, etc., serve different functions.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly advises to call this tool FIRST when needing to find the right tools among 500+ options. This provides clear when-to-use guidance and sets priority context.
forget (grade A)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
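For brevity, the remaining sketches show only the `params` object; the JSON-RPC envelope is the same as in the ask_pipeworx example above. The key here is hypothetical:

```json
{
  "name": "forget",
  "arguments": { "key": "subject_property" }
}
```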
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavior. It states it deletes a memory, but doesn't mention permanence, side effects (e.g., cascade effects), error conditions (e.g., key not found), or access permissions. For a destructive operation, more transparency is needed.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single short sentence, extremely concise and front-loaded with the action.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple delete operation with one required parameter and no output schema, the description is minimally adequate. However, it lacks context about return value, error handling, and idempotency.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already describes the parameter. The description adds no additional semantics beyond 'by key', but the schema description is adequate for this simple case.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (Delete), the target (a stored memory), and the identifier (by key). It succinctly distinguishes the tool from siblings like 'recall' and 'remember'.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a specific memory needs to be deleted, but provides no guidance on when to use alternatives (e.g., recall for reading, remember for storing). No when-not-to-use or prerequisites mentioned.
recall (grade A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
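A sketch of the `params` for the by-key mode, with a hypothetical key; per the schema, omitting `key` (i.e., passing empty arguments) lists all stored keys instead:

```json
{
  "name": "recall",
  "arguments": { "key": "subject_property" }
}
```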
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states the tool retrieves or lists memories, which implies read-only behavior. However, it does not disclose whether this operation is always safe, if there are any side effects, or how the return format looks. A 3 is appropriate as it gives basic behavioral context but lacks depth.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences, front-loaded with the main purpose and a usage hint. Every sentence adds value with no fluff.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description could mention the return format or that it returns memory content or keys. However, for a simple retrieval tool with one optional parameter, the description is sufficiently complete for an agent to use it correctly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds value by explaining that omitting the key lists all memories, which is not in the schema. This extra semantic information justifies a 4.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a memory by key or lists all memories if key is omitted. The verb 'retrieve' and resource 'memory' are specific, and the dual behavior is explicitly described, distinguishing it from siblings like 'remember' and 'forget'.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use the tool ('to retrieve context you saved earlier') and notes the two modes (by key vs. list all). While it does not explicitly mention when not to use it or name alternatives, the context is clear enough for an agent to decide.
remember (grade A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
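A sketch of the `params`, pairing one of the schema's example keys with a hypothetical value:

```json
{
  "name": "remember",
  "arguments": {
    "key": "target_ticker",
    "value": "AAPL"
  }
}
```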
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses behavioral traits: persistent memory for authenticated users, 24-hour session for anonymous. No annotations provided, so description carries full burden and does well. No contradictions.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, no wasted words. Front-loaded with the core action, then usage guidance, then behavioral notes. Every sentence serves a purpose.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and two simple parameters, description is complete enough. It covers purpose, usage, persistence details. Missing explicit mention of return value (likely success confirmation), but minimal for this tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for both 'key' and 'value'. Description adds context about what kinds of values to store (findings, addresses, etc.) and example keys. Beyond schema, it explains the persistence behavior tied to authentication, adding value.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly describes storing a key-value pair in session memory, specifying verb ('store'), resource ('key-value pair in session memory'), and purpose. Distinguishes from sibling tools like 'forget' and 'recall' by its unique role.
Does the description explain when to use this tool, when not to, or what alternatives exist?
States when to use: to save intermediate findings, user preferences, or context across tool calls. Provides context about persistence (authenticated vs anonymous). Does not explicitly say when not to use or mention alternatives, but usage is well implied.
zd_get_ticket (grade B)
Get a Zendesk ticket by ID.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Ticket ID | |
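A sketch of the `params`; the schema does not state the ID type, so the numeric literal is an assumption based on Zendesk's numeric ticket IDs:

```json
{
  "name": "zd_get_ticket",
  "arguments": { "id": 12345 }
}
```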
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It does not mention return format, error behavior, rate limits, or authentication needs. 'Get' suggests read-only, but no explicit statement.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded verb+resource. Efficient but could mention sibling tools or when to use.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Simple tool with one parameter, but no output schema. Description should at least hint at return structure or common fields. Incomplete for an agent to understand what it gets.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and schema describes 'id' as 'Ticket ID'. Description adds no extra meaning beyond schema. Baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a clear verb ('Get'), resource ('a Zendesk ticket'), and parameter ('by ID'). It distinguishes from siblings like zd_list_tickets and zd_search_tickets, which have different verbs or parameters.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this vs siblings. It implies usage when you have a ticket ID, but does not mention alternatives for searching or listing tickets.
zd_get_user (grade C)
Get a Zendesk user by ID.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | User ID | |
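The calling shape mirrors zd_get_ticket; the numeric ID is again an assumption:

```json
{
  "name": "zd_get_user",
  "arguments": { "id": 67890 }
}
```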
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It does not mention whether the operation is read-only, what happens if the user doesn't exist, or any side effects. The description is too brief for a tool with no annotation support.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, short sentence that is front-loaded and to the point. It contains no fluff, but could include more context without being verbose.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no output schema, the description should explain what the return value looks like or any edge cases. It does not, leaving the agent to infer the response format. The description is incomplete for a simple get operation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the parameter 'id' is already well-documented in the schema. The description adds no additional semantics beyond 'by ID', which is already implied. Baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get a Zendesk user by ID', with a specific verb ('Get'), resource ('Zendesk user'), and identifier method ('by ID'). It is readily distinguishable from sibling tools (zd_get_ticket, zd_list_users); while it does not explicitly differentiate itself from similar get operations, the context makes it clear enough.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives like zd_search_tickets or zd_list_users. The description is minimal and does not indicate prerequisites, limitations, or when not to use it.
zd_list_tickets (grade C)
List recent Zendesk tickets.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number | |
| sort_by | No | Sort field (created_at, updated_at, priority, status) | |
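A sketch of the `params`, combining both optional parameters; `updated_at` is one of the sort fields enumerated in the schema, and the page number is arbitrary:

```json
{
  "name": "zd_list_tickets",
  "arguments": {
    "page": 2,
    "sort_by": "updated_at"
  }
}
```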
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must carry burden. It does not disclose pagination limits, rate limits, or that only recent tickets are listed. The term 'recent' is ambiguous.
Is the description appropriately sized, front-loaded, and free of redundancy?
Very short description (one sentence), which is concise but lacks necessary details. No wasted words, but incomplete for a list operation.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, so description should explain return format or pagination. It does not. Given parameter count and complexity, description is insufficient.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. However, description does not explain meaning of parameters beyond schema, such as default values for sort_by or page size.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'List recent Zendesk tickets,' which is a clear verb and resource. However, it does not differentiate itself from sibling tools like 'zd_search_tickets' or 'zd_get_ticket', and 'recent' is vague.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like zd_search_tickets for filtering. No exclusions or context for use.
zd_list_users (grade C)
List Zendesk users.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number | |
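A sketch of the `params`; `page` is the only option:

```json
{
  "name": "zd_list_users",
  "arguments": { "page": 1 }
}
```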
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It does not mention pagination limits, ordering, or whether it lists all users or only active ones. The behavior is under-specified for a list operation.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single short sentence, which is concise but arguably too terse. Could include more context without becoming verbose.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (1 param, no output schema), the description is incomplete. It doesn't explain the response format, pagination behavior, or any default limits. More context is needed for a list operation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with one parameter ('page') already described in the schema. The description adds no further meaning beyond the schema. Baseline of 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and the resource 'Zendesk users', distinguishing it from sibling tools like zd_get_user (single user) and zd_search_tickets (search). It's concise and unambiguous.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like zd_get_user (or a zd_search_users, if one existed). The description is minimal and provides no context about filtering or pagination.
zd_search_tickets (grade B)
Search Zendesk tickets with a query string.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search query (e.g., "status:open priority:high") | |
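A sketch of the `params`, reusing the schema's own example query; the `field:value` syntax matches Zendesk's native search query language, so other filters (requester, tags, date ranges) presumably work as well:

```json
{
  "name": "zd_search_tickets",
  "arguments": { "query": "status:open priority:high" }
}
```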
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description is minimal. It does not disclose what the tool returns (e.g., list of tickets or count), any pagination behavior, rate limits, or authentication requirements. The tool performs a search operation, but no behavioral traits are explained.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence that gets straight to the point. It is appropriately short for a tool with a single parameter.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple parameter set and no output schema, the description is minimally adequate. However, for a search tool, users might benefit from knowing the expected output format or any limitations.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage and describes the query parameter with an example. The description adds no additional semantics beyond the schema, so baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Search' and the resource 'Zendesk tickets', and mentions the use of a query string. However, it could better distinguish itself from sibling tools like zd_list_tickets, which also returns tickets but likely does not support search queries.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching tickets with a query string but does not provide explicit guidance on when to use this vs alternatives like zd_list_tickets. It could mention that zd_list_tickets is for basic listing without search.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
A connector's status is marked unhealthy when Glama is unable to connect to the server. This can happen for several reasons:

- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.