Server Details

Zendesk MCP Pack — tickets, users, organizations via OAuth.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-zendesk
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions — Grade: B

Average 3.6/5 across 10 of 10 tools scored. Lowest: 2.4/5.

Server Coherence — Grade: C

Disambiguation: 2/5

The Pipeworx tools (ask_pipeworx, discover_tools, forget, recall, remember) are general-purpose and not specific to Zendesk, creating ambiguity about when to use them vs. the Zendesk-specific tools. The Zendesk tools are well-distinguished, but the set as a whole mixes two domains without clear integration.

Naming Consistency: 3/5

Zendesk tools use a consistent 'zd_verb_noun' pattern, while Pipeworx tools use short imperative verbs (ask, discover, forget, recall, remember) with no common prefix. This mixed convention reduces overall consistency.

Tool Count: 4/5

10 tools is a reasonable count for a server that appears to integrate a general memory/query system with a specific CRM (Zendesk). However, the Zendesk subset is minimal (5 tools), which may feel slightly under-scoped for a full ticketing system.

Completeness: 2/5

The Zendesk tools cover only basic read operations (get ticket, get user, list tickets/users, search tickets). Missing critical write operations like creating or updating tickets/users, making the Zendesk integration incomplete. The Pipeworx memory tools are self-contained but unrelated.

Available Tools

10 tools
ask_pipeworx — Grade: A

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters:
- question (required): Your question or request in natural language
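As an illustration, a raw call to this tool uses MCP's standard JSON-RPC `tools/call` method. The framing below follows the MCP specification; the request `id` is arbitrary and the question is taken from the description's own examples.

```python
import json

# Sketch of an MCP "tools/call" request for ask_pipeworx. The JSON-RPC
# framing ("jsonrpc", "method", "params.name", "params.arguments") follows
# the MCP specification; the id and question values are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}

payload = json.dumps(request)
```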
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses that the tool automatically picks the right tool and fills arguments, which is a key behavioral trait. With no annotations provided, the description carries the burden and handles it well by explaining its decision-making approach. It doesn't mention limits like query complexity or data recency, but overall it's transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (three sentences) and front-loaded with the core action. Every sentence adds value: the first states the purpose, the second explains the mechanism, and the third provides examples. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (one required parameter, no output schema, no nested objects), the description is complete. It explains how the tool works, what it does, and provides examples. There are no gaps given the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for the single parameter 'question', which is already described as 'Your question or request in natural language'. The description adds no additional meaning beyond the schema, so a baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to answer natural language questions by selecting the best data source, filling arguments, and returning results. It explicitly distinguishes itself from sibling tools that require manual tool selection and schema learning, with examples illustrating its use.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear guidance on when to use the tool: for asking questions in plain English without needing to browse tools or learn schemas. It also gives examples of appropriate queries. However, it does not explicitly state when not to use it or mention alternative tools for specific cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools — Grade: A

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters:
- limit (optional): Maximum number of tools to return (default 20, max 50)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
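The schema's "default 20, max 50" constraint can be sketched as a simple clamp. Whether the server clamps or rejects out-of-range values is not documented, so this helper is an assumption, not the server's actual behavior.

```python
def normalize_limit(limit=None, default=20, maximum=50):
    """Apply discover_tools' documented default (20) and cap (50).

    Clamping (rather than rejecting) out-of-range values is an assumption;
    the schema only states the default and the maximum.
    """
    if limit is None:
        return default
    return max(1, min(int(limit), maximum))
```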
Behavior: 4/5

No annotations are provided, so the description carries the full burden. It clearly states the tool's behavior: it searches and returns relevant tools with names and descriptions. It does not mention any destructive side effects, auth requirements, or performance characteristics, but given the search/read nature, the description is adequate. However, it does not mention whether results are ordered by relevance, which would be helpful.

Conciseness: 5/5

The description is concise (two sentences) with no filler. The first sentence states purpose, the second provides usage context. Every word earns its place.

Completeness: 5/5

Given the tool's simplicity (2 params, no output schema, no nested objects), the description covers all essential aspects: what it does, when to use it, and the nature of its input. No output schema is needed as the return is described ('tools with names and descriptions').

Parameters: 4/5

The input schema has 100% description coverage, so baseline is 3. The description adds value by explaining that the query is a natural language description and gives concrete examples (e.g., 'analyze housing market trends'), which enriches the schema's description. The limit parameter is well-explained in the schema, and the description doesn't add much for it. Overall, the description complements the schema well.

Purpose: 5/5

The description clearly states the tool's purpose: searching a tool catalog by describing what you need, returning relevant tools with names and descriptions. It distinguishes itself from siblings by being the discovery/search tool among 500+ tools, whereas siblings like ask_pipeworx, remember, etc., serve different functions.

Usage Guidelines: 5/5

The description explicitly advises to call this tool FIRST when needing to find the right tools among 500+ options. This provides clear when-to-use guidance and sets priority context.

forget — Grade: A

Delete a stored memory by key.

Parameters:
- key (required): Memory key to delete
Behavior: 2/5

No annotations are provided, so the description must fully disclose behavior. It states it deletes a memory, but doesn't mention permanence, side effects (e.g., cascade effects), error conditions (e.g., key not found), or access permissions. For a destructive operation, more transparency is needed.

Conciseness: 5/5

A single six-word sentence. Extremely concise and front-loaded with the action.

Completeness: 3/5

For a simple delete operation with one required parameter and no output schema, the description is minimally adequate. However, it lacks context about return value, error handling, and idempotency.

Parameters: 4/5

Schema coverage is 100%, so the schema already describes the parameter. The description adds no additional semantics beyond 'by key', but the schema description is adequate for this simple case.

Purpose: 5/5

The description clearly states the action (Delete), the target (a stored memory), and the identifier (by key). It succinctly distinguishes the tool from siblings like 'recall' and 'remember'.

Usage Guidelines: 3/5

The description implies usage when a specific memory needs to be deleted, but provides no guidance on when to use alternatives (e.g., recall for reading, remember for storing). No when-not-to-use guidance or prerequisites are mentioned.

recall — Grade: A

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters:
- key (optional): Memory key to retrieve (omit to list all keys)
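The dual behavior described above (fetch one memory by key, or list all keys when the key is omitted) can be sketched as a simple dispatch. The dict-backed store and the return shapes are assumptions for illustration, not the server's actual storage; the example keys mirror those in `remember`'s schema.

```python
def recall(store, key=None):
    """Sketch of recall's two modes: with a key, return that memory's
    value (None if it was never stored); without a key, list stored keys."""
    if key is None:
        return sorted(store)
    return store.get(key)

# A toy in-memory store for the demonstration.
memories = {"target_ticker": "AAPL", "user_preference": "metric units"}
```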
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It states the tool retrieves or lists memories, which implies read-only behavior. However, it does not disclose whether this operation is always safe, whether there are any side effects, or what the return format looks like. A 3 is appropriate as it gives basic behavioral context but lacks depth.

Conciseness: 5/5

The description is two concise sentences, front-loaded with the main purpose and a usage hint. Every sentence adds value with no fluff.

Completeness: 4/5

Given no output schema, the description could mention the return format or that it returns memory content or keys. However, for a simple retrieval tool with one optional parameter, the description is sufficiently complete for an agent to use it correctly.

Parameters: 4/5

Schema description coverage is 100%, so baseline is 3. The description adds value by explaining that omitting the key lists all memories, which is not in the schema. This extra semantic information justifies a 4.

Purpose: 5/5

The description clearly states the tool retrieves a memory by key or lists all memories if key is omitted. The verb 'retrieve' and resource 'memory' are specific, and the dual behavior is explicitly described, distinguishing it from siblings like 'remember' and 'forget'.

Usage Guidelines: 4/5

The description explains when to use the tool ('to retrieve context you saved earlier') and notes the two modes (by key vs. list all). While it does not explicitly mention when not to use it or name alternatives, the context is clear enough for an agent to decide.

remember — Grade: A

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters:
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)
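The persistence rule in the description (authenticated users get persistent memory, anonymous sessions last 24 hours) could be implemented as a conditional TTL. The entry layout and field names below are hypothetical, chosen only to make the rule concrete.

```python
import time

ANON_TTL_SECONDS = 24 * 60 * 60  # 24-hour anonymous sessions, per the description

def remember(store, key, value, authenticated=False, now=None):
    """Store a key-value pair. Anonymous entries get a 24-hour expiry;
    authenticated entries persist (expires_at of None). Illustrative only."""
    now = time.time() if now is None else now
    store[key] = {
        "value": value,
        "expires_at": None if authenticated else now + ANON_TTL_SECONDS,
    }
    return store[key]
```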
Behavior: 4/5

The description discloses key behavioral traits: persistent memory for authenticated users and a 24-hour session for anonymous ones. With no annotations provided, the description carries the full burden and handles it well, with no contradictions.

Conciseness: 5/5

Three sentences, no wasted words. Front-loaded with the core action, then usage guidance, then behavioral notes. Every sentence serves a purpose.

Completeness: 4/5

Given no output schema and two simple parameters, the description is complete enough: it covers purpose, usage, and persistence details. It omits an explicit mention of the return value (likely a success confirmation), but that is a minor gap for this tool.

Parameters: 4/5

Schema coverage is 100%, with clear descriptions for both 'key' and 'value'. The description adds context about what kinds of values to store (findings, addresses, etc.) and gives example keys. Beyond the schema, it explains the persistence behavior tied to authentication, adding value.

Purpose: 5/5

The description clearly describes storing a key-value pair in session memory, specifying the verb ('store'), resource ('key-value pair in session memory'), and purpose. It distinguishes itself from sibling tools like 'forget' and 'recall' by its unique role.

Usage Guidelines: 4/5

It states when to use the tool (to save intermediate findings, user preferences, or context across tool calls) and provides context about persistence (authenticated vs. anonymous). It does not explicitly say when not to use it or mention alternatives, but usage is well implied.

zd_get_ticket — Grade: B

Get a Zendesk ticket by ID.

Parameters:
- id (required): Ticket ID
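The server does not document its backend calls, but the standard Zendesk REST endpoint for a single ticket is GET /api/v2/tickets/{id}.json on the account's subdomain, which a tool like this presumably wraps. The helper name and subdomain below are illustrative.

```python
def ticket_endpoint(subdomain, ticket_id):
    """Build the standard Zendesk REST URL for a single ticket:
    GET https://{subdomain}.zendesk.com/api/v2/tickets/{id}.json
    (an assumption about what zd_get_ticket calls under the hood)."""
    return f"https://{subdomain}.zendesk.com/api/v2/tickets/{int(ticket_id)}.json"
```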
Behavior: 2/5

No annotations are provided, so the description carries the full burden. It does not mention the return format, error behavior, rate limits, or authentication needs. 'Get' suggests a read-only operation, but this is never stated explicitly.

Conciseness: 4/5

A single sentence with a front-loaded verb and resource. Efficient, but it could mention sibling tools or when to use them.

Completeness: 2/5

A simple tool with one parameter but no output schema. The description should at least hint at the return structure or common fields; as written, an agent cannot tell what it gets back.

Parameters: 3/5

Schema coverage is 100%, and the schema describes 'id' as 'Ticket ID'. The description adds no extra meaning beyond the schema, so the baseline of 3 is appropriate.

Purpose: 5/5

The description uses a clear verb ('Get'), resource ('a Zendesk ticket'), and parameter ('by ID'). It distinguishes from siblings like zd_list_tickets and zd_search_tickets, which have different verbs or parameters.

Usage Guidelines: 2/5

There is no guidance on when to use this tool versus its siblings. It implies usage when you already have a ticket ID, but does not mention alternatives for searching or listing tickets.

zd_get_user — Grade: C

Get a Zendesk user by ID.

Parameters:
- id (required): User ID
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It does not mention whether the operation is read-only, what happens if the user doesn't exist, or any side effects. The description is too brief for a tool with no annotation support.

Conciseness: 4/5

The description is a single, short sentence that is front-loaded and to the point. It contains no fluff, but could include more context without being verbose.

Completeness: 2/5

Given the tool has no output schema, the description should explain what the return value looks like or any edge cases. It does not, leaving the agent to infer the response format. The description is incomplete for a simple get operation.

Parameters: 3/5

Schema coverage is 100%, so the parameter 'id' is already well-documented in the schema. The description adds no additional semantics beyond 'by ID', which is already implied. Baseline 3 is appropriate.

Purpose: 4/5

The description clearly states 'Get a Zendesk user by ID', with a specific verb ('Get'), resource ('Zendesk user'), and identifier method ('by ID'). While it distinguishes from sibling tools (zd_get_ticket, zd_list_users), it does not explicitly differentiate itself from similar get operations, but the context makes it clear enough.

Usage Guidelines: 2/5

No guidance on when to use this tool vs alternatives like zd_search_tickets or zd_list_users. The description is minimal and does not indicate prerequisites, limitations, or when not to use it.

zd_list_tickets — Grade: C

List recent Zendesk tickets.

Parameters:
- page (optional): Page number
- sort_by (optional): Sort field (created_at, updated_at, priority, status)
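The sort_by enumeration in the schema suggests a validation step like the sketch below. The created_at fallback is an assumption, since the schema states no default, and whether the server rejects or silently ignores bad values is undocumented.

```python
ALLOWED_SORT_FIELDS = ("created_at", "updated_at", "priority", "status")

def validate_sort_by(sort_by=None):
    """Accept only the four sort fields the schema enumerates.
    Falling back to created_at when sort_by is omitted is an assumption;
    the schema does not state a default."""
    if sort_by is None:
        return "created_at"
    if sort_by not in ALLOWED_SORT_FIELDS:
        raise ValueError(f"sort_by must be one of {ALLOWED_SORT_FIELDS}")
    return sort_by
```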
Behavior: 2/5

No annotations are provided, so the description must carry the burden. It does not disclose pagination limits or rate limits, and the term 'recent' is left ambiguous.

Conciseness: 3/5

A very short, one-sentence description: concise, with no wasted words, but it lacks details a list operation needs.

Completeness: 2/5

There is no output schema, so the description should explain the return format or pagination behavior; it does not. Given the parameter count and complexity, the description is insufficient.

Parameters: 2/5

Schema description coverage is 100%, so the baseline is 3. However, the description explains nothing beyond the schema, such as the default sort field or the page size.

Purpose: 3/5

The description states 'List recent Zendesk tickets,' which is a clear verb and resource. However, it does not differentiate itself from sibling tools like 'zd_search_tickets' or 'zd_get_ticket', and 'recent' is vague.

Usage Guidelines: 2/5

There is no explicit guidance on when to use this tool versus alternatives like zd_search_tickets for filtering, and no exclusions or usage context.

zd_list_users — Grade: C

List Zendesk users.

Parameters:
- page (optional): Page number
Behavior: 2/5

No annotations are provided, so the description must disclose behavioral traits. It does not mention pagination limits, ordering, or whether it lists all users or only active ones. The behavior is under-specified for a list operation.

Conciseness: 3/5

A single short sentence, which is concise but arguably too terse. Could include more context without becoming verbose.

Completeness: 2/5

Given the simplicity of the tool (1 param, no output schema), the description is incomplete. It doesn't explain the response format, pagination behavior, or any default limits. More context is needed for a list operation.

Parameters: 3/5

Schema coverage is 100% with one parameter ('page') already described in the schema. The description adds no further meaning beyond the schema. Baseline of 3 is appropriate.

Purpose: 4/5

The description clearly states the verb 'List' and the resource 'Zendesk users', distinguishing it from sibling tools like zd_get_user (single user) and zd_search_tickets (search). It's concise and unambiguous.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives like zd_search_users (if it exists) or zd_get_user. The description is minimal and provides no context about filtering or pagination.

zd_search_tickets — Grade: B

Search Zendesk tickets with a query string.

Parameters:
- query (required): Search query (e.g., "status:open priority:high")
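The example in the schema ("status:open priority:high") follows Zendesk's field:value search syntax. A query string can be assembled from keyword filters like so; the helper name is illustrative.

```python
def build_search_query(**filters):
    """Assemble a Zendesk-style search string such as
    "status:open priority:high" from keyword filters.
    Keyword order is preserved, matching the schema's example."""
    return " ".join(f"{field}:{value}" for field, value in filters.items())
```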
Behavior: 2/5

No annotations are provided, and the description is minimal. It does not disclose what the tool returns (e.g., a list of tickets or a count), any pagination behavior, rate limits, or authentication requirements. The tool performs a search operation, but no behavioral traits are explained.

Conciseness: 4/5

The description is a single concise sentence that gets straight to the point. It is appropriately short for a tool with a single parameter.

Completeness: 3/5

Given the simple parameter set and no output schema, the description is minimally adequate. However, for a search tool, users might benefit from knowing the expected output format or any limitations.

Parameters: 3/5

The input schema has 100% coverage and describes the query parameter with an example. The description adds no additional semantics beyond the schema, so baseline 3 is appropriate.

Purpose: 4/5

The description clearly states the verb 'Search' and the resource 'Zendesk tickets', and mentions the use of a query string. However, it could better distinguish from sibling tools like zd_list_tickets which also lists tickets, though that tool likely does not support search queries.

Usage Guidelines: 3/5

The description implies usage for searching tickets with a query string but does not provide explicit guidance on when to use this vs alternatives like zd_list_tickets. It could mention that zd_list_tickets is for basic listing without search.
