Zoho_crm
Server Details
Zoho CRM MCP Pack — wraps the Zoho CRM API v6
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-zoho_crm
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 9 of 10 tools scored. Lowest: 2.9/5.
The tools split into two clear groups: Pipeworx utilities (ask_pipeworx, discover_tools, memory) and Zoho CRM CRUD tools. Within each group, purposes are distinct, but ask_pipeworx and discover_tools could overlap if an agent uses ask_pipeworx to find tools instead of discover_tools.
Naming is inconsistent: Pipeworx tools use simple verbs (ask_pipeworx, discover_tools, forget, recall, remember) while Zoho tools use a verb_noun pattern (zoho_create_record, zoho_get_record, etc.). Mixing conventions and applying a prefix to only some tools reduces coherence.
10 tools is a reasonable number for a server combining a CRM connector and memory utilities. The count is not excessive and each tool seems justified.
Zoho CRM tools cover create, read, list, and search but lack update and delete operations, which are common CRUD needs. The memory tools are basic (get/set/delete) but sufficient. The domain is somewhat underspecified, leaving gaps for typical CRM workflows.
Available Tools
10 tools

ask_pipeworx (Grade A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
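To make the call shape concrete, here is a minimal sketch of the JSON-RPC `tools/call` request an MCP client would send for this tool. The envelope follows the MCP specification; the request id and example question are arbitrary illustrations, not values this server requires.

```python
import json

def ask_pipeworx_request(question: str, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 tools/call request for ask_pipeworx (illustrative sketch)."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "ask_pipeworx",
            # 'question' is the tool's only (required) parameter
            "arguments": {"question": question},
        },
    }
    return json.dumps(payload)

req = ask_pipeworx_request("What is the US trade deficit with China?")
```

The actual transport (Streamable HTTP framing, authentication headers) is handled by the client library and is outside this sketch.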
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Describes the internal behavior: picks the right tool and fills arguments. With no annotations provided, this is helpful context. However, it doesn't disclose potential latency, error handling, or limitations (e.g., if it cannot find a suitable tool).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences efficiently convey the tool's role, operation, and examples. Every sentence adds value, with examples front-loaded after the core description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with full schema coverage and no output schema, the description is complete. It explains the query mechanism, the AI routing behavior, and provides illustrative examples.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already covers the single parameter (question) with 100% description coverage. Description adds value by explaining how the parameter is used ('describe what you need') and providing example values, which aids understanding beyond the schema's generic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool's purpose: answering plain English questions by selecting the best data source automatically. It distinguishes itself from sibling tools by acting as an intelligent router, unlike the specific Zoho or memory tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly advises when to use: when you have a natural language request and don't want to browse tools or learn schemas. Provides three concrete examples covering different domains (trade, adverse events, SEC filing).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (max 50) | 20 |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Description states it searches and returns relevant tools, but does not disclose behavior like whether it modifies state, has rate limits, or requires authentication. However, for a search tool, this is typical and the description is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each adding distinct value: first states action and result, second states return format, third states when to use. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool is a search/discovery tool with simple input and no output schema. Description is sufficient for agent to understand purpose and when to use. Could mention that it doesn't modify state, but given the nature, it's fine.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (both parameters have descriptions). The description adds context that query is a natural language description, but this is already in the schema's parameter description. The description does not add new meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Search' and resource 'tool catalog' with clear intent: 'by describing what you need'. Explicitly states return value 'most relevant tools with names and descriptions' and distinguishes from siblings by being the discovery tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' Provides clear when-to-use context and implies it's the entry point before using other tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade A)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description accurately states 'Delete', which implies destructive behavior. However, no annotations are provided, and the description does not add further behavioral context such as confirmation, reversibility, or side effects. It is adequate but minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that fully conveys the purpose. Every word is necessary, and there is no extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 required parameter, no output schema, no nested objects), the description is sufficient. However, it could be improved by noting that the deletion is permanent or by referencing the 'recall' tool for verification.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%: the parameter 'key' is described as 'Memory key to delete' in the schema. The tool description adds no further meaning beyond what the schema already provides, so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Delete') and resource ('stored memory by key'), clearly stating the action and scope. It distinguishes itself from sibling tools like 'recall' (retrieve) and 'remember' (store), which handle different operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description is clear about what it does (delete by key) but does not explicitly state when to use it versus alternatives. The sibling tools imply context (e.g., 'remember' for storing, 'recall' for retrieving), but no direct guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must convey behavioral traits. It states that memories are saved 'in the session or in previous sessions', implying persistence, but does not clarify scope (e.g., session-level vs cross-session, or any side effects). It does not disclose if retrieval is read-only or has any state changes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, concise and front-loaded with the core action. Every sentence adds value: first explains the action, second provides context for when to use it.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity (one optional parameter, no output schema), the description adequately covers the tool's purpose and usage. However, it does not mention what is returned (e.g., the memory value or a list of keys), which might require the agent to infer from sibling tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with one parameter described in the schema. The description spells out that omitting the key lists all memories, reinforcing the schema's 'omit to list all keys' hint. However, the description could further clarify the behavior when the key does not exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Retrieve' and the resource 'stored memory', with explicit distinction between retrieving a specific key and listing all memories. It also clarifies the tool's use for retrieving context saved earlier, differentiating it from sibling tools like 'remember' and 'forget'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use the tool ('retrieve context you saved earlier') and when to omit the key ('omit key to list all stored memories'). It also implies when not to use it (e.g., for storing, use 'remember'). This provides clear guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses persistence duration (24 hours for anonymous, persistent for authenticated), which is critical behavioral context beyond the schema. No contradictions found.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each serving a purpose: operation definition, use cases, and persistence behavior. No wasted words, front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple key-value store with two required string params and no output schema, the description covers purpose, usage, and behavioral nuances (persistence). It does not detail size limits or overwrite behavior, but is adequate for this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by explaining the purpose of the key-value pair ('intermediate findings, user preferences, context') and provides examples for key values. This extra context raises the score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool stores a key-value pair in session memory, specifying the verb 'store' and the resource 'session memory'. It distinguishes from siblings like 'recall' (retrieve) and 'forget' (delete) by focusing on write operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides usage context: saving intermediate findings, user preferences, or context across tool calls. It also notes persistence differences (authenticated vs anonymous), but does not explicitly state when not to use or compare to alternatives like 'discover_tools'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
zoho_create_record (Grade B)
Create a new record in a Zoho CRM module.
| Name | Required | Description | Default |
|---|---|---|---|
| data | Yes | Record data as key-value pairs (e.g., { "Last_Name": "Smith", "Email": "john@example.com" }) | |
| module | Yes | Module name (e.g., Leads, Contacts, Deals) | |
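For context, the underlying Zoho CRM v6 REST API wraps record payloads in a `data` array (POST `/crm/v6/{module}`). How this tool maps its `data` argument onto that envelope is an assumption; the sketch below shows the general shape:

```python
def create_record_body(module: str, data: dict) -> dict:
    """Sketch of the request body the Zoho CRM v6 API expects for record
    creation; the mapping from this tool's arguments is an assumption."""
    if not module or not data:
        raise ValueError("module and data are both required")
    # Zoho's insert endpoint takes a list of records under "data"
    return {"data": [data]}

body = create_record_body("Leads", {"Last_Name": "Smith", "Email": "john@example.com"})
```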
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description carries full burden. It correctly indicates a mutation operation ('Create'), but lacks details on side effects (e.g., validation, required fields, or conflict handling). The description is adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence that communicates the core purpose. It is front-loaded and avoids unnecessary detail, but could be slightly more informative without losing brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (2 parameters, no output schema, nested objects), the description is minimally complete. It covers the basic function but omits useful details like typical return values or error scenarios that would aid agent decision-making.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description does not add extra meaning beyond the schema; it only restates the module and data parameters without additional context. The schema itself provides clear descriptions for both parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Create' and the resource 'record in a Zoho CRM module'. It distinguishes from siblings like zoho_get_record and zoho_search_records by focusing on creation, though it does not explicitly contrast with other tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (creating new records) but does not provide explicit guidance on when not to use it or alternatives. Given siblings like zoho_get_record and zoho_search_records exist, stating exclusions would improve clarity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
zoho_get_record (Grade C)
Get a single record by ID from a Zoho CRM module.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Record ID | |
| module | Yes | Module name (e.g., Leads, Contacts, Deals) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description bears full burden. It does not disclose whether the tool is read-only, what happens if the ID is invalid, rate limits, or any side effects. The description only states the basic operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, directly to the point. Could be considered slightly under-specified but is appropriately concise given the simplicity of the operation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Simple tool with 2 parameters and no output schema, but the description lacks context about error handling, return value structure, or authentication. Siblings like zoho_create_record might need more detail, but this one is basic enough that a slightly fuller description would help.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description does not add any extra meaning beyond what the schema provides (id and module). The module description in schema lists examples, which is sufficient.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (Get), resource (record), and scope (single record by ID from a Zoho CRM module). It distinguishes from siblings like zoho_list_records and zoho_search_records, which handle multiple records.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives. For instance, zoho_search_records might be more appropriate when filtering criteria are needed. The description does not mention prerequisites like required permissions or module existence.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
zoho_list_modules (Grade A)
List all available modules in Zoho CRM.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It correctly indicates this is a read operation (listing), which is transparent. However, it does not disclose any potential side effects, permissions needed, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, complete sentence with no wasted words. It is front-loaded with the action and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has no input schema parameters and no output schema, and the description is sufficient for a simple list operation. However, it could mention that the output lists module names or IDs to aid downstream tool selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, so the description adds no param info. With schema description coverage at 100%, baseline is 3. Since no parameters exist, the description appropriately omits param details, earning a 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'list', the resource 'all available modules', and the system 'Zoho CRM'. It distinguishes itself from siblings which operate on records, not modules.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies that this tool is for discovering available modules before using record-related tools. However, it does not explicitly state when to use it versus other tools, nor does it mention any prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
zoho_list_records (Grade A)
List records from a Zoho CRM module (e.g., Leads, Contacts, Deals).
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number | 1 |
| fields | No | Comma-separated field names to return (e.g., "Last_Name,Email,Phone") | Common fields |
| module | Yes | Module name (e.g., Leads, Contacts, Deals, Accounts) | |
| per_page | No | Records per page (max 200) | 20 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It indicates the tool lists records but does not disclose pagination behavior, rate limits, or whether it returns all records by default. Moderate transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with action and examples. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, so description could explain return structure. However, with 4 params well-documented in schema, description is adequate but not complete for a list operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so description adds little beyond schema. It mentions module examples but does not elaborate on page, fields, or per_page parameters. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists records from a Zoho CRM module and gives examples (Leads, Contacts, Deals). It distinguishes itself from siblings like zoho_get_record (single record) and zoho_search_records (search) but could be more precise.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for listing records but does not explicitly state when to use this vs. alternatives like zoho_search_records. No guidance on when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
zoho_search_records (Grade B)
Search records in a Zoho CRM module using criteria.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number | 1 |
| module | Yes | Module name (e.g., Leads, Contacts, Deals) | |
| criteria | Yes | Search criteria (e.g., "(Last_Name:equals:Smith)") | |
| per_page | No | Records per page (max 200) | 20 |
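The single-clause criteria syntax matches the schema example, `(field:operator:value)`. Combining clauses with `and`/`or` follows Zoho's search API conventions but is an assumption here, since the tool description does not document it:

```python
def criterion(field: str, operator: str, value: str) -> str:
    """Format one Zoho search criterion, e.g. (Last_Name:equals:Smith)."""
    return f"({field}:{operator}:{value})"

def all_of(*clauses: str) -> str:
    """Join clauses with 'and' and wrap the result, per Zoho's grouping style
    (an assumption; verify against the Zoho search API before relying on it)."""
    return "(" + "and".join(clauses) + ")"

q = all_of(criterion("Last_Name", "equals", "Smith"),
           criterion("Email", "equals", "john@example.com"))
```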
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so description carries full burden. It mentions 'search' but lacks details like pagination behavior (implied via page/per_page params) or performance implications. The input schema covers parameter descriptions, so score is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, efficient and front-loaded. However, it could be slightly more informative without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and simple structure, description covers the core function. Lacks details on criteria format or return type, but schema provides some context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. Description adds no extra parameter context beyond what the schema provides, but the schema is sufficiently descriptive.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches records in a Zoho CRM module using criteria, distinguishing it from siblings like zoho_get_record (single record) and zoho_list_records (all records).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for search with criteria, but does not explicitly contrast with list_records or get_record, nor provide when-not-to-use scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
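As a quick sanity check before publishing, the sketch below verifies that a glama.json document parses and that a maintainer email matches your account email. The exact validation Glama performs server-side is an assumption; this only checks the shape shown above:

```python
import json

def has_matching_maintainer(text: str, account_email: str) -> bool:
    """Check that a /.well-known/glama.json document parses and lists
    account_email among its maintainers (shape per the example above)."""
    doc = json.loads(text)
    maintainers = doc.get("maintainers", [])
    return any(m.get("email") == account_email for m in maintainers)

sample = ('{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
          ' "maintainers": [{"email": "you@example.com"}]}')
```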
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!