0nMCP — Universal AI API Orchestrator
Server Details
1,589 tools, 102 services. Stripe, Slack, CRM, GitHub, Supabase + 97 more. One install.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: 0nork/0nMCP
- GitHub Stars: 5
- Server Listing: 0nMCP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
16 tools

add_contact_note (Grade: C)
Add a note to a CRM contact
| Name | Required | Description | Default |
|---|---|---|---|
| body | Yes | | |
| contactId | Yes | | |
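Because neither parameter carries a schema description, a caller must construct the request from the one-line description alone. A minimal sketch of a JSON-RPC `tools/call` request, where the contact ID value and the freeform note body are illustrative assumptions (the server documents neither format):

```python
import json

# Hypothetical tools/call request for add_contact_note. The contactId
# value and the plain-text body are assumptions; the schema documents
# neither format nor any length limits.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "add_contact_note",
        "arguments": {
            "contactId": "ocQHyuzHvysMo5N5VsXc",  # assumed opaque string ID
            "body": "Called customer; renewal confirmed for Q3.",
        },
    },
}
print(json.dumps(request, indent=2))
```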
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but fails to specify critical mutation traits: whether 'add' creates a new persistent note object or appends to existing text, whether the operation is idempotent, what occurs if the contactId does not exist, or whether there are content limits/rate limiting for the body parameter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single front-loaded sentence of seven words with no redundancy or filler. However, the extreme brevity verges on underspecification given the lack of supporting schema documentation, suggesting conciseness may have been achieved at the expense of necessary detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a mutation tool with zero annotation coverage, zero schema descriptions, and no output schema, the description should comprehensively document parameters, behavior, and return expectations. The current seven-word description is inadequate for the information gap it must fill.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, leaving both 'contactId' and 'body' undocumented in structured metadata. The description mentions 'contact' and 'note' which implicitly map to these parameters, but provides no format guidance (e.g., UUID vs. integer for contactId, markdown support or character limits for body). Insufficient compensation for the schema deficit.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the core action ('Add') and target resource ('note to a CRM contact'), providing basic orientation. However, it lacks specificity regarding what constitutes a 'note' in this context (e.g., timestamped log entry, freeform text, structured comment) and does not differentiate from sibling tools like 'add_contact_tags' or 'update_contact' which might also modify contact records.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives (e.g., 'update_contact' for modifying contact fields vs. adding narrative notes). No prerequisites are mentioned, such as requiring an existing contactId from 'create_contact' or 'get_contact', and no exclusions or failure modes are documented.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
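The deficits identified above could be closed in the tool definition itself. A sketch of a fuller definition follows; every behavioral claim in it (append semantics, failure on unknown ID, the relationship to sibling tools) is hypothetical, since the server confirms none of them:

```python
# Hypothetical improved definition for add_contact_note. The behavioral
# details below (append semantics, failure on unknown contactId) are
# illustrative assumptions, not confirmed by the server.
improved_tool = {
    "name": "add_contact_note",
    "description": (
        "Append a new timestamped note to an existing CRM contact. "
        "Creates a new note object on each call (not idempotent). "
        "Fails if contactId does not reference an existing contact; "
        "obtain the ID first via search_contacts or get_contact. "
        "To change contact fields rather than add narrative text, "
        "use update_contact."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "contactId": {
                "type": "string",
                "description": "ID of an existing contact, e.g. from search_contacts.",
            },
            "body": {
                "type": "string",
                "description": "Plain-text note content.",
            },
        },
        "required": ["contactId", "body"],
    },
}
```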
add_contact_tags (Grade: C)
Add tags to a CRM contact
| Name | Required | Description | Default |
|---|---|---|---|
| tags | Yes | | |
| contactId | Yes | | |
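Because the tool does not say whether duplicates are allowed or whether 'Add' appends or overwrites, a defensive caller can deduplicate locally before sending. A hypothetical sketch (case-insensitive tag matching is an assumption):

```python
def dedupe_tags(tags):
    """Order-preserving deduplication of tag strings.

    Case-insensitive comparison is an assumption; the server does not
    document tag uniqueness or normalization rules.
    """
    seen = set()
    out = []
    for t in tags:
        key = t.strip().lower()
        if key and key not in seen:
            seen.add(key)
            out.append(t.strip())
    return out

# Example arguments for add_contact_tags (contactId format assumed).
args = {"contactId": "abc123", "tags": dedupe_tags(["VIP", "vip ", "lead"])}
```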
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Zero annotations provided, so description carries full burden, yet it fails to disclose critical mutation behavior: does 'Add' append to existing tags or overwrite? Are duplicates allowed? What error occurs if contactId is invalid? Missing safety and behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely brief (6 words) with no filler. However, brevity crosses into under-specification given the lack of schema documentation and annotations. Structure is acceptable though minimal.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Incomplete for a mutation tool: no output schema explanation, no error scenarios, no confirmation of what 'adding' means (append vs. replace). With 0% schema coverage and no annotations, description should provide significantly more context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage (neither contactId nor tags have descriptions). Description mentions 'tags' and 'CRM contact' but adds no semantic detail: no ID format hints, no tag constraints (max length, uniqueness), no example values. Merely repeats parameter names.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States the verb (Add), resource (tags), and target (CRM contact), but fails to distinguish from sibling tool `update_contact` which could presumably also modify tags. Adequate but not specific about unique purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus `update_contact` or other mutation tools. No mention of prerequisites (e.g., contact must exist) or idempotency concerns.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_contact (Grade: C)
Create a new CRM contact
| Name | Required | Description | Default |
|---|---|---|---|
| tags | No | | |
| email | Yes | | |
| phone | No | | |
| lastName | No | | |
| firstName | No | | |
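With only email required, a caller would typically omit unset optional fields rather than send explicit nulls (whether the server tolerates nulls is undocumented). A sketch of argument assembly; all field formats are assumptions, since the schema describes none of the five parameters:

```python
def build_create_contact_args(email, firstName=None, lastName=None,
                              phone=None, tags=None):
    """Assemble create_contact arguments, omitting unset optionals.

    Only email is required per the schema. Because null handling is
    undocumented, unset fields are dropped entirely.
    """
    args = {"email": email}
    for key, value in (("firstName", firstName), ("lastName", lastName),
                       ("phone", phone), ("tags", tags)):
        if value is not None:
            args[key] = value
    return args

# Example: the required field plus a first name only.
print(build_create_contact_args("ada@example.com", firstName="Ada"))
```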
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It only states the action ('Create') without revealing side effects, idempotency behavior (what happens if the contact exists), required permissions, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no wasted words. It is appropriately front-loaded with the action verb. However, given the tool's complexity (5 parameters, 0% schema coverage), it is underspecified rather than elegantly concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with five parameters, zero schema descriptions, no annotations, and no output schema, the description is incomplete. It lacks critical context regarding required fields, duplicate handling, and differentiation from related tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description fails to compensate by explaining any of the five parameters (email, firstName, lastName, phone, tags). Critically, it omits that 'email' is required, which is essential for correct invocation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a clear verb ('Create') and resource ('CRM contact'), specifying the core function. However, it does not explicitly differentiate from siblings like 'update_contact' or 'add_contact_note' (e.g., by stating this is for new records only).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance is provided on when to use this tool versus alternatives like 'update_contact' (for existing records) or handling duplicate creation scenarios. The word 'new' implies usage for fresh records but lacks explicit when-not-to-use constraints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_opportunity (Grade: C)
Create a new opportunity/deal
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | | |
| stageId | Yes | | |
| contactId | No | | |
| pipelineId | Yes | | |
| monetaryValue | No | | |
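Since pipelineId and stageId presumably must reference existing records, get_pipelines is the likely prerequisite call. A sketch of that assumed two-step flow, where `call_tool` is a stand-in for an MCP client invocation and the pipeline payload shape is an assumption:

```python
def plan_create_opportunity(call_tool, name, monetary_value=None):
    """Assumed flow: fetch pipelines first, then create the opportunity.

    call_tool stands in for an MCP client invocation. The pipeline payload
    shape ({"id": ..., "stages": [{"id": ...}]}) is a guess; the server
    publishes no output schema for get_pipelines.
    """
    pipelines = call_tool("get_pipelines", {})
    first = pipelines[0]  # naive choice: first pipeline, first stage
    args = {
        "name": name,
        "pipelineId": first["id"],
        "stageId": first["stages"][0]["id"],
    }
    if monetary_value is not None:
        args["monetaryValue"] = monetary_value
    return call_tool("create_opportunity", args)
```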
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of disclosing behavioral traits. 'Create' implies a write operation, but the description omits idempotency, validation rules (e.g., whether pipelineId/stageId must reference existing records), or failure modes. It does not disclose if the operation is atomic or if partial creation is possible.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
At four words, the description is concise but falls into under-specification rather than efficient information density. It lacks front-loaded value for a tool with five parameters and zero schema documentation; a slightly longer description explaining key parameters or prerequisites would be appropriate.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a creation tool with five undocumented parameters (0% coverage), no annotations, and no output schema, the description is incomplete. It lacks critical context such as required prerequisites (valid pipeline/stage IDs), parameter semantics, and success/failure behaviors necessary for safe invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description must compensate by explaining parameters, but it fails to do so. The synonym 'deal' hints at the domain but provides no specifics about the required 'name', 'pipelineId', 'stageId' or optional 'contactId' and 'monetaryValue' parameters, their formats, or relationships.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a specific verb ('Create') and resource ('opportunity/deal'), clarifying this is a sales/CRM object creation tool. However, it lacks differentiation from siblings like 'create_contact' or prerequisite relationships with 'get_pipelines' (required to obtain the pipelineId parameter).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'search_opportunities' (to check for existing deals) or 'update_contact'. It fails to mention that 'get_pipelines' is likely a prerequisite to obtain valid pipelineId/stageId values.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_appointments (Grade: C)
Get appointments in a date range
| Name | Required | Description | Default |
|---|---|---|---|
| endTime | No | | |
| startTime | No | | |
| calendarId | No | | |
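The schema gives no format for startTime/endTime; ISO 8601 UTC strings are a common convention and are assumed in this sketch. calendarId is omitted because the behavior when it is absent (all calendars?) is undocumented:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical one-week window. ISO 8601 timestamps are an assumption;
# the schema documents no format for startTime/endTime.
start = datetime(2024, 6, 1, tzinfo=timezone.utc)
args = {
    "startTime": start.isoformat(),
    "endTime": (start + timedelta(days=7)).isoformat(),
    # calendarId omitted: default scope is undocumented.
}
print(args)
```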
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Fails to disclose critical behaviors: read-only nature, pagination limits, timezone handling, whether cancelled appointments are included, or what occurs when optional calendarId is omitted.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence is efficient with no redundancy, but length is below adequate given zero schema documentation. Front-loaded but under-specified rather than optimally concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Insufficient for a 3-parameter tool with 0% schema coverage and no annotations/output schema. Missing parameter details, return structure, error conditions, and filtering semantics needed for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage. Description mentions 'date range' conceptually (covering startTime/endTime) but provides no format guidance (ISO 8601? timestamps?) and completely omits explanation of calendarId parameter purpose or behavior.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (Get) and resource (appointments) with scope constraint (date range). Clear basic purpose, though could better clarify relationship to sibling get_calendars (appointments vs calendars).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this versus search_contacts or get_calendars, nor any mention of prerequisites like timezone requirements or calendar selection behavior.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_calendars (Grade: B)
List all CRM calendars
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. While 'List all' implies read-only and unfiltered results, it omits pagination behavior, return structure, or what 'CRM calendars' specifically contains (IDs, names, colors?).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four words, front-loaded with action and target. Extremely efficient though arguably underspecified given lack of supporting metadata (annotations/output schema).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, description covers the basic operation but lacks detail on return values or calendar model. Adequate for simple list endpoint but minimal.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters per schema. Per rubric, 0 params establishes baseline 4. Description mentions 'all' which confirms no filtering parameters exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'List' and resource 'CRM calendars', but fails to distinguish from sibling 'get_appointments' (calendars vs appointments/events) or clarify if this returns calendar metadata vs events.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this vs alternatives like 'get_appointments' or how it relates to the location/pipelines workflow. No prerequisites or timing mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_contact (Grade: C)
Get a CRM contact by ID
| Name | Required | Description | Default |
|---|---|---|---|
| contactId | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, yet the description fails to disclose what happens if the contact ID does not exist, what fields are returned, or any permission requirements. It implies a read operation but lacks explicit safety or behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise at six words with no filler. However, the brevity borders on under-specification for a tool with no output schema or annotations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the single simple parameter and lack of output schema or annotations, the description meets minimal viability but fails to complete the picture regarding error conditions, return value structure, or ID format expectations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage for the contactId parameter. The description compensates minimally by stating the operation is 'by ID', confirming the parameter's purpose, but provides no format constraints, examples, or validation rules.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States a specific verb (Get), resource (CRM contact), and access pattern (by ID). The 'by ID' phrasing implicitly distinguishes this from the sibling 'search_contacts' tool, though it does not explicitly name the alternative.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus 'search_contacts' or other alternatives, nor does it mention prerequisites such as needing to obtain the ID first.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_custom_fields (Grade: B)
List all custom fields
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, yet description fails to disclose behavioral traits like pagination limits, response format, or whether results are cached. The phrase 'List all' implies a complete dump but doesn't confirm if filtering or pagination parameters exist (they don't, but this isn't clarified).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient four-word description with zero redundancy. Front-loaded with action verb immediately followed by target resource. No filler words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter read operation without output schema, the description minimally suffices to identify the returned resource. However, given the CRM context with multiple entities (contacts, opportunities, appointments), failing to specify which entity's custom fields are returned leaves a significant ambiguity gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present; per rule baseline is 4. The description doesn't need to compensate for missing schema documentation since input schema coverage is 100% (empty schema).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Provides specific verb 'List' and resource 'custom fields', but lacks scope clarification (e.g., contact vs. opportunity fields) that would distinguish it from sibling getters in this CRM context. Adequate but missing the specificity seen in high-scoring examples.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Offers no guidance on when to use this tool versus alternatives, prerequisites, or whether it should be called before creating/updating contacts. No exclusions or context provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_location (Grade: B)
Get CRM location details
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full disclosure burden. It fails to mention that this is a read-only operation, what specific data is returned, whether it returns single or multiple locations, or any permission requirements. The behavioral traits are left entirely implicit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely brief at four words with no redundant content. While efficient, it borders on under-specification—the brevity leaves no room for the contextual details needed given the lack of annotations or output schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter tool without output schema or annotations, the description minimally identifies the operation. However, it lacks explanation of what constitutes a 'location' in this CRM context and what data structure is returned, leaving gaps in contextual understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters and 100% schema coverage (empty schema), the baseline applies. The description does not need to compensate for missing schema documentation since there are no parameters to describe.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the specific action ('Get') and resource ('CRM location details'), clearly indicating it retrieves location data. However, it doesn't clarify what 'location' refers to (business address, office location, etc.) or distinguish scope from sibling tools like get_contact.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives (e.g., when to retrieve location details vs contact details), nor does it mention prerequisites or constraints for invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_pipelines (Grade: B)
List all CRM pipelines and stages
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, description carries full burden. States 'List all' indicating scope, but fails to disclose read-only/safe nature, expected return structure, or whether results are paginated. Minimal behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at 6 words. Front-loaded with verb first. No redundancy, though brevity sacrifices helpful context about return values or pipeline/stage relationships that would benefit agent selection.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a zero-parameter tool, but lacks expected return value documentation given no output schema exists. Should explain what constitutes a pipeline vs stage or how results map to opportunity creation workflows.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Baseline 4 per instructions (0 parameters). Description appropriately does not invent parameters that don't exist in the empty input schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'List' and specific resources 'CRM pipelines and stages'. Distinguishes from siblings that manage contacts, opportunities, or appointments, though it could explicitly clarify relationship to opportunity-related tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when/when-not guidance provided, but usage is implied by the unique resource type (pipelines) among siblings. Lacks mention of typical use case (e.g., retrieving pipeline IDs for create_opportunity).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_contacts (Grade: C)
Search CRM contacts by name, email, or phone
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | No | | |
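The review below notes that nothing distinguishes this tool from get_contact. A defensive caller can encode the obvious split itself: get_contact when an ID is already known, search_contacts otherwise. A hypothetical sketch, with `call_tool` standing in for an MCP client invocation:

```python
def lookup_contact(call_tool, contact_id=None, query=None, limit=10):
    """Route to get_contact for a known ID, else search_contacts.

    call_tool stands in for an MCP client invocation. Matching semantics
    of search_contacts (partial vs. exact, case sensitivity) and the
    default/maximum for limit are undocumented, so search results should
    be treated as candidates, not exact matches.
    """
    if contact_id is not None:
        return call_tool("get_contact", {"contactId": contact_id})
    return call_tool("search_contacts", {"query": query, "limit": limit})
```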
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, yet description omits critical behavioral details: whether results are paginated, what the default limit behavior is (given limit parameter exists), case sensitivity, or empty result handling. 'Search' implies read-only safety but doesn't confirm.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise (six words) and front-loaded with action. Efficient, but overly terse given the complete lack of schema documentation and behavioral transparency: it sacrifices necessary detail for brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Coverage is inadequate given 0% schema description coverage. With no output schema and no annotations, the description should explain the parameters (especially 'limit') and return behavior, but it leaves significant gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% (no parameter descriptions). Description partially compensates by indicating 'query' searches across name, email, or phone fields, but leaves 'limit' completely undocumented with no indication of default values or maximum allowed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Search') and resource ('CRM contacts') with specific searchable fields (name, email, phone). Distinguishes from sibling 'search_opportunities' and 'search_conversations' by specifying CRM contacts, though could better differentiate from 'get_contact' (likely single-record retrieval vs. search).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this versus 'get_contact' or other retrieval methods. No information on search syntax (partial vs. exact matching), case sensitivity, or required query format.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
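The gaps called out above could be closed in the tool definition itself. Below is a hypothetical sketch of a better-documented search_contacts in the MCP tool wire format, with parameter descriptions and behavioral annotations — the matching behavior, defaults, and limits shown are illustrative assumptions, not this server's confirmed behavior:

```json
{
  "name": "search_contacts",
  "description": "Search CRM contacts by partial, case-insensitive match on name, email, or phone. Read-only; returns up to 'limit' matches. Use get_contact to fetch a single known record by ID.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": {
        "type": "string",
        "description": "Free-text search term matched against name, email, and phone fields."
      },
      "limit": {
        "type": "integer",
        "description": "Maximum number of results to return.",
        "default": 20,
        "minimum": 1,
        "maximum": 100
      }
    }
  },
  "annotations": {
    "readOnlyHint": true,
    "openWorldHint": true
  }
}
```

The `annotations` object uses the hint fields defined in the MCP specification (`readOnlyHint`, `openWorldHint`); a definition like this would let an agent confirm read-only safety without inferring it from the verb "Search".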
search_conversations (grade D)
Search CRM conversations
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | ||
| contactId | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden of behavioral disclosure. It fails to explain what data is returned (conversation objects? message threads?), pagination behavior, available filters beyond contactId, rate limits, or whether this searches across all channels (email, SMS, chat) or specific ones.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While the three-word description is not verbose, it represents under-specification rather than efficient information density. For a tool with two undocumented parameters and no output schema, this length is inadequate and leaves critical gaps.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of 2 undocumented parameters, no annotations, no output schema, and ambiguous resource scope, the description is completely inadequate. It provides no information about search capabilities, result format, or filtering behavior necessary for an agent to use the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% with no descriptions on either parameter. The description fails to compensate by explaining that contactId filters to a specific contact's conversations, or that limit controls result pagination. The semantics of both parameters remain undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the verb (search) and resource (CRM conversations), but 'CRM conversations' is vague and undefined. It fails to distinguish what constitutes a conversation versus emails, calls, or messages handled by sibling tools like send_email or get_appointments. It barely exceeds a tautology by adding the domain 'CRM'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No usage guidance is provided. There is no indication of when to use this versus search_contacts (which might also retrieve communication history), whether contactId is required or optional for searching, or what search syntax is supported.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_opportunities (grade C)
Search CRM opportunities/deals
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | ||
| pipelineId | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. 'Search' implies read-only but confirms no behavioral details (fuzzy vs exact matching, pagination, performance limits).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely terse at three words, but not verbose; front-loaded though underspecified. The density is appropriate, but the content is insufficient for the information gaps present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate given zero schema coverage and no output schema/annotations. For a 2-parameter search tool, the description must explain parameters and search behavior, neither of which is present.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% with zero parameter descriptions. Description fails to compensate by explaining 'query' (full-text? ID?) or 'pipelineId' (filter? required context?), leaving both parameters undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States basic verb+resource (Search CRM opportunities) but lacks specificity about search scope or fields. The '/deals' synonym adds slight clarity but remains minimal.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this versus create_opportunity or other sibling tools. No prerequisites or alternative suggestions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
send_email (grade C)
Send an email to a contact
| Name | Required | Description | Default |
|---|---|---|---|
| subject | Yes | ||
| htmlBody | Yes | ||
| contactId | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure, yet reveals nothing about success/failure handling, email validation rules, queueing behavior, or whether this creates conversation records. It does not disclose if the operation is synchronous or asynchronous.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The five-word description is efficiently front-loaded with no redundant text, though it arguably sacrifices necessary detail for brevity. Every word earns its place, but the extreme conciseness contributes to underspecification.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given three required parameters with zero schema documentation, no annotations, and no output schema, the description inadequately covers a mutation operation that likely involves external service integration. It should address error scenarios, contact resolution logic, and delivery confirmation behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% and the description adds no compensatory detail. It does not clarify whether 'contactId' expects an email address or internal database ID, nor does it explain HTML body constraints, supported tags, or size limits critical for email composition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the core action (send) and resource (email) with recipient context (contact). However, it fails to differentiate from the sibling 'send_message' tool, leaving ambiguity about whether this is for email specifically vs. other messaging channels.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided regarding when to use this tool versus alternatives, prerequisites such as contact existence verification, or expected workflow integration. The agent must infer usage solely from the tool name and parameter names.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
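For a mutating tool like this, the missing disclosures could live in the description and in MCP tool annotations. A hypothetical sketch follows — the asynchronous queueing, conversation-record, and failure behavior described are assumptions made for illustration, not confirmed behavior of this server:

```json
{
  "name": "send_email",
  "description": "Send an HTML email to an existing CRM contact. Queues the message for asynchronous delivery and records it on the contact's conversation history. Fails if contactId does not resolve to an existing contact. For SMS or WhatsApp, use send_message instead.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "contactId": { "type": "string", "description": "Internal CRM contact ID (not an email address)." },
      "subject": { "type": "string", "description": "Email subject line." },
      "htmlBody": { "type": "string", "description": "HTML message body." }
    },
    "required": ["contactId", "subject", "htmlBody"]
  },
  "annotations": {
    "readOnlyHint": false,
    "destructiveHint": false,
    "idempotentHint": false,
    "openWorldHint": true
  }
}
```

A description in this shape also resolves the sibling-tool ambiguity by stating explicitly when to reach for send_message instead.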
send_message (grade C)
Send a message (Email, SMS, WhatsApp)
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | ||
| message | Yes | ||
| subject | No | ||
| contactId | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but provides none. It omits delivery guarantees, failure modes, rate limiting, whether sending is synchronous, and what records (if any) are created in the system.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single short, front-loaded sentence, but it suffers from under-specification rather than efficient conciseness. It wastes no words, yet the few words present omit critical usage context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 4-parameter mutation tool with 0% schema description coverage and no output schema, the description is insufficient. It doesn't explain the optional 'subject' parameter's conditional usage, authentication requirements, or success/failure indicators.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, requiring the description to compensate, but it fails to explain parameter semantics. It doesn't clarify that 'subject' likely only applies to Email, what format 'contactId' expects, or constraints on 'message' length/content. It only implies the 'type' enum values via the parenthetical channel list.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the verb (Send) and resource (message) and lists the supported channels (Email, SMS, WhatsApp). However, it fails to distinguish from the sibling 'send_email' tool, creating ambiguity about which tool to use for email operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus the 'send_email' sibling, nor any mention of prerequisites (e.g., whether contactId must exist) or recommended use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
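The implied channel enum and the conditional 'subject' parameter could be made explicit in the schema rather than left to a parenthetical. A hypothetical sketch — the enum casing and the email-only subject rule are assumptions, not verified server behavior:

```json
{
  "name": "send_message",
  "description": "Send a message to an existing CRM contact over a single channel. 'subject' applies only when type is Email and is ignored for SMS and WhatsApp.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "contactId": { "type": "string", "description": "Internal CRM contact ID." },
      "type": { "type": "string", "enum": ["Email", "SMS", "WhatsApp"], "description": "Delivery channel." },
      "message": { "type": "string", "description": "Message body." },
      "subject": { "type": "string", "description": "Subject line; used only when type is Email." }
    },
    "required": ["contactId", "type", "message"]
  }
}
```

With the enum declared in the schema, an agent no longer has to infer valid 'type' values from the description's channel list.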
update_contact (grade C)
Update a CRM contact
| Name | Required | Description | Default |
|---|---|---|---|
| No | |||
| phone | No | ||
| lastName | No | ||
| contactId | Yes | ||
| firstName | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. While 'Update' implies mutation, the description fails to clarify whether omitted fields are preserved (PATCH) or cleared (PUT), what happens if the contactId is invalid, or any side effects like triggering workflows.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief at four words, but given the lack of schema documentation and annotations, this brevity represents a failure to communicate necessary context rather than efficient information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for the tool's complexity: five undocumented parameters, no annotations, and no output schema. The description should explain the partial update pattern (implied by optional fields) and identify which parameter is the resource identifier, but provides neither.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and five parameters, the description fails to compensate by explaining parameter semantics. It does not explain what 'contactId' represents (UUID vs. external ID), nor does it clarify that optional fields enable partial updates. Only the minimal implication that a contact must exist provides slight value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a clear verb ('Update') and resource ('CRM contact'), avoiding tautology. However, it fails to distinguish from sibling tools like 'create_contact' or 'get_contact', and lacks specificity about what 'CRM' refers to in this context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus 'create_contact' or how to handle partial updates versus full replacements. No mention of prerequisites (e.g., that the contactId must exist) or error handling.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
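The PATCH-versus-PUT ambiguity flagged above is exactly what a one-line description addition resolves. A hypothetical sketch, assuming partial-update semantics (unverified for this server); the annotation values are likewise illustrative:

```json
{
  "name": "update_contact",
  "description": "Partially update an existing CRM contact: only the fields supplied are changed, and omitted fields keep their current values. Returns an error if contactId does not exist. Use create_contact for new records.",
  "annotations": {
    "readOnlyHint": false,
    "destructiveHint": true,
    "idempotentHint": true
  }
}
```

Declaring `idempotentHint: true` would tell an agent that retrying the same update after a timeout is safe — precisely the kind of behavioral disclosure the assessment finds missing.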
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.