xmagnet
Server Details
xmagnet: an AI-powered B2B CRM for Claude. 35 tools turn natural-language prompts into real CRM actions: prospect, enrich, score leads, manage deals, scan buying intent, run email campaigns and sequences, build forms and landing pages, refine ICP, and analyze performance, all directly inside Claude.
ONE-CLICK INSTALL: https://api.xmagnet.ai/claude
The install page guides Claude users through three steps in under a minute: open Claude Connectors, paste the connector name and server URL, and sign in. A reviewer workspace is auto-provisioned on first sign-in with sample contacts, deals, campaigns, and ICP suggestions, so every tool works end-to-end with zero setup. No 2FA. No paid plan required. The free tier exposes all 35 tools.
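Outside Claude, any MCP client can reach the server over its Streamable HTTP transport. Below is a minimal, hypothetical sketch using the official TypeScript SDK; the endpoint URL is a placeholder (this listing does not display the server URL), and the client name and version are invented.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: substitute the server URL shown on the install page.
const SERVER_URL = new URL("https://example.invalid/mcp");

const transport = new StreamableHTTPClientTransport(SERVER_URL);
const client = new Client({ name: "xmagnet-demo", version: "0.1.0" });

await client.connect(transport); // auth specifics are handled by the host or gateway
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name)); // should include the CRM tools listed below
```

Later sketches on this page reuse this `client`.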
What you can do:
- Prospecting: search_contacts, search_companies, search_investors, find_contacts_at_companies, enrich_contact, validate_email, find_competitors, company_intelligence
- Pipeline: get_deals_pipeline, scan_deal_intent, get_ghost_pipeline, create_deal
- Campaigns & sequences: create_campaign, generate_campaign_content, get_campaign_stats, get_bounce_stats, get_unsub_stats, create_sequence_draft, list_sequences
- Top of funnel: suggest_icp, get_icp, create_form, list_forms, create_landing_page, list_landing_pages, show_suggestions
- Operations: analyze_contacts, get_contact_details, update_contact, save_contacts_to_crm, export_contacts, get_dashboard_stats, get_credit_balance
Example prompts to try:
- "Find C-suite contacts at fintech companies that raised Series A in the last 6 months."
- "Scan my open deals for buying intent and prioritize follow-ups."
- "Generate a re-engagement campaign for contacts who opened my last newsletter but didn't reply."
- "Show me my deals pipeline by stage with weighted value and win rate."
- "Generate a landing page for my Q2 webinar with a registration form."
Built for founders, SDRs, RevOps, and growth teams who want their CRM to take action, not just store records.
Install: https://api.xmagnet.ai/claude · Site: https://xmagnet.ai · Privacy: https://xmagnet.ai/privacy-policy · Terms: https://xmagnet.ai/terms-of-service · Support: ashish.sinha@xmagnet.ai
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 41 of 41 tools scored. Lowest: 2.8/5.
Most tools have distinct purposes, but there is overlap between search_contacts and find_contacts_at_companies, as both can find people by role and company criteria. Additionally, analyze_contacts and run_ai_report both offer industry breakdowns, creating potential confusion for an agent.
All tool names follow a consistent verb_noun pattern using snake_case, which is predictable and easy to parse. The verbs are varied but appropriate for the action.
At 41 tools, the server exposes an unusually large surface for an MCP server. While the domain (CRM/email marketing) is broad, this count makes tool selection more complex and suggests scope creep.
The tool set covers a wide range of CRM operations: contact management, email campaigns, pipeline, analytics, enrichment, and forms. Minor gaps exist, such as missing delete operations for contacts and campaigns.
Available Tools
42 tools

add_contacts (Grade A)
Add multiple contacts to the CRM in one call. Use when user provides a list of contacts to add/import. Each contact needs at least an email. Supports up to 200 contacts per call. Automatically deduplicates: existing contacts are updated, new ones are created.
| Name | Required | Description | Default |
|---|---|---|---|
| contacts | Yes | List of contacts to add | |
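Because the tool upserts by email, repeating a call is safe. A hedged sketch of the arguments, reusing the `client` from the connection sketch above; contact field names are borrowed from the create_contact schema further down, and all values are invented.

```typescript
// Illustrative batch payload: up to 200 contacts per call; rows with an
// existing email are updated, the rest are created (per the description).
const addContactsArgs = {
  contacts: [
    { email: "jane.doe@example.com", full_name: "Jane Doe", company: "Acme" },
    { email: "raj@example.io", job_title: "CTO", lifecycle_stage: "lead" },
  ],
};

const result = await client.callTool({
  name: "add_contacts",
  arguments: addContactsArgs,
});
```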
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond annotations (readOnlyHint=false, destructiveHint=false), the description reveals that the tool performs an upsert operation: 'existing contacts are updated, new ones are created.' It also discloses a limit of 200 contacts and automatic deduplication, adding valuable behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with three short sentences. Each sentence provides essential information without redundancy. It is well-structured and front-loaded with the primary purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of an output schema, the description does not explain return values or error handling. However, it covers key operational details (dedup, limit) and is sufficient for an agent to use the tool correctly. It could mention expected output or error scenarios, but it's mostly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description restates that each contact needs at least an email (already in the schema) and adds the batch limit of 200 contacts, which is not in the schema. Since schema coverage is 100% and the description provides additional practical info, it enhances understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Add multiple contacts to the CRM in one call.' It specifies the resource (contacts) and action (add/import), and distinguishes from siblings like `create_contact` (single) by emphasizing batch processing and deduplication behavior.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'Use when user provides a list of contacts to add/import.' It also sets expectations with a limit of 200 contacts and deduplication behavior. However, it does not explicitly mention when not to use or compare with alternatives like `save_contacts_to_crm`.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
analyze_contacts (Grade A, read-only)
AI analysis of a contact list: breakdown by industry, title, company size, location, and engagement level.
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | Which contacts to analyze, e.g. 'my leads', 'contacts added this month' | |
| campaign_id | No | Analyze contacts in a specific campaign | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and non-destructive behavior. The description adds that the tool performs AI-driven breakdowns, which is consistent and adds context about the analytical nature. However, it does not disclose edge cases (e.g., empty results, handling of inaccessible contacts).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, concise sentence that front-loads the purpose and lists key breakdown dimensions. No extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, the description hints at return content (breakdown by categories) but omits structural details (e.g., whether results are grouped, raw counts, or percentages). It also doesn't clarify the default scope (e.g., all contacts vs. user's contacts). Moderate completeness given the simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, so parameters are well-documented. The description lists breakdown categories but does not explicitly link them to the query or campaign_id parameters. It adds minimal semantic value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('analysis') and the resource ('contact list'), and lists specific breakdown dimensions (industry, title, company size, location, engagement level). It distinguishes from sibling tools like search_contacts and get_contact_details by focusing on aggregate analysis rather than individual records.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for high-level analysis of contacts, but does not explicitly specify when to use this tool versus alternatives (e.g., search_contacts for individual details, get_campaign_stats for campaign-level stats). No exclusions or prerequisites are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
apply_segment_to_campaign (Grade A)
Apply a saved segment to an existing campaign draft, adding all segment contacts as recipients. Use after list_segments when user picks a segment to add to a draft.
| Name | Required | Description | Default |
|---|---|---|---|
| segment_id | Yes | Segment ID from list_segments | |
| campaign_id | Yes | Campaign ID to add recipients to | |
| campaign_type | Yes | Campaign type | |
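The description's list_segments-then-apply order, sketched with the `client` from the connection sketch above; the IDs are placeholders, and the campaign_type value is an assumption based on the NextGen/MyConvo split described under create_campaign.

```typescript
// Step 1: let the user pick from saved segments.
const segments = await client.callTool({ name: "list_segments", arguments: {} });

// Step 2: apply the chosen segment to an existing draft.
await client.callTool({
  name: "apply_segment_to_campaign",
  arguments: {
    segment_id: "seg_123",    // placeholder ID from list_segments
    campaign_id: "cmp_456",   // placeholder draft campaign ID
    campaign_type: "nextgen", // assumed value; the schema gives no enum
  },
});
```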
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=false, and the description clarifies that the tool modifies a campaign by adding recipients. This is consistent and adds context beyond annotations, though it does not detail whether existing recipients are replaced or merged.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, directly to the point, with no unnecessary words. It efficiently conveys purpose and usage guidance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with three parameters and no output schema, the description covers the essential action and context. It could mention success feedback but is otherwise adequate given the annotations and schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and all parameters are described sufficiently in the schema. The description only repeats those descriptions without adding extra context or constraints, so it meets the baseline but does not elevate meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('apply a saved segment') and the resource ('to an existing campaign draft'), with explicit mention of adding contacts as recipients. It distinguishes itself by noting usage after list_segments, setting it apart from sibling tools that handle campaign creation or listing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides direct guidance: 'Use after list_segments when user picks a segment to add to a draft.' This tells the agent when to invoke the tool, though it does not explicitly exclude other scenarios or mention alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
company_intelligence (Grade A, read-only)
Deep research on a company: funding history, leadership, tech stack, recent news, headcount, and growth signals.
| Name | Required | Description | Default |
|---|---|---|---|
| website | No | | |
| company_name | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the description does not need to repeat that. However, it adds no additional behavioral context such as data freshness, rate limits, or result granularity beyond the listed categories.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, well-structured sentence that front-loads the core action ('Deep research') and efficiently lists the research dimensions without unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description lists types of information returned but lacks any indication of the output format (structured fields vs free text) or behavior with missing data, which is important for agent processing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description should clarify parameter usage, but it only mentions the company name implicitly and omits the website parameter entirely, leaving the agent to infer parameter meaning from the tool's purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Deep research on a company' and lists specific information categories (funding, leadership, tech stack, etc.), distinguishing it from sibling tools like search_companies or find_competitors.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for comprehensive company research but does not provide explicit when-to-use or when-not-to guidance, nor does it mention alternative sibling tools for simpler or competitor-focused tasks.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_campaign (Grade A)
Create a DRAFT email campaign. Saves as draft only: no emails are sent. STRICT FLOW, follow exactly in order (a call-sequence sketch follows the parameter table):
1. FIND CONTACTS: Always call search_crm_contacts FIRST to find contacts from the user's own CRM. Only use search_contacts or search_investors (costs credits) if the CRM returns 0 results.
2. CHOOSE TYPE: Ask the user: NextGen (bulk/SES) or MyConvo (personal inbox). NEVER default to either.
3. SHOW OPTIONS: Call list_campaign_templates AND list_segments in parallel. Show templates and segments to the user.
4. CREATE: The user picks a template. Call create_campaign with template_id + contacts. NEVER skip any step. NEVER use paid search when CRM contacts exist.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Campaign name | |
| contacts | No | Contact objects to add as recipients | |
| template_id | Yes | REQUIRED. Template ID from list_campaign_templates. Always call list_campaign_templates first and let user pick. | |
| from_account | No | Sender email address (MyConvo only) | |
| campaign_type | Yes | Must be explicitly chosen by the user; never default | |
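Here is the STRICT FLOW rendered as a call sequence, again reusing the `client` from the connection sketch near the top of this page; the IDs, query, and contacts payload are placeholders, and the search_crm_contacts argument shape is an assumption.

```typescript
// 1. CRM first (free). Paid search only if this returns 0 results.
const crmContacts = await client.callTool({
  name: "search_crm_contacts",
  arguments: { query: "fintech CTOs" }, // assumed argument shape
});

// 2. Ask the user: NextGen (bulk/SES) or MyConvo (personal inbox).

// 3. Show options in parallel.
const [templates, segments] = await Promise.all([
  client.callTool({ name: "list_campaign_templates", arguments: {} }),
  client.callTool({ name: "list_segments", arguments: {} }),
]);

// 4. Create the draft with the user's picks.
await client.callTool({
  name: "create_campaign",
  arguments: {
    name: "Q2 fintech outreach",
    template_id: "tpl_789",   // placeholder from list_campaign_templates
    campaign_type: "nextgen", // the user's explicit choice, never a default
    contacts: [],             // recipients found in step 1
  },
});
```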
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses the key behavioral trait that the campaign is saved as a draft and not sent automatically. Annotations indicate readOnlyHint=false and destructiveHint=false, which are consistent. However, there is no mention of permissions, side effects, or what happens on duplicate names. For a tool with minimal annotations, the description adds some value but could be richer.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise: two sentences with no unnecessary words. It front-loads the primary action and key caveat immediately, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 6 parameters, 3 required, no output schema, and few annotations, the description does not explain return values or additional constraints beyond the draft behavior. Some missing context like output format (e.g., returns campaign ID) would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already covers 83% of parameters with descriptions. The tool description does not add any additional parameter-specific meaning beyond what the schema provides. Since schema coverage is high, a baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action 'Create' and the resource 'draft email campaign'. It distinguishes the tool from others like 'create_sequence_draft' by explicitly noting the draft-only behavior, which is a key differentiator.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit context for when this tool should be used: 'Saves as draft only: no emails are sent.' This implies that if immediate sending is needed, this tool is inappropriate. However, it does not explicitly contrast with sibling tools like 'create_deal' or 'create_sequence_draft', though the name and basic context make it clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_contact (Grade A)
Add a single contact directly to the CRM. Use when user says 'add', 'create', or 'save' a contact with specific details. Email is required. Supports: name, company, job_title, phone, linkedin_url, city, state, country, industry, lifecycle_stage (lead/prospect/customer/churned), notes. If a contact with the same email already exists, updates their info instead.
| Name | Required | Description | Default |
|---|---|---|---|
| city | No | | |
| email | Yes | Contact's email address (required) | |
| notes | No | | |
| phone | No | | |
| state | No | | |
| company | No | | |
| country | No | | |
| industry | No | | |
| full_name | No | Full name (used if first/last not provided) | |
| job_title | No | | |
| last_name | No | | |
| first_name | No | | |
| linkedin_url | No | | |
| lifecycle_stage | No | lead, prospect, customer, or churned | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations (readOnlyHint=false, destructiveHint=false) indicate mutation but not destruction. The description adds crucial upsert behavior: updates existing contacts if email matches. This goes beyond annotations and informs the agent of an important side effect.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose and is reasonably concise. However, it lists many fields, which could be seen as redundant with the schema. It could be slightly tighter without losing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and minimal annotations, the description covers essential aspects: action, required field, field list, and upsert behavior. It lacks any mention of return values or error conditions, but for a creation tool, the description is largely complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has low description coverage (21%), but the description lists all supported fields. It does not explain details like name splitting (full_name vs first/last) or optionality, though the schema provides that for some fields. The description adds a list but lacks deeper semantic clarification.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's action ('add a single contact directly to the CRM') and specifies trigger phrases ('add', 'create', 'save'). It distinguishes this tool from sibling tools like update_contact and search_contacts by describing its create/upsert behavior.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use the tool (user says 'add', 'create', or 'save' a contact) and notes the required email. However, it does not provide explicit guidance on when NOT to use it or mention alternatives for bulk operations, which are available as siblings (e.g., save_contacts_to_crm).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_deal (Grade B)
Create a new deal in the CRM pipeline.
| Name | Required | Description | Default |
|---|---|---|---|
| notes | No | | |
| title | Yes | Deal title | |
| value | No | Deal value in USD | |
| priority | No | | |
| company_name | No | | |
| contact_name | No | | |
| expected_close_date | No | Expected close date (YYYY-MM-DD) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a write operation (readOnlyHint=false, destructiveHint=false), but the description adds no further behavioral details like permissions or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence with no filler, effectively communicating the tool's purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 7 parameters, low schema coverage, and no output schema, the description is minimal and does not cover return values or creation behavior, leaving gaps for the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is only 43% and the description does not explain parameters beyond what schema provides, missing an opportunity to add meaning for undocumented fields.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'create' and the resource 'deal' with context 'in the CRM pipeline', distinguishing it from sibling creation tools like create_campaign.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives such as other creation tools; the context is implied by the resource name but not explicitly stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_form (Grade B)
Create a new lead capture form using AI. Describe what you need and it generates the form.
| Name | Required | Description | Default |
|---|---|---|---|
| tone | No | e.g. professional, friendly, minimal | |
| prompt | Yes | Describe the form, e.g. 'contact form for SaaS demo requests' | |
| form_type | No | e.g. contact, lead_capture, survey, registration | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate it is not read-only. The description adds no behavioral details beyond creation, such as side effects, permissions, or rate limits. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence that front-loads the verb and resource. It is efficient but could be slightly expanded to mention optional parameters without harming conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, the description should at least hint at the return value. It says 'generates the form' but does not specify the form's format, structure, or what the agent will receive. Lacks completeness for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with parameter descriptions. The description reinforces the 'prompt' parameter's role in AI generation but adds little additional semantic meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Create', the resource 'lead capture form', and the method 'using AI'. It distinguishes from sibling tools like create_campaign or create_deal by specifying the type of form and AI generation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for AI-generated lead capture forms but offers no explicit when-to-use or alternatives. No exclusions or comparisons with sibling tools are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_landing_page (Grade A)
Create a new landing page using AI. Describe what you need and it generates the page.
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | Describe the landing page, e.g. 'SaaS product launch page with pricing' | |
| template_type | No | e.g. product, lead_gen, event, webinar | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are neutral; the description adds 'using AI', indicating a generative mutation, but lacks details on what happens to the page (e.g., publishing, requiring edits) or any side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at one sentence, front-loaded with action. Could mention the optional template_type briefly, but overall efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, the description should hint at return value (e.g., page ID); also missing mention of required vs optional parameters. Leaves agent guessing after invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already documents both parameters. The description adds no additional meaning beyond 'describe what you need' matching the prompt parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool creates a landing page using AI from a user description, distinguishing it from siblings like 'create_campaign' or 'list_landing_pages'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It implies when to use: when you need an AI-generated landing page. However, no explicit when-not-to-use or alternatives are given, though no direct sibling conflict exists.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_sequence_draft (Grade A)
Create a draft multi-step email sequence for an existing campaign. Returns the sequence ready to activate in xmagnet.
| Name | Required | Description | Default |
|---|---|---|---|
| num_steps | No | Number of sequence steps to generate | |
| campaign_id | Yes | Source campaign ID | |
| campaign_name | No | | |
| campaign_type | No | | myconvo |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=false and destructiveHint=false, so the description adds value by specifying the tool creates a draft (not final) and returns the sequence ready to activate. This contextualizes the write operation without being destructive. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two concise sentences with no wasted words. The first sentence front-loads the primary purpose, and the second adds a key outcome. It is appropriately sized for the tool's simplicity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description does not specify the return format beyond 'ready to activate'. It also omits details on what 'draft' entails, idempotency, or side effects. While adequate for a simple tool, it leaves gaps in understanding the full behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 4 parameters with only 50% description coverage; campaign_name and campaign_type lack descriptions. The description does not mention any parameters, so it adds no semantic value beyond the schema. With low coverage, the description should compensate but fails to do so.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool creates a draft multi-step email sequence for an existing campaign, specifying the verb 'create', the resource 'draft multi-step email sequence', and the context 'for an existing campaign'. It distinguishes itself from siblings like create_campaign (which creates a campaign) and list_sequences (which lists existing sequences) by focusing on drafting a sequence within a campaign.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions the prerequisite 'for an existing campaign', indicating when to use. However, it does not explicitly state when not to use the tool or list alternatives such as create_campaign for creating a new campaign first. The usage context is implied but lacks explicit exclusions or comparisons to siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
enrich_contact (Grade A)
Enrich a contact with verified work email, phone, LinkedIn, company details, and social profiles. Provide email or full name + company.
| Name | Required | Description | Default |
|---|---|---|---|
| email | No | Contact's email address | |
| company | No | | |
| last_name | No | | |
| contact_id | No | CRM contact ID to enrich in-place | |
| first_name | No | | |
| linkedin_url | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false (mutates) and destructiveHint=false. The description adds that enrichment adds 'verified' data, but lacks details on side effects (e.g., overwriting behavior, handling of nonexistent contacts). With annotations present, the description adds moderate value beyond structured data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no fluff. The purpose is front-loaded, and every word adds value. No unnecessary repetition or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters, no output schema, and annotations present, the description covers core purpose but omits behavior on failure, enrichment success criteria, or data source. For a mutation tool, more completeness is expected.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 6 parameters with 33% description coverage. The description only mentions 'email' and 'full name + company', mapping to first_name, last_name, and company. It does not explain other params like contact_id or linkedin_url. Baseline 3 for low coverage; description compensates partially but insufficiently.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action 'enrich' and the resource 'contact', listing specific data fields (verified work email, phone, LinkedIn, company details, social profiles). It distinguishes from sibling tools like 'search_contacts' or 'update_contact' by focusing on enrichment.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance: 'Provide email or full name + company.' It implies when to use but does not specify when not to use (e.g., for simple updates, use 'update_contact') or offer alternatives. No explicit context on prerequisites or fallback.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
export_contacts (Grade A, read-only)
Export a list of contacts to CSV format for download. Pass the contacts array from a previous search or CRM query.
| Name | Required | Description | Default |
|---|---|---|---|
| contacts | Yes | Array of contact objects to export | |
| filename | No | Output filename (without .csv) | contacts_export |
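The contacts array is passed through from an earlier result rather than fetched by this tool. A sketch with the assumed `client`; the search_contacts argument shape and the result plumbing are placeholders.

```typescript
// Fetch contacts first (argument shape is an assumption).
const found = await client.callTool({
  name: "search_contacts",
  arguments: { query: "fintech CTOs" },
});

// Then hand the contacts array to the exporter.
await client.callTool({
  name: "export_contacts",
  arguments: {
    contacts: [],             // substitute the contacts array extracted from `found`
    filename: "fintech_ctos", // written as fintech_ctos.csv; defaults to contacts_export
  },
});
```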
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, and the description aligns with a non-destructive export operation. The description adds that it produces a downloadable CSV, which is useful beyond annotations. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences that are front-loaded and directly convey the tool's purpose and usage. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only tool with two well-described parameters, the description is complete. It explains what the tool does and the expected input. No output schema is needed for a download operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both parameters described. The description adds context about the source of contacts ('from a previous search or CRM query') but does not significantly add meaning beyond the schema's parameter descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool exports contacts to CSV format for download. It distinguishes itself from siblings like 'search_contacts' and 'save_contacts_to_crm' by focusing on export rather than retrieval or storage.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly mentions that the contacts array should come from a previous search or CRM query, providing clear usage context. It does not explicitly mention when not to use or alternatives, but the context is sufficient for the agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
find_competitors (Grade A, read-only)
Find competitors of a company. Returns similar companies by industry, tech stack, and target market.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| industry | No | | |
| company_name | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare the tool as read-only and non-destructive. The description adds valuable context by specifying the matching dimensions (industry, tech stack, target market), but does not mention default behavior for limit or return structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, front-loaded with the action and key criteria, no unnecessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and parameter descriptions, the description is too brief. It does not explain the return format, result fields, or any ordering, which an agent needs to effectively use the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema coverage, the description should compensate by explaining parameters, but it only indirectly references company_name. It does not describe the 'limit' or 'industry' parameters, leaving the agent to infer their purpose from the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool finds competitors of a company and specifies similarity criteria (industry, tech stack, target market), distinguishing it from siblings like search_companies or find_contacts_at_companies.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for competitive analysis but does not explicitly state when to prefer this tool over siblings or provide exclusion criteria such as 'Use search_companies for broader searches'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
find_contacts_at_companies (Grade A, read-only)
Find people with a specific title/role at companies matching given criteria. Credits are deducted per search. Examples: 'CTOs at funded SaaS companies', 'VPs of Engineering at AWS customers'.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| title | Yes | Job title or role to find | |
| industry | No | Industry for company discovery (Mode B) | |
| location | No | | |
| company_name | No | Specific company name (Mode A: direct exec search) | |
| company_size | No | | |
| technologies | No | | |
| hiring_growth | No | | |
| funding_status | No | e.g. funded, Series A, bootstrapped | |
| revenue_growth | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and destructiveHint=false, so the description's mention of credit deduction adds useful behavioral context beyond annotations. However, it does not disclose other aspects like result limits (e.g., max 50 via limit parameter) or potential pagination, which would be helpful.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences plus examples, extremely concise and front-loaded. Every word serves a purpose with no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 10 parameters and no output schema, the description provides a basic overview but lacks details on parameter interactions (e.g., how Mode A vs Mode B work) and expected return format. It is adequate but not comprehensive for guiding an agent's full understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 40% (only 4 of 10 parameters have descriptions). The description provides examples that hint at combining title with company_name, industry, funding_status, and technologies, but it does not explain parameters like location, company_size, hiring_growth, or revenue_growth. The examples add some value but do not fully compensate for the low schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool finds people with a specific title/role at companies matching criteria, with examples like 'CTOs at funded SaaS companies'. This distinguishes it from sibling tools like search_contacts (general contact search) and search_companies (company-only search).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions credits are deducted per search, providing cost awareness. It implies two modes via examples (Mode A: direct company search, Mode B: industry-based) but does not explicitly state when to use this tool over alternatives like search_contacts or when not to use it. More explicit guidance on mode selection and exclusions would improve this dimension.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_campaign_content (Grade B, read-only)
AI-generate email subject lines and body content for a campaign. Describe the goal, audience, and tone.
| Name | Required | Description | Default |
|---|---|---|---|
| tone | No | e.g. professional, friendly, urgent, casual | |
| prompt | Yes | Describe the campaign goal, audience, and key message | |
| num_variants | No | Number of subject/body variants to generate | |
| campaign_type | No | myconvo (personal) or nextgen (bulk) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description says 'AI-generate', suggesting a write operation, but the annotations claim readOnlyHint=true, creating a contradiction. No side effects or limits are disclosed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with action word 'generate', no waste. Efficient and to the point.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
There is no output schema, and the description does not explain what the tool returns. The behavioral contradiction undermines completeness. For a generation tool, the return format is critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for all 4 parameters. The description adds context for the prompt parameter ('describe goal, audience, tone') but does not add value beyond schema for other parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (generate), resource (email subject lines and body content), and context (for a campaign). It distinguishes from sibling tools like create_campaign.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for generating campaign content when needing AI help with goal, audience, tone, but lacks explicit when-not-to-use or alternative tools like create_campaign or manual drafting.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_bounce_stats (Grade B, read-only)
Get email bounce statistics: hard bounces, soft bounces, bounce rate by campaign or domain.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| campaign_id | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate a safe read-only operation; the description adds detail about bounce types and grouping, but misleadingly claims grouping by domain, which the schema does not support. This creates confusion about the tool's capabilities.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, efficient, but could be improved by explicitly listing parameters and clarifying the domain grouping discrepancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a stats tool with no output schema and two parameters, the description is incomplete: no mention of output format, how limit works, or how to filter by domain (unsupported). It needs more detail to be self-contained.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description mentions 'by campaign or domain', but only campaign_id is in the schema, and there is no explanation of the limit parameter. Schema coverage is 0%, and the description fails to clarify the parameters; it actively misleads.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states verb 'Get', resource 'email bounce statistics', and provides specific types and grouping options, distinguishing from sibling tools like get_campaign_stats.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies use for bounce-specific analytics by campaign or domain, but no explicit when-to-use, when-not-to-use, or alternative tool guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_campaign_stats (Grade B, read-only)
Get performance stats for a specific campaign: opens, clicks, replies, bounce rate.
| Name | Required | Description | Default |
|---|---|---|---|
| campaign_id | No | | |
| campaign_name | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the safety profile is clear. The description adds value by specifying the exact metrics returned (opens, clicks, replies, bounce rate), going beyond the annotations. No behavioral contradictions or hidden traits (like rate limiting or pagination) are disclosed, but for a simple read-only stats endpoint, this is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence that immediately states the purpose and lists the key metrics. Every word adds value; there is no redundancy or extraneous information. It is perfectly concise for the tool's simplicity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (2 optional params, no output schema), the description covers the main purpose and metrics. However, it does not explain parameter usage (e.g., if both provided, which takes precedence?) or mention any default behavior (e.g., returns stats for a time range). Some contextual details are missing, making it adequate but not complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has two optional parameters (campaign_id, campaign_name) with 0% description coverage. The tool description does not explain how to use these parameters (e.g., are they mutually exclusive? Both required? What if both provided?). Without schema descriptions, the burden falls on the description to clarify, which it fails to do. The parameter semantics are completely opaque.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: getting performance stats (opens, clicks, replies, bounce rate) for a specific campaign. It uses a specific verb 'Get' and resource 'campaign stats', effectively distinguishing it from siblings like list_campaigns (lists all campaign names) and create_campaign (creates new campaigns).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites (e.g., existing campaign), typical scenarios, or when not to use it (e.g., for aggregated stats across campaigns). Among siblings, tools like get_deals_pipeline or list_campaigns exist, but no comparisons are made.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_contact_details · Grade A · Read-only
Get full details of a single contact by email or contact ID – all enriched fields, activity history, and campaign membership.
| Name | Required | Description | Default |
|---|---|---|---|
| email | No | | |
| contact_id | No | | |
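A sketch of the two lookup paths the description implies; behavior when both identifiers (or neither) are supplied is undocumented, so each payload assumes one identifier per call. Values are illustrative.

```python
# Hypothetical get_contact_details payloads; the email and ID are invented.
by_email = {
    "name": "get_contact_details",
    "arguments": {"email": "jane.doe@example.com"},
}
by_id = {"name": "get_contact_details", "arguments": {"contact_id": "ct_456"}}
```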
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already mark it as read-only and non-destructive. The description adds that it returns 'all enriched fields, activity history, and campaign membership,' providing valuable behavioral context beyond annotations. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence that efficiently communicates purpose, lookup methods, and output scope. No wasted words, front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the main functionality and output, but lacks details on error handling (e.g., when both parameters are omitted or lookup fails). Given the simplicity and no output schema, it is mostly complete but could be slightly more robust.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage, but the description at least indicates that both email and contact ID are intended as lookup keys. However, it does not explain parameter formats, precedence, or behavior when both are omitted. More detail would be beneficial.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves full details of a single contact by email or contact ID, listing specific output fields (enriched fields, activity history, campaign membership). This distinguishes it from sibling tools like search_contacts (list) and update_contact (modify).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (retrieve one contact's details) but does not explicitly exclude cases like no identifier provided or suggest alternatives (e.g., search_contacts for multiple contacts). However, the context is still clear enough for an agent to infer appropriate use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_credit_balance · Grade A · Read-only
Show the user's current credit balance – how many search credits remain and have been used.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, indicating a safe read operation. The description adds the specific state being read (current balance) but provides no additional behavioral context beyond what annotations convey.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that immediately conveys the purpose and details. No unnecessary words, front-loaded with the key action and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter read-only tool, the description is fully complete. It explains what the tool returns (remaining and used credits) without needing an output schema. The context of sibling tools and annotations further clarifies its role.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has no parameters, so the description does not need to explain parameter meaning. According to guidelines, baseline is 4 for 0 parameters, which is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'show' and the resource 'current credit balance', and specifies what details are included (remaining and used credits). It is distinct from all sibling tools, which focus on contacts, campaigns, deals, etc.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly tells the agent to use this tool when the user's credit balance is needed. Since no other sibling tool deals with credits, the context is clear. However, it does not explicitly state when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_dashboard_stats · Grade A · Read-only
Show account overview stats: total contacts, active campaigns, deals in pipeline, open rate, and recent activity.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the description's main added value is listing the included stats. There is no contradiction, but the description does not disclose potential behavioral nuances like data freshness or latency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently communicates purpose and content. It is front-loaded with the verb 'Show' and immediately details the stats. No extraneous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no inputs and no output schema, the description provides a reasonable list of output ingredients. However, it omits format details or whether all fields are always present, which could be useful but is not critical for a simple dashboard tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters and 100% schema coverage, the description adds meaning by enumerating the output fields (e.g., total contacts, open rate), which is valuable since there is no output schema. This justifies a score above the zero-parameter baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Show' and specifies 'account overview stats' with enumerated items like total contacts, active campaigns, etc. It distinguishes this dashboard from sibling tools like get_campaign_stats or get_deals_pipeline by indicating it's an aggregated overview.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for a high-level dashboard view but does not explicitly contrast with alternatives. It lacks guidance on when to use this tool versus other stats tools like get_bounce_stats or get_unsub_stats, leaving the agent to infer based on context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_deals_pipeline · Grade A · Read-only
Show the user's CRM deals pipeline with stage breakdown, values, and win rate analytics.
| Name | Required | Description | Default |
|---|---|---|---|
| status_filter | No | | open |
| include_analytics | No | | |
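A sketch under stated assumptions: the schema gives status_filter a default of 'open', include_analytics is presumably a boolean toggle for the win-rate section, and the 'all' value below is a guess, since the enum values are not published.

```python
# Hypothetical get_deals_pipeline payloads. The 'open' default comes from
# the schema; treating include_analytics as a boolean and 'all' as a valid
# status_filter value are both assumptions.
default_view = {"name": "get_deals_pipeline", "arguments": {}}
with_analytics = {
    "name": "get_deals_pipeline",
    "arguments": {"status_filter": "all", "include_analytics": True},
}
```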
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and non-destructive behavior. The description adds that it shows stage breakdown and analytics, but does not disclose additional behavioral traits such as authentication requirements or rate limits. Value added beyond annotations is moderate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence of 15 words. It efficiently conveys the purpose without unnecessary words. Every part earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input schema (0 required params, optional enums) and no output schema, the description hints at the output format (stages, values, win rates). It is sufficient for basic understanding but lacks parameter guidance. Overall fairly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage, meaning parameters are undocumented. The description does not explain how 'status_filter' or 'include_analytics' affect the output. It mentions output content (stages, values, analytics) but not parameter effects, failing to compensate for schema gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool shows the user's CRM deals pipeline with specific content (stage breakdown, values, win rate analytics). It uses a specific verb and resource, and distinguishes from the sibling 'get_ghost_pipeline'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for viewing pipeline analytics but does not explicitly state when to use this tool versus alternatives like 'get_ghost_pipeline'. No exclusions or when-not scenarios are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_ghost_pipeline · Grade A · Read-only
Find stale contacts (90+ days no follow-up) that now have fresh growth signals like hiring, funding, or job changes – warm re-engagement opportunities.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| signal_filter | No | | |
| min_days_stale | No | | 90 |
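A sketch mapping the description's wording onto the undocumented parameters: 'hiring' as a signal_filter value and a staleness floor in days for min_days_stale; the accepted values are assumptions.

```python
# Hypothetical get_ghost_pipeline payloads; 'hiring' and the day counts
# are inferred from the description, not from the schema.
defaults = {"name": "get_ghost_pipeline", "arguments": {}}
hiring_signals = {
    "name": "get_ghost_pipeline",
    "arguments": {"signal_filter": "hiring", "min_days_stale": 180, "limit": 25},
}
```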
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the agent knows the tool is safe. The description adds the core behavior (finding stale contacts with signals) but does not disclose additional traits like authorization needs or rate limits. Given annotations cover safety, a score of 3 is appropriate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence with no wasted words. It efficiently communicates the tool's purpose and context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple filtered list tool with three optional parameters and no output schema, the description covers the main purpose and typical use case. It could hint at return values, but the overall completeness is high given the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It hints at parameter meanings: '90+ days no follow-up' for min_days_stale (default 90) and 'hiring, funding, or job changes' for signal_filter. This adds value, though explicit mappings would be better.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Find', the resource 'stale contacts', and the condition 'with fresh growth signals like hiring, funding, or job changes'. It distinguishes itself from siblings like get_icp or get_deals_pipeline by targeting a specific re-engagement use case.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a clear context for when to use the tool (stale contacts with new signals for warm re-engagement). However, it does not explicitly exclude alternatives or provide direct comparisons to siblings like search_contacts.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_icp · Grade A · Read-only
Show the user's Ideal Customer Profile (ICP) – who they sell to, industries, titles, company size, pain points, and value prop.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true. Description adds context on what is shown but no additional behavioral details like auth needs or error cases.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, 20 words, front-loaded with key verb 'Show' and resource 'ICP'. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool is simple with no params and no output schema. Description fully covers what the tool does for a read-only retrieval.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so description cannot add meaning beyond schema. Baseline for 0 params is 4, and description is adequate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool shows the user's ICP and lists specific attributes (industries, titles, etc.), distinguishing it from sibling suggest_icp which helps define ICP.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like suggest_icp. Usage is implied (view ICP), but lacks when-not-to or conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_unsub_stats · Grade B · Read-only
Get unsubscribe statistics – unsubscribe rate, top unsubscribed campaigns, and unsubscribe trends.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| campaign_id | No | | |
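As with get_bounce_stats, a minimal sketch assuming limit caps the returned rows and campaign_id narrows to one campaign; neither semantic is documented, and the ID is invented.

```python
# Hypothetical get_unsub_stats payloads; semantics assumed from the names.
account_wide = {"name": "get_unsub_stats", "arguments": {"limit": 5}}
one_campaign = {"name": "get_unsub_stats", "arguments": {"campaign_id": "cmp_123"}}
```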
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already mark it as read-only and non-destructive. The description adds that it returns specific statistics but does not disclose other behavioral traits like aggregation scope, data freshness, or pagination behavior. With annotations covering safety, the description adds marginal value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with the verb 'Get', and no wasted words. Highly concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple statistics tool with two optional parameters and clear annotations, the description covers the main purpose but omits parameter documentation, leaving the tool only partially complete for an agent to use correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% and the description does not explain what 'limit' or 'campaign_id' parameters do. The description only lists output types, leaving the agent to guess parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool gets unsubscribe statistics and lists specific outputs: rate, top campaigns, and trends. It uniquely identifies the tool's purpose among siblings like get_bounce_stats and get_campaign_stats.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives (e.g., get_campaign_stats, get_dashboard_stats). Lacks context on prerequisites, typical use cases, or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_campaigns · Grade A · Read-only
List existing email campaigns with their status, sent count, open/click rates.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| status | No | Filter by status: draft, active, completed, paused | |
| campaign_type | No | myconvo or nextgen | |
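A sketch combining the documented enums with the undocumented limit; the status and campaign_type values come straight from the schema descriptions, while the limit semantics are assumed.

```python
# Hypothetical list_campaigns payload; enum values come from the schema,
# limit behavior is assumed.
active_nextgen = {
    "name": "list_campaigns",
    "arguments": {"status": "active", "campaign_type": "nextgen", "limit": 10},
}
```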
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate the tool is read-only and non-destructive. The description adds value by specifying the output includes status, sent count, and rates, giving the agent a behavioral expectation of what the response contains.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence that is front-loaded with the core purpose and output fields. Every word earns its place; no redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers purpose and returned fields, but lacks details on pagination (though 'limit' is present), ordering, or the full response structure. Without an output schema, more detail on what the output contains would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 67%, and the description does not elaborate on the parameters beyond what the schema already provides. The baseline of 3 is appropriate as the schema carries most of the parameter context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (list), the resource (email campaigns), and what information is returned (status, sent count, open/click rates). It effectively distinguishes from siblings like 'create_campaign' (creation) and 'get_campaign_stats' (likely a single campaign's detailed stats).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies usage for listing campaigns with basic stats, it does not provide explicit guidance on when to use this tool versus alternatives (e.g., 'get_campaign_stats' for deeper analytics), nor does it mention any prerequisites or limitations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_campaign_templates · Grade A · Read-only
List available email campaign templates (system + custom). Use for NextGen campaigns or when user says 'show templates', 'template gallery', 'use a template'. Returns templates with subject, category, and preview.
| Name | Required | Description | Default |
|---|---|---|---|
| category | No | Optional filter: product, lead_gen, event, webinar, follow_up, newsletter | |
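A minimal sketch using one of the category values the schema enumerates; only the payload wrapper shape is assumed.

```python
# Hypothetical list_campaign_templates payload; 'webinar' is one of the
# documented category values.
webinar_templates = {
    "name": "list_campaign_templates",
    "arguments": {"category": "webinar"},
}
```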
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description adds value beyond annotations by specifying output includes 'subject, category, and preview.' Annotations already declare readOnlyHint=true and destructiveHint=false, so no contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, action-first, no wasted words. Appropriate length for a simple list tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given single optional parameter and no output schema, description sufficiently covers return values and purpose. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with one parameter 'category' fully described. Description does not add extra meaning beyond what schema provides, so baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'List available email campaign templates (system + custom).' Verb 'list' and resource 'email campaign templates' are specific, distinguishing it from sibling tools like list_campaigns or list_forms.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states use cases: 'Use for NextGen campaigns or when user says show templates, template gallery, use a template.' Provides clear context for when to invoke.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_forms · Grade A · Read-only
List all lead capture forms with their submission counts and URLs.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds that it returns submission counts and URLs, providing context beyond annotations. For a zero-parameter read-only tool, this is sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single concise sentence that front-loads the key information. Every word is necessary, with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description explains what the tool returns (submission counts and URLs), which is complete for a simple list operation. No gaps are present.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, and schema coverage is 100%. The baseline score for zero parameters is 4, and the description does not need to add parameter-specific information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'list', the resource 'lead capture forms', and specifies what is included (submission counts and URLs). This distinguishes it from sibling tools like create_form and list_campaigns.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for listing forms but does not explicitly state when to use this tool versus alternatives like search functions or other list tools. No exclusions or when-not-to-use guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_landing_pages · Grade A · Read-only
List all landing pages with their view counts and public URLs.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false. Description adds that the tool returns view counts and public URLs, but does not disclose any other behavioral traits (e.g., pagination, ordering, or scope).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with action and resource. All information is relevant and no redundant text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with no parameters, the description sufficiently covers what the tool does and what it returns. No output schema exists, but the description notes the return fields. Annotations cover behavioral safety.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters, so the schema provides no additional meaning. The description adds value by specifying the output fields (view counts, public URLs), which is not required but helpful.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb (List) and resource (landing pages), with specific output fields (view counts, public URLs). Distinct from sibling list tools that operate on other entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives (e.g., other list tools). The description assumes the agent infers usage from the name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_segments · Grade A · Read-only
List all saved contact segments (reusable audiences) for this tenant. Use when user says 'show my segments', 'list audiences', 'what segments do I have', or when offering audience options during campaign creation.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so description carries low burden. It adds context about 'this tenant' and 'reusable audiences', enhancing understanding of scope and semantics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences achieve maximum clarity with no redundancy. First sentence states core purpose, second provides usage guidance. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter, read-only list tool, the description fully explains what it lists and when to use it. No further context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, schema coverage is 100% by default. Description properly omits parameter details as none are needed. Baseline 4 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states verb 'list', resource 'saved contact segments', and scope 'for this tenant'. Distinct from sibling tool 'apply_segment_to_campaign' which uses segments but is not a list operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides concrete user phrases triggering this tool ('show my segments', 'list audiences', etc.) and a specific use case 'during campaign creation'. No need for when-not guidance given simplicity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_sequences · Grade A · Read-only
List all email sequences (multi-step drip campaigns) with their status and step counts.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | 20 |
| status | No | Filter by status: draft, active, paused, completed | |
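A sketch using a documented status value and the default limit of 20 noted in the review below; the limit semantics are otherwise assumed.

```python
# Hypothetical list_sequences payload; 'active' comes from the schema's
# status enum, and 20 mirrors the otherwise-undocumented limit default.
active_sequences = {
    "name": "list_sequences",
    "arguments": {"status": "active", "limit": 20},
}
```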
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and non-destructive behavior. Description adds that it returns status and step counts, but does not disclose ordering, pagination behavior, or potential empty results. With readOnlyHint=true, the burden is lower, but additional context would improve transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence of 14 words, front-loaded with the verb and resource, no redundant information. Very efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple listing tool with two optional parameters and no output schema, the description covers the basic purpose but omits return format details (e.g., whether results are paginated, sorted, or contain additional fields). It is functional but not fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description does not mention any input parameters. The schema documents the status param, but limit (default 20) lacks any description. Since schema coverage is only 50% and the description adds nothing for the parameters, the agent receives minimal guidance beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool lists email sequences (multi-step drip campaigns) and what information it returns (status, step counts). The verb 'list' and resource 'email sequences' are specific, and it distinguishes itself from siblings like list_campaigns and list_forms by focusing on sequences.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. While the name and sibling tools imply usage for querying sequences, the description does not provide context like 'use this to see available sequences before starting a campaign' or mention when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
run_ai_report · Grade A · Read-only
Run an AI-powered analytics report on any CRM data. Ask any question in plain English – contacts, campaigns, deals, credits, bounces, or unsubscribes. Returns a data table with numbers. Use for: 'Show contacts by industry', 'Top campaigns by open rate', 'Deal pipeline value by stage', 'Credit usage this month', 'Bounce rate by domain', 'Contacts added this week', 'Campaign performance comparison', 'Sequence step funnel', 'Win rate by deal source'.
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Analytics question in plain English | |
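Because the description carries its own examples, a sketch needs no guesswork beyond the payload wrapper shape, which is illustrative:

```python
# run_ai_report payloads reusing questions from the tool's own description;
# only the payload wrapper shape is assumed.
by_industry = {
    "name": "run_ai_report",
    "arguments": {"question": "Show contacts by industry"},
}
bounce_by_domain = {
    "name": "run_ai_report",
    "arguments": {"question": "Bounce rate by domain"},
}
```

Notably, 'Bounce rate by domain' is reachable here even though get_bounce_stats exposes no domain parameter, which reinforces the overlap concern raised for the more specific stats tools.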
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and destructiveHint=false, so the description doesn't need to restate safety. The description adds value by specifying that the tool 'Returns a data table with numbers', which gives the agent understanding of the output format beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, front-loaded with the primary purpose, and includes a well-organized list of examples. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 parameter, no output schema, clear annotations), the description is fully complete. It explains what the tool does, how to use it (plain English questions), what it returns, and provides diverse examples, leaving no ambiguity for agent selection or invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema covers the single parameter 'question' with a brief description, but the tool description adds substantial meaning through example questions (e.g., 'Show contacts by industry', 'Top campaigns by open rate'), illustrating the breadth of possible inputs beyond the schema's minimal text.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Run an AI-powered analytics report' and specifies the resource 'any CRM data'. It provides numerous example questions covering various domains (contacts, campaigns, deals, etc.), which distinguishes it from more specific sibling tools like get_campaign_stats or get_bounce_stats.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives explicit example questions that cover a wide range of use cases, implying the tool is for ad-hoc analytics. While it doesn't explicitly state when not to use it or list alternatives, the examples provide clear context for when this general tool is appropriate versus more specific siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
save_campaign_template · Grade A
Save the current campaign content as a reusable template for future campaigns. Use when user says 'save this as template', 'create new template'.
| Name | Required | Description | Default |
|---|---|---|---|
| body | Yes | Email body (HTML or plain text) | |
| name | Yes | Template name | |
| subject | Yes | Email subject line | |
| category | No | Category: product, lead_gen, event, webinar, follow_up, newsletter | |
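A sketch of a complete payload under the documented schema; all values are invented. Note the required trio (body, name, subject) versus the optional category.

```python
# Hypothetical save_campaign_template payload; values are illustrative.
# body, name, and subject are required; category is optional.
new_template = {
    "name": "save_campaign_template",
    "arguments": {
        "name": "Q2 Webinar Invite",
        "subject": "Join our Q2 webinar",
        "body": "<p>Hi there, save your seat for our Q2 webinar.</p>",
        "category": "webinar",
    },
}
```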
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate a mutation (readOnlyHint=false) and non-destructive action (destructiveHint=false). The description reinforces that it saves template content but does not elaborate on behaviors such as whether it overwrites existing templates or requires specific permissions. With annotations present, the description adds marginal value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no filler. Front-loaded with the main action, then usage examples. Every word is purposeful.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (4 uncomplicated parameters, no output schema), the description covers the core functionality and usage. It could briefly note that 'category' is optional, but overall it is complete enough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with descriptions for all four parameters. The description mentions 'current campaign content' but does not add extra meaning beyond the schema, such as clarifying the optional nature of 'category' or providing format constraints. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: saving current campaign content as a reusable template. It uses a specific verb ('Save') and resource ('campaign content as a reusable template'), and distinguishes from sibling tools like list_campaign_templates.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage examples ('save this as template', 'create new template'), giving clear guidance on when to invoke the tool. It lacks explicit when-not or alternatives, but the examples suffice for basic differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
save_contacts_to_crm · Grade A
Save contacts from a search result into the user's CRM. Pass the contacts array from search_contacts or find_contacts_at_companies.
| Name | Required | Description | Default |
|---|---|---|---|
| contacts | Yes | Array of contact objects to save | |
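A sketch of the intended two-step flow; the contact object shape is invented, since in practice the array is forwarded verbatim from a search_contacts or find_contacts_at_companies result.

```python
# Hypothetical search-then-save flow; the contact fields are illustrative.
search = {
    "name": "search_contacts",
    "arguments": {"query": "CTOs in fintech", "limit": 5},
}
# After receiving the search result, forward its contacts array as-is:
save = {
    "name": "save_contacts_to_crm",
    "arguments": {
        "contacts": [
            {"name": "Jane Doe", "title": "CTO", "company": "Acme Fintech"}
        ]
    },
}
```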
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate a non-read-only, non-destructive operation. The description confirms a mutation without contradiction, but no additional behavioral context is provided (e.g., authentication, duplicate handling).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no extraneous words. The key information is front-loaded and immediately actionable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple one-parameter tool with full schema coverage and consistent annotations, the description provides sufficient context. It could mention whether the operation is insert-only or also updates existing records, but this is not strictly necessary.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema describes the 'contacts' parameter as an array of objects. The description adds value by specifying that it should come from specific search tools, which aids correct usage beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (save), resource (contacts into CRM), and specifies the input source (from search_contacts or find_contacts_at_companies), distinguishing it from sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use this tool: after a contact search. It names the preceding tools (search_contacts, find_contacts_at_companies), but does not mention when not to use or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scan_deal_intent · Grade A · Read-only
Scan emails and campaigns for contacts showing buying intent. Returns ranked intent signals.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds that it returns ranked intent signals, which is behavioral context beyond the readOnlyHint and destructiveHint annotations. However, it does not detail side effects, data sources, or processing time, which would improve transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no superfluous text. The most critical information (scanning for intent and returning ranked signals) is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the core functionality without output schema. It could elaborate on the return format but remains adequate for a zero-parameter read-only tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters and 100% coverage, so schema alone is sufficient. The description adds no extra parameter info but a baseline of 4 is appropriate given no parameters exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it scans emails and campaigns for buying intent and returns ranked signals, using a specific verb and resource. It distinguishes from sibling tools like create_campaign or search_contacts, which have different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, such as search_contacts or get_deals_pipeline. The description lacks context about typical use cases or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_companies · Grade A · Read-only
Search for companies by industry, tech stack, funding status, or growth signals. Examples: 'funded SaaS companies in NY', 'companies using Salesforce with 100+ employees'.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | No | | |
| industry | No | | |
| location | No | | |
| company_size | No | | |
| technologies | No | | |
| hiring_growth | No | | |
| funding_status | No | | |
| revenue_growth | No | | |
| product_launches | No | | |
| recent_acquisitions | No | | |
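A sketch of two call styles the parameter list suggests: a free-text query versus structured filters. With 0% schema coverage, every value format below (strings for the filters, a plain integer for limit) is a guess from the names and the description's examples.

```python
# Hypothetical search_companies payloads; all value formats are guesses,
# since none of the 11 parameters carries a schema description.
free_text = {
    "name": "search_companies",
    "arguments": {"query": "funded SaaS companies in NY", "limit": 20},
}
structured = {
    "name": "search_companies",
    "arguments": {
        "industry": "SaaS",
        "location": "New York",
        "technologies": "Salesforce",
        "company_size": "100+",
    },
}
```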
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations (readOnlyHint=true, destructiveHint=false) already indicate safe read operation. Description adds no further behavioral traits like rate limits or pagination. No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences plus examples, no fluff. Front-loaded with purpose and supported by practical examples.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema and 11 parameters. The description covers the main use cases but omits many parameters (limit, company_size, revenue_growth), and the return format is unknown. Adequate for a common search tool but incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description should explain the parameters. It lists some filter categories but does not map them to schema properties or clarify valid values. The examples provide partial guidance but are insufficient for 11 parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states verb 'search' and resource 'companies', with specific filters (industry, tech stack, funding, growth). Examples reinforce purpose and differentiate from sibling tools like search_contacts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Examples give usage context but no explicit when-to-use vs alternatives. No guidance on when not to use or preference over siblings like find_contacts_at_companies.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
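To make the documentation gap concrete, here is a minimal sketch of a complete MCP tools/call request for search_companies. The JSON-RPC envelope follows the MCP specification; the argument values, and the assumption that the filters are free-text strings and limit is an integer, are illustrative guesses, since the schema documents none of the 11 parameters.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_companies",
    "arguments": {
      "query": "funded SaaS companies in NY",
      "industry": "SaaS",
      "location": "New York",
      "limit": 10
    }
  }
}
```

Whether industry and location accept free text or only enumerated values is exactly the kind of detail the empty schema leaves an agent to guess.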
search_contacts (A) · Read-only
Search for contacts/people by name, title, company, or natural language query. ALWAYS call this tool immediately when the user asks to find, search, or show people/contacts. Credits are deducted per search. Examples: 'CTOs in fintech', 'John Smith at Google', 'VPs of Sales at SaaS startups'. Default limit is 50 results.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | ||
| query | Yes | Natural language search query |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and destructiveHint=false. The description adds valuable behavioral context: credits deducted per search and that it supports natural language queries. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A few short sentences plus examples, with no redundancy. The purpose is front-loaded, and every sentence serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, so return values are not explained. The description is adequate for a simple search tool but lacks guidance on result format and pagination. Sibling tools exist, but no comparative context is provided.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 50% (only query documented). The description adds context to the query parameter with examples but does not mention the limit parameter. Baseline 3 is appropriate given partial coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it searches for contacts/people and lists search criteria (name, title, company, natural language). Examples illustrate usage. However, it does not explicitly differentiate from sibling tools like search_crm_contacts or find_contacts_at_companies, which reduces clarity slightly.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It mentions credit deduction but provides no guidance on when to use this tool over alternatives (e.g., search_crm_contacts, find_contacts_at_companies). No when-not-to-use or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
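A sketch of the arguments object for search_contacts (JSON-RPC envelope omitted; see the search_companies example above). Only query is required per the schema; treating limit as an integer is an assumption, since it is undocumented.

```json
{
  "name": "search_contacts",
  "arguments": {
    "query": "VPs of Sales at SaaS startups",
    "limit": 25
  }
}
```

Because credits are deducted per search, an agent should compose one deliberate query rather than issue several exploratory calls.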
search_crm_contacts (B) · Read-only
Search contacts already in the user's CRM. ALWAYS call this tool immediately when the user asks to show, list, or retrieve their contacts/people. Call with empty parameters {} to return all contacts (up to 50). Includes enriched fields: company, title, industry, city, country, LinkedIn, skills, lifecycle stage, score. No credits deducted; reads from CRM only.
| Name | Required | Description | Default |
|---|---|---|---|
| city | No | ||
| limit | No | ||
| query | No | Free-text search across name/email/company/title | |
| skills | No | ||
| company | No | ||
| country | No | ||
| industry | No | ||
| job_title | No | ||
| contact_status | No | ||
| exclude_bounced | No | ||
| lifecycle_stage | No | e.g. lead, prospect, customer | |
| exclude_unsubscribed | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and non-destructive behavior. The description adds that the tool reads from the CRM only and deducts no credits, and it lists output fields, but it does not elaborate on other behaviors such as filter combination or pagination.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and well-structured, conveying purpose and key traits without unnecessary detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 12 parameters and no output schema, the description provides basic context but lacks information on filter interactions, default behavior, or result handling, leaving gaps for an agent to infer.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With only 17% schema description coverage, the description does not add meaning to the 10 undocumented parameters. It lists output fields but not input parameter details, failing to compensate for the low coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches contacts already in the user's CRM and lists enriched fields. However, it does not explicitly differentiate from the sibling tool 'search_contacts', leaving the distinction ambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'No credits deducted' implying a safe usage context, but it does not provide explicit guidance on when to use this tool versus alternatives like 'search_contacts' or 'search_companies'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
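Two hedged call sketches for search_crm_contacts: the empty-arguments form the description explicitly documents, and a filtered form whose values and types are assumptions, since 10 of the 12 parameters are undocumented (exclude_bounced is presumed boolean from its name).

```json
{ "name": "search_crm_contacts", "arguments": {} }
```

```json
{
  "name": "search_crm_contacts",
  "arguments": {
    "lifecycle_stage": "lead",
    "exclude_bounced": true,
    "limit": 50
  }
}
```

How these filters combine (AND vs. OR) is one of the interactions the description leaves unspecified.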
search_investors (C) · Read-only
Find VCs and angel investors by stage, sector, or geography. Credits are deducted per search.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | ||
| title | No | investor | |
| industry | No | ||
| location | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and non-destructive behavior. The description adds 'Credits are deducted per search', which is useful cost context not in annotations. However, it lacks details on pagination, rate limits, or what the response contains.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no filler. The first sentence states the core purpose, the second adds a critical behavioral note about credits. Efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and four parameters with zero schema descriptions, the description is insufficient. It omits details about return format and pagination and does not fully explain all filter options. The advertised 'stage' filter has no corresponding parameter, which is a gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description should compensate but only loosely maps 'sector' to 'industry' and 'geography' to 'location'. The 'title' and 'limit' parameters are not explained, and the 'stage' mentioned is not a parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Find VCs and angel investors' with filtering by stage, sector, and geography, but the schema lacks a 'stage' parameter. The verb 'Find' and resource are clear, but the mismatch between claimed and available filters reduces clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus siblings like search_companies or search_contacts. The description does not mention prerequisites, alternatives, or when-not-to-use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
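A sketch for search_investors that illustrates the stage mismatch: an agent reading the description might try a 'stage' argument, but the schema only exposes title, industry, location, and limit, so stage intent must be folded into another field, if it is honored at all. All values below are assumptions.

```json
{
  "name": "search_investors",
  "arguments": {
    "industry": "fintech",
    "location": "Europe",
    "limit": 20
  }
}
```

Note that title defaults to "investor" per the schema, so it can usually be omitted.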
show_suggestions (B) · Read-only
Display a menu of common xmagnet actions: search contacts, find companies, view pipeline, manage campaigns, and more.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only behavior. The description adds that it displays a menu, which is consistent. It does not add further behavioral details (e.g., whether it returns options for user selection).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded, with clear examples. Every word serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description is adequate but could be improved by stating that this is for guiding user interaction or that it provides a pick list.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, and schema coverage is 100%. The description adds context that the menu lists common actions, but does not need to explain parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it displays a menu of common xmagnet actions and lists examples. It is a specific verb+resource, though it could better differentiate from siblings by noting it's a navigational helper.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. It does not mention when not to use it or suggest other tools for specific tasks.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
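Since show_suggestions takes no parameters, the call reduces to an empty arguments object:

```json
{ "name": "show_suggestions", "arguments": {} }
```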
suggest_icp (A) · Read-only
Analyze the user's account and generate an ICP suggestion based on their website, existing contacts, and campaign history.
| Name | Required | Description | Default |
|---|---|---|---|
| website | No | Company website URL | |
| description | No | What the company sells |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only (readOnlyHint=true) and non-destructive (destructiveHint=false). The description adds value by revealing it uses account data beyond provided inputs (contacts, campaign history), which is useful behavioral context not in annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that clearly states the action and inputs. It is concise and front-loaded with the verb 'Analyze'. However, it could be slightly more structured by separating the action from the data sources.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers purpose and inputs but does not mention what the output (ICP suggestion) looks like or its format. Since there is no output schema, this information would be helpful for completeness. It is adequate but not fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage for both parameters (website, description). The description adds no additional meaning beyond what the schema already provides, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool analyzes the user's account and generates an ICP suggestion using website, contacts, and campaign history. It distinguishes from sibling tools like get_icp (which likely retrieves existing ICP) by focusing on generation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context (uses website, existing contacts, campaign history) but does not explicitly state when to use this tool vs alternatives like get_icp or show_suggestions. No exclusion criteria are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
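Both suggest_icp parameters are optional and documented, so a sketch is straightforward; the values below are hypothetical.

```json
{
  "name": "suggest_icp",
  "arguments": {
    "website": "https://example.com",
    "description": "B2B analytics platform for e-commerce teams"
  }
}
```

Since the description says the tool also draws on existing contacts and campaign history, it can presumably be called with no arguments at all.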
update_contact (C)
Update a contact's fields in the CRM: job title, company, lifecycle stage, status, notes, etc.
| Name | Required | Description | Default |
|---|---|---|---|
| email | No | ||
| notes | No | ||
| phone | No | ||
| company | No | ||
| job_title | No | ||
| last_name | No | ||
| contact_id | Yes | ||
| first_name | No | ||
| linkedin_url | No | ||
| contact_status | No | ||
| lifecycle_stage | No | e.g. lead, prospect, customer, churned |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a write operation (readOnlyHint=false) and not destructive (destructiveHint=false). The description adds no extra behavioral context, such as whether updates are partial or full, whether the updated contact is returned, or any authentication/rate limit details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single short sentence, concise and front-loaded with the core action. However, it is so brief that it sacrifices explanatory value, making it adequate but not well-structured for an AI agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 11 parameters and no output schema, the description should provide more context about update behavior, required fields, and response structure. The current description is insufficient for an agent to invoke the tool correctly without guessing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is only 9% (only lifecycle_stage has a description). The description lists a few fields but adds no meaning beyond their names. It does not explain edge cases, default values, or update semantics for the 11 parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Update' and the resource 'a contact's fields in the CRM', listing example fields like job title, company, lifecycle stage. It distinguishes itself from sibling tools like create_campaign or search_contacts, as no other update tool exists among siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives or prerequisites. It does not mention that contact_id is required or when to prefer this over other contact-modifying tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
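A hedged sketch of an update_contact call: contact_id is the only required parameter, and its value here is hypothetical. Whether omitted fields are preserved (partial update) or cleared (full replacement) is exactly what the description leaves unstated.

```json
{
  "name": "update_contact",
  "arguments": {
    "contact_id": "c_12345",
    "job_title": "VP of Sales",
    "lifecycle_stage": "customer"
  }
}
```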
validate_email (A) · Read-only
Validate whether one or more email addresses are deliverable. For a single email use 'email'. For bulk (up to 50) use 'emails' array. Checks MX records, identifies disposable/role addresses.
| Name | Required | Description | Default |
|---|---|---|---|
| email | No | Single email address to validate |
| emails | No | List of email addresses for bulk validation (up to 50) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, meaning no side effects. The description adds behavioral context by specifying the types of validation performed (deliverability, disposable/role detection, MX checks), which helps the agent understand the tool's scope beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is short and clear, with no redundant information; every part contributes to understanding the tool's functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With two simple parameters and no output schema, the description provides sufficient information about the tool's purpose and behavior. A minor gap is the lack of return-format details, but since there is no output schema, this is acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for both parameters ('email' and 'emails'), so the description does not add new meaning. A baseline score of 3 is appropriate since the schema already fully documents the parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool validates email deliverability, identifies disposable/role addresses, and checks MX records. This is a specific verb+resource combination, and it distinguishes itself from sibling tools (e.g., analyze_contacts, enrich_contact) that do not perform email validation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is used when email validation is needed, but it does not explicitly state when to use it vs. alternatives nor when not to use it. There are no sibling tools with overlapping functionality, so the lack of explicit guidance is a minor gap.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
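The validate_email description is complete enough that both call shapes can be sketched directly from it; only the addresses are hypothetical.

```json
{ "name": "validate_email", "arguments": { "email": "jane@acme.example" } }
```

```json
{
  "name": "validate_email",
  "arguments": { "emails": ["a@acme.example", "b@globex.example"] }
}
```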
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes. Claiming lets you:
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail ā every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control ā enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management ā store and rotate API keys and OAuth tokens in one place
Change alerts ā get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption ā public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics ā see which tools are being used most, helping you prioritize development and documentation
Direct user feedback ā users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.