
intentweave-mcp

Server Details

Hosted MCP for verified B2B lead campaigns with success-only billing.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
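Since the server speaks Streamable HTTP, a client opens a session by POSTing a JSON-RPC `initialize` request to the server endpoint. A minimal sketch, assuming a hypothetical gateway URL and an illustrative protocol version string:

```python
import json

# First message an MCP client sends: a JSON-RPC "initialize" request.
# The URL, client name, and protocol version below are hypothetical.
ENDPOINT = "https://gateway.example.com/mcp"  # hypothetical endpoint

initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

body = json.dumps(initialize)  # POST this to ENDPOINT with Content-Type: application/json
```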
Tool Descriptions (Grade: C)

Average score: 2.9/5 across all 7 tools.

Server Coherence (Grade: A)

Disambiguation: 5/5

Each tool targets a distinct action and resource: cost estimation, lead export, campaign reading, lead fetching, vertical listing, campaign creation, and feedback submission. No overlapping purposes.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern in snake_case (e.g., estimate_cost, get_campaign). No deviations or mixed conventions.

Tool Count: 5/5

With 7 tools, the set is well-scoped for a campaign management server. Each tool serves a clear purpose without redundancy or clutter.

Completeness: 3/5

The tool surface covers core actions (create, read) but lacks update and delete operations for campaigns and leads. Feedback and export are bonuses, but the absence of lifecycle management is a notable gap.

Available Tools

7 tools
estimate_cost (Grade: A)

Estimate run cost before campaign launch (model-based estimate, not billable).

Parameters (JSON Schema)

Name            Required  Default
count           No
intent          Yes
vertical        Yes
access_token    No
tenant_api_key  No

Output Schema

No output parameters
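Because the schema documents none of these parameters, a caller has to guess at shapes. As a hedged sketch, a JSON-RPC `tools/call` request for `estimate_cost` might look like the following; the `intent` and `count` values are invented, and `yc_devtools` is borrowed from `run_campaign`'s documented default:

```python
import json

# Hypothetical tools/call payload for estimate_cost. The argument values are
# illustrative only; the server publishes no parameter descriptions.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "estimate_cost",
        "arguments": {
            "intent": "devtools founders evaluating CI platforms",  # required
            "vertical": "yc_devtools",  # required
            "count": 25,  # optional
        },
    },
}

wire = json.dumps(request)
```

Since the estimate is model-based and not billable, an agent could safely call this before `run_campaign` to preview cost.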

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description is the sole source for behavioral traits. It discloses that the estimate is 'model-based' and 'not billable', which is helpful. However, it does not mention authentication requirements, rate limits, or other side effects. Adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with key action, no unnecessary words. Extremely concise and to the point.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that an output schema exists, explanation of return values is not needed. The description covers purpose and a key behavioral trait. However, the missing parameter semantics reduce completeness. Barely adequate for a tool with 5 parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, meaning the description adds no parameter-level details. With 5 parameters (2 required), the description should compensate but does not explain 'intent', 'vertical', 'count', or other parameters. This is a significant gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the action ('Estimate'), resource ('run cost'), and context ('before campaign launch'). It also distinguishes from siblings by noting it's a model-based, non-billable estimate, which is different from 'run_campaign' that actually executes the campaign.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description provides timing guidance ('before campaign launch') but does not explicitly mention when not to use this tool or alternative tools. It implies usage context but lacks explicit exclusions or comparisons to siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

export_leads (Grade: C)

Export campaign leads as JSON or CSV payload.

Parameters (JSON Schema)

Name            Required  Default
limit           No
format          No        json
campaign_id     Yes
access_token    No
tenant_api_key  No

Output Schema

No output parameters
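A sketch of how a client might assemble arguments for this tool, enforcing the only two formats the description names. The helper name is invented, and the assumption that `limit` caps the number of exported rows is exactly that, an assumption:

```python
def build_export_args(campaign_id, fmt="json", limit=None):
    """Build the arguments dict for an export_leads call.

    "json" mirrors the schema default; "csv" is the only other format the
    description mentions. The limit semantics (max rows) are assumed.
    """
    if fmt not in ("json", "csv"):
        raise ValueError("format must be 'json' or 'csv'")
    args = {"campaign_id": campaign_id, "format": fmt}
    if limit is not None:
        args["limit"] = limit
    return args

args = build_export_args("cmp_123", fmt="csv", limit=100)
```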

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavioral traits. It only says 'export', which implies a read-only operation, but it does not confirm this, nor does it cover permission needs, rate limits, or other side effects. The description is too minimal for good transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence with no wasted words. However, it could be improved by adding more context without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 5 parameters, no annotations, and an output schema, the description is incomplete. It lacks details on authentication parameters (access_token, tenant_api_key), pagination (limit), or the meaning of 'payload'. The output schema helps but still leaves gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0% (no parameter descriptions). The description mentions 'JSON or CSV' which hints at the 'format' parameter but doesn't explain 'limit', 'access_token', 'tenant_api_key', or 'campaign_id'. It adds minimal meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'export' and the resource 'campaign leads', and specifies output formats (JSON or CSV). This distinguishes it from sibling tools like get_lead (single lead) and get_campaign (campaign details).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool instead of alternatives (e.g., get_lead for single leads). It doesn't mention prerequisites, scenarios, or any exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_campaign (Grade: B)

Read campaign status with optional leads + scorecard (progressive disclosure).

Parameters (JSON Schema)

Name               Required  Default
campaign_id        Yes
leads_limit        No
access_token       No
include_leads      No
tenant_api_key     No
include_scorecard  No

Output Schema

No output parameters
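The 'progressive disclosure' hint suggests starting with status only and opting into heavier payloads. A hedged sketch of that layering; the helper name is invented, and the assumption that `leads_limit` only matters when `include_leads` is set is not documented anywhere:

```python
def campaign_query(campaign_id, include_leads=False, leads_limit=None,
                   include_scorecard=False):
    """Arguments for get_campaign, adding detail only when requested.

    Mirrors the 'progressive disclosure' hint: status first, then leads
    and scorecard on demand. The leads_limit gating is an assumption.
    """
    args = {"campaign_id": campaign_id}
    if include_leads:
        args["include_leads"] = True
        if leads_limit is not None:
            args["leads_limit"] = leads_limit
    if include_scorecard:
        args["include_scorecard"] = True
    return args

minimal = campaign_query("cmp_123")
full = campaign_query("cmp_123", include_leads=True, leads_limit=10,
                      include_scorecard=True)
```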

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description indicates a read operation via 'Read', which implies idempotency and safety, but it lacks details on authentication, error behavior, and the meaning of 'progressive disclosure'; adequate but not thorough given the absence of annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single front-loaded sentence with no redundancy; every word adds value and the structure is efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 6 parameters, 0% schema descriptions, and no annotation coverage, the description is too brief; it lacks details on required vs. optional fields, defaults, and the nature of 'progressive disclosure'. It is incomplete for effective tool selection and invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% schema description coverage, the description only hints at two optional parameters (leads, scorecard), neglecting the other four parameters (campaign_id, leads_limit, access_token, tenant_api_key) and their roles.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Read campaign status' with a specific resource and verb, and distinguishes from sibling tools like get_lead or export_leads by mentioning optional leads and scorecard inclusion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives such as get_lead or run_campaign; the description only implies usage for reading a campaign with optional data, but offers no conditions or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_lead (Grade: C)

Fetch one lead with evidence by scanning campaign leads.

Parameters (JSON Schema)

Name            Required
lead_id         Yes
campaign_id     Yes
access_token    No
tenant_api_key  No

Output Schema

No output parameters
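A hedged sketch of a call to this tool. Both IDs are hypothetical, and the reading of 'by scanning campaign leads' as meaning that `campaign_id` scopes the scan (so cost grows with campaign size) is an inference, not documented behavior:

```python
# Hypothetical tools/call payload for get_lead. Both ID values are invented;
# the "scanning" note suggests campaign_id scopes a linear scan.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "get_lead",
        "arguments": {
            "campaign_id": "cmp_123",  # required: campaign to scan
            "lead_id": "lead_456",     # required: lead to return
        },
    },
}
```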

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description says 'by scanning campaign leads' but doesn't explain scanning behavior, side effects, or safety. Fails to disclose performance or permission implications.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise single sentence, no fluff, but at the cost of omitting critical details. Could expand to cover parameters and usage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

An output schema exists but is not shown, so return values are unclear. Missing parameter descriptions, usage context, and behavioral details leave this incomplete for a 4-parameter tool with no parameter docs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%. Description only implies campaign_id and lead_id (required), but offers no detail on access_token or tenant_api_key. Parameters lack added meaning.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Fetch one lead' with campaign_id and lead_id, distinguishing from sibling tools like export_leads (bulk) and get_campaign (different entity). 'with evidence' adds some specificity but could be clearer.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives (e.g., export_leads for multiple leads). No mention of prerequisites or context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_verticals (Grade: C)

List currently exposed vertical slugs.

Parameters (JSON Schema)

Name            Required
access_token    No
tenant_api_key  No

Output Schema

No output parameters
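Every tool on this server accepts the same optional `access_token` and `tenant_api_key` fields, so a client might centralize them. The helper below is a sketch under two assumptions that the server does not document: that a caller supplies at most one credential, and that `access_token` takes precedence when both are given:

```python
def auth_args(access_token=None, tenant_api_key=None):
    """Optional auth fields shared by every tool on this server.

    Both fields are optional in the schema; this sketch assumes at most
    one is sent, with access_token winning if both are provided.
    """
    if access_token is not None:
        return {"access_token": access_token}
    if tenant_api_key is not None:
        return {"tenant_api_key": tenant_api_key}
    return {}

args = auth_args(tenant_api_key="tk_example")
```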

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must cover behavioral traits. It only indicates a read-only listing ('currently exposed') but omits authentication requirements, rate limits, or other side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single short sentence but is under-specified given the absence of parameter details and annotations. Conciseness is not synonymous with incompleteness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Although an output schema exists, the description does not explain return values or behavior. Given no annotations and minimal description, the tool is insufficiently documented for an agent to invoke correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, and the description does not mention any parameters. It adds no meaning beyond the raw schema, which lacks descriptions for both access_token and tenant_api_key.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it lists currently exposed vertical slugs, with a specific verb and resource. It is distinct from sibling tools which focus on campaigns, leads, cost, etc.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. No prerequisites, exclusions, or usage context provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

run_campaign (Grade: C)

Create a campaign and return campaign_id + first telemetry snapshot.

Parameters (JSON Schema)

Name            Required  Default
count           No
intent          Yes
options         No
vertical        No        yc_devtools
access_token    No
tenant_api_key  No

Output Schema

No output parameters
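A hedged sketch of a minimal call. The `intent` and `count` values are invented; `vertical` is omitted to fall back on the schema default `yc_devtools`, and the undocumented `options` object is left out entirely:

```python
# Hypothetical run_campaign call. Omitting "vertical" relies on the schema
# default (yc_devtools); "options" is undocumented and therefore skipped.
request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {
        "name": "run_campaign",
        "arguments": {
            "intent": "seed-stage fintech founders hiring ML engineers",  # required
            "count": 50,  # optional
        },
    },
}
```

Since this is the creation tool and the server bills on success, an agent would typically call estimate_cost first and only then issue this request.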

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits, but it only mentions creation and return values. No info on side effects, permissions, rate limits, or idempotency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no wasted words. Very concise, but at the expense of essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having an output schema, the description lacks parameter guidance and usage context. Incomplete for a creation tool with six parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% and the description adds no parameter details. Six parameters, including `intent`, `count`, and `options`, are completely unaddressed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it creates a campaign and returns campaign_id and first telemetry snapshot. This verb+resource pairing distinguishes it from sibling tools like get_campaign or estimate_cost.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like estimate_cost or get_campaign. Lacks context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

submit_feedback (Grade: C)

Submit user feedback for model improvement loop.

Parameters (JSON Schema)

Name            Required
note            No
rating          Yes
lead_id         Yes
campaign_id     No
access_token    No
tenant_api_key  No

Output Schema

No output parameters
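The rating scale is nowhere documented, which makes this tool easy to misuse. The sketch below assumes a 1 to 5 integer scale purely for illustration; the helper name and the validation rule are both invented:

```python
def feedback_args(lead_id, rating, note=None, campaign_id=None):
    """Arguments for submit_feedback.

    The rating scale is undocumented; a 1-5 integer range is assumed
    here purely for illustration.
    """
    if not isinstance(rating, int) or not 1 <= rating <= 5:
        raise ValueError("rating assumed to be an integer from 1 to 5")
    args = {"lead_id": lead_id, "rating": rating}
    if note is not None:
        args["note"] = note
    if campaign_id is not None:
        args["campaign_id"] = campaign_id
    return args

args = feedback_args("lead_456", 4, note="Good fit, wrong region")
```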

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description bears full responsibility. It only states the action ('submit') without disclosing side effects (e.g., storage duration, visibility, idempotency). This is insufficient for a write operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no unnecessary words, but it sacrifices informativeness for brevity. It is adequately concise but not optimally structured; a slightly longer description would improve clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 6 parameters, no annotations, and an output schema (which covers return values but not behavior), the description fails to provide enough context for correct tool invocation. The minimal text leaves major gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema coverage is 0%, meaning no parameter descriptions exist in the schema. The description adds no information about the 6 parameters, forcing the agent to infer meanings from names alone, which is risky for fields like campaign_id or tenant_api_key.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'submit' and the resource 'user feedback' with a specific goal ('for model improvement loop'), distinguishing it from siblings like get_lead or export_leads. However, it could be more specific about the nature of the feedback.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool vs. alternatives, nor any prerequisites or context (e.g., feedback collection timing). The description is entirely absent of usage cues.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
