pghdma

CallRail MCP

spam_detector

Identify likely-spam calls using heuristic scoring based on short duration, unanswered status, and repeat caller patterns. Optionally tag flagged calls for manual review. Returns a score breakdown and caller phone histogram.

Instructions

Heuristically identify likely-spam calls and (optionally) tag them.

Spam scoring (additive):
- +2 if duration < 10 seconds
- +1 if not answered
- +1 if first_call AND duration < 30 seconds
- +1 if the same caller appears >= 3 times in the window (likely auto-dialer)

A call scoring >= 3 is flagged as likely spam.
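The additive rules above can be sketched as a small scoring function. This is a minimal illustration only: the field names `duration`, `answered`, `first_call`, and `caller_number`, and the precomputed per-caller counts, are assumptions, not the server's actual data model.

```python
def spam_score(call, caller_counts):
    """Score one call with the additive heuristic described above.

    call: dict with assumed keys 'duration' (seconds), 'answered' (bool),
          'first_call' (bool), 'caller_number' (str).
    caller_counts: dict mapping caller_number -> occurrences in the window.
    """
    score = 0
    if call["duration"] < 10:
        score += 2  # very short call
    if not call["answered"]:
        score += 1  # never answered
    if call["first_call"] and call["duration"] < 30:
        score += 1  # short first-time caller
    if caller_counts.get(call["caller_number"], 0) >= 3:
        score += 1  # repeat caller in window, likely auto-dialer
    return score


def is_likely_spam(call, caller_counts):
    # Threshold from the rubric above: score >= 3 flags the call.
    return spam_score(call, caller_counts) >= 3
```

A 5-second unanswered first call from a number seen four times in the window would score 2 + 1 + 1 + 1 = 5 and be flagged.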

Args:
- company_id: Restrict to one company (recommended).
- days: Lookback window (1-90). 90 is hard-capped to avoid memory blowup on high-volume clients — the full call list is materialized for scoring before truncating the response.
- auto_tag: If True, ADD tag_name to each likely-spam call after the scan. Default False (preview only). Note: we deliberately do NOT mark calls as spam=True automatically — CallRail HIDES spam-flagged calls from default GET endpoints, so self-reviewing them later becomes painful. Tag first, manually spam-flag if confirmed.
- tag_name: The tag to add when auto_tag=True. Default 'auto_detected_spam'. Auto-creates the tag at company level if it doesn't exist (CallRail's behavior).
- account_id: Auto-resolves if omitted.
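The preview-then-tag workflow these arguments describe might look like the following. The payloads are purely illustrative (the company_id value is a made-up example, and how you pass arguments depends on your MCP client):

```python
# Hypothetical argument payloads for spam_detector, per the parameter docs above.
preview_args = {
    "company_id": "COM123",  # assumed example id; restrict to one company
    "days": 30,              # lookback window, 1-90 (hard-capped at 90)
    "auto_tag": False,       # default: preview only, no side effects
}

# After reviewing the preview, re-run with tagging enabled.
tag_args = {
    **preview_args,
    "auto_tag": True,                  # ADD the tag to each flagged call
    "tag_name": "auto_detected_spam",  # default; auto-created at company level
}
```

Running the preview first keeps the scan side-effect free until a human has looked at the score breakdown.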

Returns:
- score breakdown by call
- histogram of caller phone numbers (so you can spot a single dialer hammering you)
- if auto_tag: count tagged + failures
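The caller histogram described above can be built with a standard counter. A minimal sketch, assuming each call record carries a `caller_number` field (an assumption about the data shape, not the server's real schema):

```python
from collections import Counter


def caller_histogram(calls):
    """Count calls per caller number so a single dialer hammering you stands out."""
    return Counter(call["caller_number"] for call in calls)


def top_suspects(calls, threshold=3):
    """Callers at or above the repeat threshold used by the +1 auto-dialer rule."""
    hist = caller_histogram(calls)
    return {number: n for number, n in hist.items() if n >= threshold}
```

Sorting the histogram by count (e.g. `caller_histogram(calls).most_common()`) surfaces the heaviest dialers first.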

Input Schema

| Name       | Required | Description | Default            |
|------------|----------|-------------|--------------------|
| company_id | No       |             |                    |
| days       | No       |             |                    |
| auto_tag   | No       |             |                    |
| tag_name   | No       |             | auto_detected_spam |
| account_id | No       |             |                    |

Output Schema

| Name   | Required | Description | Default |
|--------|----------|-------------|---------|
| result | Yes      |             |         |
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description fully bears the burden of behavioral disclosure. It explains the additive scoring algorithm, memory constraints (days cap of 90, materialization), and the deliberate decision not to mark calls as spam=True due to CallRail's hiding behavior, offering rich context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections for scoring, arguments, and returns. It is informative but slightly lengthy; some parts like the scoring formula could be more compact, but overall it is efficiently organized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (heuristic scoring, optional tagging with side effects, memory concerns), the description covers all necessary aspects: purpose, algorithm, parameter details, return structure, and important caveats. It is comprehensive and addresses potential pitfalls.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage, but the tool description provides detailed semantics for all 5 parameters: company_id (recommended), days (lookback, cap, memory warning), auto_tag (behavior, default), tag_name (default, auto-creation), account_id (auto-resolves). This fully compensates for the schema's missing descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool identifies likely-spam calls and optionally tags them. It uses specific verbs and resources, but does not explicitly differentiate from sibling tools like list_calls or call_summary, leaving some implicit comparison.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some guidance, such as recommending company_id and explaining the auto_tag behavior, but lacks explicit instructions on when to use this tool versus alternatives, and does not mention when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

