Glama MCP directory: pghdma / CallRail MCP

usage_summary

Calculate per-company cost attribution and minutes for the current billing cycle to identify the biggest cost drivers and guide client renegotiation, upsell, or drop decisions.

Instructions

Per-company cost-attribution summary for the current cycle.

Aggregates active trackers and per-company call minutes, and projects what each client contributes to the agency's CallRail bill. Useful for:

  • Deciding which client to renegotiate / upsell / drop

  • Sanity-checking the upcoming invoice

  • Quarterly reviews

Pricing assumes Call Tracking Starter ($50 base + 5 numbers + 250 mins bundled; $3/local number, $5/toll-free number, $0.05/local minute, $0.08/toll-free minute over bundle). Edit PRICING_* constants in server.py if you're on a different plan.
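Under those Starter-plan assumptions, the cycle estimate can be sketched as follows. This is a minimal sketch, not the server's actual code: the constant names are hypothetical stand-ins for the PRICING_* constants in server.py, and applying the number bundle to local numbers first is an assumption.

```python
# Hypothetical PRICING_* constants mirroring the Starter-plan figures above;
# the actual names and values in server.py may differ.
PRICING_BASE = 50.00            # $50/month base
PRICING_BUNDLED_NUMBERS = 5     # numbers included in the base
PRICING_BUNDLED_MINUTES = 250   # minutes included in the base
PRICING_LOCAL_NUMBER = 3.00     # per extra local number
PRICING_TOLLFREE_NUMBER = 5.00  # per extra toll-free number
PRICING_LOCAL_MINUTE = 0.05     # per local minute over the bundle
PRICING_TOLLFREE_MINUTE = 0.08  # per toll-free minute over the bundle

def estimate_cycle_total(local_numbers, tollfree_numbers, minutes):
    """Illustrative estimate: base + number overage + minute overage.

    Applies the number bundle to local numbers first, and bills all
    overage minutes at the local rate (matching the tool's caveat that
    toll-free minute pricing is not yet differentiated).
    """
    total = PRICING_BASE
    extra_local = max(0, local_numbers - PRICING_BUNDLED_NUMBERS)
    remaining_bundle = max(0, PRICING_BUNDLED_NUMBERS - local_numbers)
    extra_tollfree = max(0, tollfree_numbers - remaining_bundle)
    total += extra_local * PRICING_LOCAL_NUMBER
    total += extra_tollfree * PRICING_TOLLFREE_NUMBER
    total += max(0, minutes - PRICING_BUNDLED_MINUTES) * PRICING_LOCAL_MINUTE
    return round(total, 2)
```

For example, 6 local numbers, 1 toll-free number, and 300 minutes would come to $50 + $3 + $5 + 50 x $0.05 = $60.50 under these assumptions.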

Args:

  • account_id: Auto-resolves if omitted.

  • days: Lookback window in days (default 30, roughly one cycle). Ignored if start_date provided.

  • start_date: 'YYYY-MM-DD'.

  • end_date: 'YYYY-MM-DD' (defaults to today).
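How the window parameters interact can be sketched with a hypothetical helper (resolve_window is not part of the server; it just mirrors the documented precedence: start_date overrides days, and end_date defaults to today):

```python
from datetime import date, timedelta

def resolve_window(days=30, start_date=None, end_date=None):
    """Resolve the reporting window per the documented precedence."""
    end = date.fromisoformat(end_date) if end_date else date.today()
    if start_date:
        start = date.fromisoformat(start_date)  # explicit start wins over days
    else:
        start = end - timedelta(days=days)      # otherwise look back `days`
    return start.isoformat(), end.isoformat()
```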

Returns:

  • agency: plan + totals + bundle utilization + cycle estimate

  • by_company[]: each company's minutes, active numbers, cost share (sorted by cost share descending)

  • biggest_cost_driver: name of the top company

  • partial_failures[]: per-company API errors. Each entry carries partial_calls_before_failure, partial_minutes_before_failure, partial_local_numbers, partial_tollfree_numbers so an under-reporting agency_total is observable, not silent.

  • notes: caveats about the cost model (toll-free minute pricing not yet differentiated; SMS not included).
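Assembled from the documented field names, a response might look like this (illustrative shape only; the company name, numbers, and dollar values are invented, not real output):

```python
# Illustrative payload shape; field names come from the Returns docs above,
# all values are made up for the example.
example = {
    "agency": {
        "plan": "Call Tracking Starter",
        "estimated_cycle_total": 187.40,
    },
    "by_company": [
        {"name": "Acme Dental", "minutes": 612, "active_numbers": 4, "cost_share": 96.10},
    ],
    "biggest_cost_driver": "Acme Dental",
    "partial_failures": [],
    "notes": [
        "toll-free minute pricing not yet differentiated",
        "SMS not included",
    ],
}
```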

Cost shares sum exactly to agency.estimated_cycle_total via largest-remainder rounding.
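The largest-remainder guarantee can be sketched over integer cents (a minimal sketch, not the server's actual implementation; largest_remainder_shares is a hypothetical name):

```python
def largest_remainder_shares(raw_costs, total_cents):
    """Split total_cents proportionally to raw_costs so shares sum exactly.

    Floor every proportional share, then hand the leftover cents to the
    entries with the largest fractional remainders.
    """
    raw_total = sum(raw_costs)
    exact = [c / raw_total * total_cents for c in raw_costs]
    floors = [int(e) for e in exact]
    shortfall = total_cents - sum(floors)
    # indices sorted by fractional remainder, largest first
    order = sorted(range(len(exact)),
                   key=lambda i: exact[i] - floors[i],
                   reverse=True)
    for i in order[:shortfall]:
        floors[i] += 1
    return floors
```

Plain proportional rounding can drift off the total by a cent or two; this scheme makes the per-company shares reconcile exactly with the invoice estimate.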

Input Schema

Name        Required  Description  Default
account_id  No
days        No
start_date  No
end_date    No

Output Schema

Name    Required  Description  Default
result  Yes

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully discloses behavior: it's a read-only summary, aggregates data, assumes specific pricing, and provides detailed return fields including partial failures and rounding. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured: one-line summary, then aggregation details, use cases, pricing, parameter docs, and return format. Every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no annotations, the description covers purpose, parameters, returns, edge cases (partial failures), assumptions, and caveats. It is self-contained and sufficient for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, but the description explains each parameter's meaning and default behavior (e.g., account_id auto-resolves, days ignored if start_date provided). This exceeds structural schema info.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Per-company cost-attribution summary for the current cycle', specifying verb and resource. It differentiates from sibling tools like call_summary by focusing on cost allocation rather than raw call data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit use cases (client renegotiation, invoice sanity-checking, quarterly reviews) but does not say when not to use the tool or name alternatives. Context is clear, but exclusions are missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/pghdma/callrail-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.