
US Government Open Data MCP

cfpb_complaint_trends

Read-only

Analyze consumer complaint trends over time using CFPB data to identify patterns in financial products, issues, and companies.

Instructions

Get complaint trends over time using the CFPB Trends API. Uses dedicated /trends endpoint with lens-based aggregation. REQUIRED: trend_interval ('month', 'quarter', or 'year') — the API rejects requests without it. Lens options: 'overview' (total counts), 'product' (by product), 'issue' (by issue), 'tags' (by tag). Sub-lens allows drilling into sub-categories within the lens.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| lens | No | Trend lens | overview |
| trend_interval | Yes | Time bucket size for trend aggregation: 'month', 'quarter', or 'year' | |
| sub_lens | No | Sub-lens drill-down | |
| sub_lens_depth | No | Top N sub-aggregations to return | 10 |
| focus | No | Focus charts on a specific product or company name | |
| product | No | Financial product: 'Mortgage', 'Debt collection', etc. | |
| company | No | Company name: 'Wells Fargo', 'Equifax', etc. | |
| state | No | Two-letter state code: 'CA', 'TX', 'NY' | |
| issue | No | Issue type filter | |
| date_received_min | No | Start date (YYYY-MM-DD) | |
| date_received_max | No | End date (YYYY-MM-DD) | |
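The parameters above map directly onto the query string of the CFPB Consumer Complaint Database's /trends endpoint. A minimal sketch of building such a request URL, assuming the public CCDB v1 base path (the `trends_url` helper is illustrative, not part of the tool):

```python
from urllib.parse import urlencode

# Assumed base path of the public CFPB Consumer Complaint Database API.
BASE = "https://www.consumerfinance.gov/data-research/consumer-complaints/search/api/v1"

def trends_url(trend_interval, lens="overview", **filters):
    """Build a /trends URL; trend_interval is mandatory — the API
    rejects requests without it."""
    params = {"lens": lens, "trend_interval": trend_interval, **filters}
    return f"{BASE}/trends?{urlencode(params)}"

# Monthly mortgage-complaint trends in California, drilled down by issue.
url = trends_url(
    "month",
    lens="product",
    sub_lens="issue",
    product="Mortgage",
    state="CA",
    date_received_min="2023-01-01",
)
print(url)
```

The URL can then be fetched with any HTTP client; the response is a JSON document of date-bucketed aggregations keyed by the chosen lens.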
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=true, and the description aligns with this by framing the operation as data retrieval ('Get'). It adds useful context beyond the annotations, such as naming the dedicated /trends endpoint and noting that the API rejects requests without trend_interval, but it does not detail behavioral aspects the annotations leave uncovered, such as rate limits, authentication requirements, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by essential usage details in a structured manner. Each sentence earns its place by clarifying requirements, lens options, and sub-lens functionality without unnecessary elaboration, making it highly efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity, with 11 parameters and no output schema, the description is largely complete for guiding usage: it explains key concepts like lens-based aggregation and the mandatory trend_interval parameter. It could be more complete by describing the typical output format or giving an example response, which would compensate for the missing output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all parameters thoroughly. The description adds value by explaining the purpose of lens options (e.g., 'overview' for total counts) and the role of sub-lens for drilling down, providing semantic context that complements the schema's technical details without redundancy.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get complaint trends over time using the CFPB Trends API.' It specifies the verb ('Get'), resource ('complaint trends'), and method ('using the CFPB Trends API'), distinguishing it from sibling tools like cfpb_complaint_aggregations or cfpb_search_complaints by focusing on temporal trends rather than static aggregations or search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage by explaining the lens-based aggregation system and flagging trend_interval as REQUIRED, which helps an agent know when this tool applies. However, it does not explicitly state when not to use it or name alternatives for other kinds of complaint data, such as cfpb_complaint_detail for individual complaints.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
