
fec-mcp-server

get_receipts

Retrieve itemized campaign contributions to analyze donor patterns, filter by amount or contributor type, and classify PAC donations for campaign finance research.

Instructions

Retrieve itemized contributions (Schedule A) received by a campaign committee. Shows individual and organizational donors, amounts, and contributor details. Automatically classifies PAC contributions by type (Corporate, Labor, Trade, Leadership PAC) for deeper analysis. Supports filtering by amount threshold for researching significant contributions and campaign finance patterns.
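The page does not show how the automatic PAC classification works. As a minimal sketch, assuming the FEC's published committee metadata codes (organization_type "C" for corporation, "L" for labor organization, "T" for trade association, and designation "D" for leadership PACs), a classifier might look like the following. The function name and mapping are hypothetical illustrations, not this server's actual code:

```python
from typing import Optional

# Hypothetical sketch: classify a PAC from FEC committee metadata.
# The field names (designation, organization_type) mirror the openFEC
# committee schema; the mapping itself is an assumption.
ORG_TYPE_LABELS = {
    "C": "Corporate",  # corporation
    "L": "Labor",      # labor organization
    "T": "Trade",      # trade association
}

def classify_pac(designation: Optional[str],
                 organization_type: Optional[str]) -> str:
    """Return a coarse PAC category from FEC committee metadata."""
    if designation == "D":  # FEC designation "D" marks a leadership PAC
        return "Leadership PAC"
    return ORG_TYPE_LABELS.get(organization_type or "", "Other PAC")
```

For example, a committee with designation "B" and organization_type "L" would classify as "Labor" under this sketch.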

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| committee_id | Yes | FEC committee ID (e.g., "C00401224") | |
| min_amount | No | Minimum contribution amount to filter | $1,000 |
| two_year_transaction_period | No | Two-year period (e.g., 2024 covers 2023-2024) | |
| cycle | No | Alias for two_year_transaction_period to align with finance cycle filters | |
| contributor_type | No | Filter by contributor type: "individual" or "committee" (PAC) | |
| include_notable | No | Include flagged-first notable analysis block in output | true |
| fuzzy_threshold | No | Fuzzy match confidence threshold for reference list matching | 90 |
| limit | No | Number of results to return (max: 100) | 20 |
| sort_by | No | Sort results by "amount" (descending) or "date" (most recent first) | amount |
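Based on the schema above, an agent's call to get_receipts might assemble its arguments as follows. This is a hypothetical payload builder that applies the documented defaults and the 100-result cap; it is not the server's implementation, and the helper name is invented:

```python
# Hypothetical helper: build a get_receipts argument payload using the
# documented defaults (min_amount 1000, include_notable True,
# fuzzy_threshold 90, limit 20, sort_by "amount") and the max limit of 100.
def build_get_receipts_args(committee_id: str, **overrides) -> dict:
    if not committee_id.startswith("C"):
        raise ValueError("FEC committee IDs look like 'C00401224'")
    args = {
        "committee_id": committee_id,
        "min_amount": 1000,
        "include_notable": True,
        "fuzzy_threshold": 90,
        "limit": 20,
        "sort_by": "amount",
    }
    args.update(overrides)
    args["limit"] = min(args["limit"], 100)  # schema caps results at 100
    return args
```

For instance, `build_get_receipts_args("C00401224", limit=250)` would clamp the limit back down to 100 before the call is sent.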
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It describes the automatic classification of PAC contributions and the filtering capabilities, but does not disclose behavioral traits such as rate limits, authentication needs, pagination, or error handling. It adds some context but leaves gaps for a read operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with core purpose and efficiently structured into three sentences covering retrieval, classification, and filtering. It avoids redundancy but could be slightly tighter by combining some concepts.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read tool with 9 parameters, 100% schema coverage, and no output schema, the description adequately covers purpose and some features. However, it lacks details on the return format, error cases, and performance expectations, leaving room for improvement in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds marginal value by mentioning amount-threshold filtering and PAC classification, but does not provide additional semantics beyond what the schema specifies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool retrieves itemized contributions (Schedule A) received by a campaign committee, specifying the resource (contributions/donors) and key details such as donor types and amounts. It distinguishes itself from sibling tools by focusing on receipts rather than finances, flags, disbursements, or searches.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions filtering by amount threshold and researching patterns, but provides no explicit guidance on when to use this tool versus alternatives such as get_committee_finances or search_donors. It also lacks context on prerequisites and exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/sh-patterson/fec-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.