Glama
lzinga

US Government Open Data MCP

fec_search_committees

Search for political committees by name, state, or type to identify PACs, campaign committees, and party committees for campaign finance investigations.

Instructions

Search for political committees (PACs, campaign committees, party committees) by name, state, or type. CRITICAL for investigations: Use committee_type='Q' (Qualified PAC) + name='Company Name' to find corporate PAC IDs. Example: name='Wells Fargo', committee_type='Q' returns C00034595 (Wells Fargo Employee PAC). Then use fec_committee_disbursements with the committee_id to trace money to specific politicians.

Input Schema

All parameters are optional:

name: Committee name to search for
state: Two-letter state code
committee_type: Committee type: 'P' (Presidential), 'H' (House), 'S' (Senate), 'N' (PAC, nonqualified), 'Q' (PAC, qualified), 'X' (Party, nonqualified), 'Y' (Party, qualified), 'I' (Independent Expenditor), 'O' (Super PAC)
cycle: Two-year election cycle, e.g. 2024
page: Page number (default: 1)
per_page: Results per page (default: 20)
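Putting the schema together with the guidance in the tool description, a corporate-PAC lookup would pass arguments shaped like the following. The values echo the Wells Fargo example from the description; cycle, page, and per_page are optional and shown at their documented example or default values:

```json
{
  "name": "Wells Fargo",
  "committee_type": "Q",
  "cycle": 2024,
  "page": 1,
  "per_page": 20
}
```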
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of disclosure. It effectively conveys the key behavioral traits: it is a search tool (implying read-only, non-destructive behavior), it gives a concrete example of expected output (returns C00034595), and it hints at pagination through the page and per_page parameters. However, it does not explicitly mention rate limits, authentication requirements, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with three sentences. The first sentence states the purpose, the second provides critical usage guidance with an example, and the third connects to a sibling tool. Each sentence earns its place, though the example could be slightly more concise. It's front-loaded with the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and 6 parameters with full schema coverage, the description does well: it explains the tool's purpose, provides usage guidance with an example, and links to a sibling tool. However, it doesn't describe the return format or pagination behavior, which would be helpful given the lack of output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 6 parameters thoroughly. The description adds minimal value beyond the schema: it mentions searching by 'name, state, or type' (which the schema covers) and provides an example with 'committee_type='Q'' and 'name='Wells Fargo''. This reinforces but doesn't significantly expand on schema information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches for political committees by specific criteria (name, state, type), explicitly lists the resource (PACs, campaign committees, party committees), and distinguishes itself from siblings by focusing on FEC data rather than other datasets like BEA or BLS. The verb 'Search' is specific and action-oriented.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'CRITICAL for investigations' with a specific example (committee_type='Q' + name='Company Name' to find corporate PAC IDs). It also names an alternative tool (fec_committee_disbursements) for the next step in a workflow, clearly indicating a sequence of operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
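The search-then-trace sequence the description outlines can be illustrated offline. The following Python sketch mimics the filtering semantics the parameters imply (substring match on name, exact match on state and type, offset pagination); it is not the server's actual implementation, and the sample records are hypothetical stand-ins except for C00034595, which the tool description itself cites:

```python
# Hypothetical sample of committee records. C00034595 is the Wells Fargo
# Employee PAC cited in the tool description; the other record is made up.
SAMPLE_COMMITTEES = [
    {"committee_id": "C00034595", "name": "WELLS FARGO EMPLOYEE PAC",
     "state": "CA", "committee_type": "Q"},
    {"committee_id": "C00999999", "name": "EXAMPLE PARTY COMMITTEE",
     "state": "TX", "committee_type": "Y"},
]

def fec_search_committees(committees, name=None, state=None,
                          committee_type=None, page=1, per_page=20):
    """Filter committees the way the tool's parameters suggest:
    case-insensitive substring match on name, exact match on state
    and committee_type, then offset-based pagination."""
    hits = [c for c in committees
            if (name is None or name.upper() in c["name"])
            and (state is None or c["state"] == state)
            and (committee_type is None or c["committee_type"] == committee_type)]
    start = (page - 1) * per_page
    return hits[start:start + per_page]

# Step 1 of the workflow: find the corporate PAC's committee ID.
results = fec_search_committees(SAMPLE_COMMITTEES,
                                name="Wells Fargo", committee_type="Q")
print(results[0]["committee_id"])  # → C00034595
```

In a real session, the returned committee_id would then be passed to fec_committee_disbursements as step 2, exactly as the description instructs.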


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/lzinga/us-government-open-data-mcp'
