openfec-mcp-server
Server Details
Access FEC campaign finance data. Query data about candidates, money trails, and election filings.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: cyanheads/openfec-mcp-server
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 9 of 9 tools scored. Lowest: 3.4/5.
Each tool has a clearly distinct purpose targeting specific FEC data types (calendar, elections, candidates, committees, contributions, disbursements, expenditures, filings, legal), with no overlap in functionality. The descriptions explicitly differentiate the resources and use cases, making tool selection unambiguous.
All tools follow a consistent 'openfec_verb_noun' pattern (e.g., openfec_lookup_calendar, openfec_search_candidates), using snake_case throughout. The verbs 'lookup' and 'search' are applied appropriately based on the action (lookup for reference data, search for queryable data), maintaining a predictable naming convention.
With 9 tools, the server is well-scoped for the FEC data domain, covering key areas like candidates, committees, financial transactions, and legal documents. Each tool earns its place by addressing a distinct aspect of campaign finance data, avoiding bloat while providing comprehensive coverage.
The tool set offers complete coverage of the FEC domain, including CRUD-like operations (search/lookup for all major data types), financial summaries, and legal documents. There are no obvious gaps; agents can access all essential campaign finance information without dead ends, supporting workflows like tracking funding, spending, and compliance.
Available Tools
9 tools

openfec_lookup_calendar — B · Read-only · Idempotent
Look up FEC calendar events, filing deadlines, and election dates.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | events = FEC calendar events. filing_deadlines = report due dates. election_dates = upcoming/past elections. | events |
| page | No | Page number (1-indexed). Default 1. | |
| state | No | Two-letter state code (e.g., AZ, CA). Primarily for election_dates mode. | |
| office | No | Office sought (H=House, S=Senate, P=President). Election dates mode. | |
| category | No | Calendar category ID for events mode. Common values: "32" (reporting deadlines), "33" (election dates), "34" (quarterly filings). Events mode only. | |
| max_date | No | Latest date (YYYY-MM-DD). | |
| min_date | No | Earliest date (YYYY-MM-DD). | |
| per_page | No | Results per page. Default 20, max 100. | |
| description | No | Full-text event description search. Events mode. | |
| report_type | No | Report type code (e.g. "Q1", "Q2"). Filing deadlines mode only. | |
| report_year | No | Report year. Filing deadlines mode. | |
| election_year | No | Election year. Election dates mode. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes | Calendar records — events, filing deadlines, or election dates depending on mode. |
| pagination | Yes | Page-based pagination metadata. |
| search_criteria | No | Echo of the search filters that produced this result set. Populated when results are empty to help diagnose why nothing matched. |
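To illustrate the mode-specific parameters above, here is a minimal client-side sketch. The `validate_calendar_args` helper is hypothetical (not part of the server); it only checks constraints stated in the input table before an agent would issue the call.

```python
import re

def validate_calendar_args(args: dict) -> dict:
    """Check a few constraints documented in the input table."""
    mode = args.get("mode", "events")  # "events" is the documented default
    assert mode in {"events", "filing_deadlines", "election_dates"}, "unknown mode"
    per_page = args.get("per_page", 20)  # default 20, max 100 per the table
    assert 1 <= per_page <= 100, "per_page must be between 1 and 100"
    for key in ("min_date", "max_date"):
        if key in args:
            assert re.fullmatch(r"\d{4}-\d{2}-\d{2}", args[key]), f"{key} must be YYYY-MM-DD"
    return args

# Upcoming Arizona Senate election dates:
example = validate_calendar_args({
    "mode": "election_dates",
    "state": "AZ",
    "office": "S",
    "min_date": "2026-01-01",
})
```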
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, so the agent knows this is a safe, repeatable read operation. The description adds no behavioral details beyond this: no rate limits, authentication needs, or pagination behavior. Since it doesn't contradict the annotations but adds minimal extra context, a baseline score of 3 is appropriate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence: 'Look up FEC calendar events, filing deadlines, and election dates.' It's front-loaded with the core purpose, has zero wasted words, and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (12 parameters) and rich annotations (readOnlyHint, idempotentHint), plus 100% schema coverage and an output schema, the description is reasonably complete. It states the purpose clearly but offers no guidance on when to use this tool versus its siblings. With structured data handling most details, those gaps are minor.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with detailed parameter documentation in the input schema. The description mentions 'events, filing deadlines, and election dates' which aligns with the 'mode' parameter but adds no additional semantic context beyond what the schema provides. Given high schema coverage, the baseline score of 3 is justified.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Look up FEC calendar events, filing deadlines, and election dates.' It uses specific verbs ('look up') and identifies the resource (FEC calendar data). However, it doesn't explicitly differentiate from sibling tools like 'openfec_lookup_elections' which might overlap with election-related data, keeping it from a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools or clarify whether this is the primary tool for calendar data versus the other lookup/search tools, leaving the agent to infer the right context on its own.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openfec_lookup_elections — A · Read-only · Idempotent
Look up federal election races and candidate financial summaries. Find who's running in a race with fundraising totals, or get an aggregate race summary.
| Name | Required | Description | Default |
|---|---|---|---|
| zip | No | ZIP code — finds races covering this ZIP. Search mode only. | |
| mode | No | search = candidates in a race with financial totals. summary = aggregate race financial summary. | search |
| cycle | Yes | Election cycle year (even years only, e.g. 2024). | |
| state | No | Two-letter US state code (e.g., AZ, CA). Required for senate/house unless zip is provided. | |
| office | Yes | Office sought: president, senate, or house. | |
| district | No | Two-digit district number (e.g. "07"). Required for house unless zip is provided. | |
| election_full | No | Expand to full election period (4yr president, 6yr senate, 2yr house). Default true. Ignored for ZIP-based searches. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes | Election race records — candidates with financial data, or aggregate summary. |
| pagination | Yes | Page-based pagination metadata. |
| search_criteria | No | Echo of the search filters that produced this result set. Populated when results are empty to help diagnose why nothing matched. |
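The input table above carries several conditional requirements (state and district are only required in some cases). A hedged sketch of how an agent might enforce them client-side before calling — the `validate_election_args` helper is hypothetical:

```python
def validate_election_args(args: dict) -> dict:
    """Enforce the conditional requirements documented in the input table."""
    assert args["cycle"] % 2 == 0, "cycle must be an even year"  # cycle is required
    office = args["office"]                                      # office is required
    assert office in {"president", "senate", "house"}, "unknown office"
    has_zip = "zip" in args
    if office in {"senate", "house"} and not has_zip:
        assert "state" in args, "state is required for senate/house unless zip is given"
    if office == "house" and not has_zip:
        assert "district" in args, "district is required for house unless zip is given"
    return args

# Who's running for Arizona's 7th House seat in 2024, with fundraising totals:
example = validate_election_args({
    "mode": "search", "cycle": 2024, "office": "house",
    "state": "AZ", "district": "07",
})
```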
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already declare readOnlyHint=true and idempotentHint=true, so the agent knows this is a safe, repeatable read operation. The description adds useful context about what data is returned (financial summaries, fundraising totals, aggregate race summaries) but doesn't disclose rate limits, authentication requirements, or specific behavioral traits beyond what annotations provide. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each earn their place. The first sentence establishes the core purpose, and the second provides concrete use cases. No wasted words, and the most important information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has comprehensive annotations (readOnlyHint, idempotentHint), 100% schema description coverage, and an output schema exists, the description provides complete enough context. It clearly explains what the tool does and the two main use cases, which complements the structured data well without needing to explain return values or parameter details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already documents all 7 parameters thoroughly. The description mentions the two modes (search and summary) which aligns with the mode parameter's enum values, but doesn't add significant semantic meaning beyond what the schema provides. The baseline of 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Look up federal election races and candidate financial summaries') and distinguishes this tool from siblings by focusing on election races rather than candidates, committees, contributions, or other entities. It provides two concrete use cases: finding who's running with fundraising totals or getting aggregate race summaries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool ('Find who's running in a race with fundraising totals, or get an aggregate race summary'), but doesn't explicitly mention when not to use it or name specific alternatives among the sibling tools. The input schema's mode parameter description helps differentiate between search and summary modes, but the tool description itself lacks explicit sibling comparison.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openfec_search_candidates — A · Read-only · Idempotent
Find federal candidates by name, state, office, party, or cycle. Retrieve a specific candidate by FEC ID with financial totals. Candidate IDs start with H (House), S (Senate), or P (President) followed by digits.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (1-indexed). Default 1. | |
| cycle | No | Two-year election cycle (even year, e.g., 2024). | |
| party | No | Three-letter party code (e.g., DEM, REP, LIB). | |
| query | No | Full-text candidate name search. | |
| state | No | Two-letter US state code (e.g., AZ, CA). | |
| office | No | Filter by office: H=House, S=Senate, P=President. | |
| district | No | Two-digit district number for House candidates. | |
| per_page | No | Results per page. Default 20, max 100. | |
| candidate_id | No | FEC candidate ID (e.g., P00003392, H2CO07170). When provided, returns a single candidate with full detail. | |
| election_year | No | Specific election year the candidate ran in. | |
| include_totals | No | Include financial totals (receipts, disbursements, cash on hand). Defaults to true when fetching by candidate_id. | |
| candidate_status | No | Candidate status: C=present, F=future, N=not yet, P=prior. | |
| has_raised_funds | No | Only candidates whose committee has received receipts. | |
| incumbent_challenge | No | Incumbent status: I=incumbent, C=challenger, O=open seat. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| totals | No | Financial totals (receipts, disbursements, cash_on_hand) when include_totals is true. |
| candidates | Yes | Candidate records with candidate_id, name, party, state, office, cycles, etc. |
| pagination | Yes | Page-based pagination metadata. |
| search_criteria | No | Echo of the search filters that produced this result set. Populated when results are empty to help diagnose why nothing matched. |
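Because this tool serves both full-text search and ID lookup, an agent can route on the documented ID shape. A sketch (the `candidate_args` helper is hypothetical; the nine-character alphanumeric shape is inferred from the examples P00003392 and H2CO07170, not stated by the tool):

```python
import re

def candidate_args(query_or_id: str, **filters) -> dict:
    """Route between candidate_id lookup and full-text search.

    IDs start with H, S, or P; the nine-character alphanumeric tail
    is an assumption based on the example IDs in the description."""
    if re.fullmatch(r"[HSP][0-9A-Z]{8}", query_or_id):
        return {"candidate_id": query_or_id, **filters}
    return {"query": query_or_id, **filters}

# Name search with filters vs. direct ID fetch (totals default to true for IDs):
search = candidate_args("Smith", state="CO", office="H", cycle=2024)
by_id = candidate_args("P00003392")
```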
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, indicating safe, repeatable operations. The description adds valuable context beyond annotations: it specifies that candidate IDs start with H, S, or P followed by digits, and notes that financial totals are included when fetching by candidate_id. This enhances understanding without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by specific details in the second. Both sentences are essential, providing clear value without redundancy. It efficiently conveys key information in a compact format.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (14 parameters, 100% schema coverage, annotations, and an output schema), the description is complete enough. It covers the tool's purpose, key behavioral details (like candidate ID format and financial totals), and relies on structured fields for parameter and output documentation, avoiding unnecessary repetition.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all 14 parameters. The description adds minimal semantic context, such as the structure of candidate IDs and the inclusion of financial totals with candidate_id, but does not significantly enhance parameter understanding beyond what the schema provides. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Find', 'Retrieve') and resources ('federal candidates'), and distinguishes it from siblings by focusing on candidate search rather than committees, contributions, or other entities. It explicitly mentions retrieving by FEC ID with financial totals, which differentiates it from general search operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through examples (e.g., 'by name, state, office, party, or cycle') but does not explicitly state when to use this tool versus alternatives like openfec_lookup_elections or openfec_search_committees. It provides context for filtering but lacks explicit guidance on tool selection among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openfec_search_committees — A · Read-only · Idempotent
Find political committees (campaign, PAC, Super PAC, party) by name, type, candidate affiliation, or state. Retrieve a specific committee by FEC ID. Committee IDs start with C followed by digits (e.g., C00358796).
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (1-indexed). Default 1. | |
| cycle | No | Two-year election cycle (even year). | |
| party | No | Three-letter party code (e.g., DEM, REP). | |
| query | No | Full-text committee name search. | |
| state | No | Two-letter state code. | |
| per_page | No | Results per page. Default 20, max 100. | |
| designation | No | Committee designation. A (authorized), B (lobbyist PAC), D (leadership PAC), J (joint fundraiser), P (principal campaign), U (unauthorized). | |
| candidate_id | No | Find committees linked to this candidate (authorized, leadership, joint fundraising). | |
| committee_id | No | FEC committee ID (e.g., C00358796). Starts with 'C' followed by digits. Returns a single committee with full detail. | |
| committee_type | No | Committee type code. Common: H (House), S (Senate), P (Presidential), O (Super PAC), N (PAC nonqualified), Q (PAC qualified), X (Party nonqualified), Y (Party qualified). | |
| treasurer_name | No | Full-text treasurer name search. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| committees | Yes | Committee records with committee_id, name, type, designation, party, state, etc. |
| pagination | Yes | Page-based pagination metadata. |
| search_criteria | No | Echo of the search filters that produced this result set. Populated when results are empty to help diagnose why nothing matched. |
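The same search-vs-lookup routing applies here, keyed on the committee ID format. A sketch (the `committee_args` helper is hypothetical; the eight-digit length is inferred from the example C00358796, while the description only says "C followed by digits"):

```python
import re

def committee_args(query_or_id: str, **filters) -> dict:
    """Route between committee_id lookup and full-text name search."""
    if re.fullmatch(r"C\d{8}", query_or_id):  # length assumed from the example
        return {"committee_id": query_or_id, **filters}
    return {"query": query_or_id, **filters}

# Super PACs in California (type code O per the table) vs. a direct ID fetch:
super_pacs = committee_args("victory fund", committee_type="O", state="CA")
by_id = committee_args("C00358796")
```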
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, indicating safe, repeatable operations. The description adds valuable context beyond this: it specifies that committee IDs start with 'C' followed by digits (e.g., C00358796), which aids in correct usage. However, it does not mention behavioral details like pagination defaults or rate limits, leaving some gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by specific details in the second. Both sentences are essential, with zero waste or redundancy, making it highly efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (11 parameters) and the presence of rich annotations (readOnlyHint, idempotentHint) and an output schema, the description is complete enough. It covers the purpose, usage context, and key behavioral details (committee ID format), without needing to explain return values or repeat schema information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all 11 parameters. The description adds minimal parameter semantics beyond the schema, only noting the format for committee_id (starts with 'C' followed by digits). This meets the baseline of 3, as the schema does the heavy lifting, but the description provides limited extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Find', 'Retrieve') and resource ('political committees') with specific examples of committee types (campaign, PAC, Super PAC, party). It distinguishes this tool from siblings by focusing on committees rather than candidates, contributions, or other entities, making the purpose highly specific and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: to search committees by various criteria (name, type, candidate affiliation, state) or retrieve a specific one by FEC ID. However, it does not explicitly mention when not to use it or name alternatives among sibling tools (e.g., openfec_search_candidates for candidate-related searches), which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openfec_search_contributions — A · Read-only · Idempotent
Search itemized individual contributions (Schedule A) or get aggregate breakdowns by size, state, employer, or occupation. Use to answer "who is funding this committee?" Itemized mode requires a committee_id. Aggregate by_size/by_state can use candidate_id instead.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | Query mode. "itemized" returns individual contribution records (keyset pagination). "by_size" aggregates by contribution size bucket. "by_state" aggregates by contributor state. "by_employer" aggregates by employer. "by_occupation" aggregates by occupation. | itemized |
| sort | No | Sort field. Itemized only. | |
| cycle | No | Two-year election cycle (e.g., 2024). Even years only. Defaults to current cycle for itemized mode. | |
| cursor | No | Opaque pagination cursor from a previous response. Itemized mode only (keyset pagination). | |
| max_date | No | Latest contribution date (YYYY-MM-DD). Itemized only. | |
| min_date | No | Earliest contribution date (YYYY-MM-DD). Itemized only. | |
| per_page | No | Results per page (max 100). | |
| max_amount | No | Maximum contribution amount in dollars. Itemized only. | |
| min_amount | No | Minimum contribution amount in dollars. Itemized only. | |
| candidate_id | No | Candidate ID. Enables by_size and by_state aggregates without a committee_id. | |
| committee_id | No | Receiving committee ID (e.g., C00703975). | |
| is_individual | No | Only individual contributions (excludes committee-to-committee transfers). Itemized only. | |
| contributor_zip | No | ZIP code prefix (starts-with match). Itemized only. | |
| contributor_city | No | Contributor city. Itemized only. | |
| contributor_name | No | Full-text donor name search. Itemized only. | |
| contributor_state | No | Two-letter state code (e.g., CA). Itemized only. | |
| contributor_employer | No | Full-text employer search. Itemized only. | |
| contributor_occupation | No | Full-text occupation search. Itemized only. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| count | No | Total result count (may be approximate for itemized). |
| results | Yes | Contribution records (itemized) or aggregate rows. |
| pagination | No | Page-based pagination info (aggregate modes only). |
| next_cursor | No | Pagination cursor for the next page of itemized results. Null when no more pages. |
| search_criteria | No | Echo of the search filters that produced this result set. Populated when results are empty to help diagnose why nothing matched. |
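Itemized mode uses keyset pagination via `next_cursor` rather than page numbers. A sketch of draining all pages, assuming a `call_tool(name, args)` function supplied by the MCP client (the function name and signature here are hypothetical stand-ins):

```python
def iter_contributions(call_tool, committee_id: str, **filters):
    """Yield every itemized contribution for a committee,
    following next_cursor until it comes back null."""
    args = {"mode": "itemized", "committee_id": committee_id,
            "per_page": 100, **filters}
    while True:
        page = call_tool("openfec_search_contributions", args)
        yield from page["results"]
        cursor = page.get("next_cursor")
        if not cursor:
            return
        args["cursor"] = cursor
```

Aggregate modes (`by_size`, `by_state`, etc.) return page-based pagination instead, so this cursor loop applies only to itemized results.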
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond the annotations. While annotations indicate read-only and idempotent operations, the description reveals that itemized mode uses keyset pagination, different modes have different parameter requirements, and there are specific ID requirements for different query types. This provides practical implementation guidance not captured in annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences that directly address purpose and usage. The first sentence establishes the dual functionality, and the second provides critical implementation guidance. Every word serves a clear purpose with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of comprehensive annotations (readOnlyHint, idempotentHint), 100% schema description coverage, and an output schema, the description provides exactly what's needed. It explains the tool's purpose, when to use it, and key behavioral considerations without duplicating information available in structured fields.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already thoroughly documents all 18 parameters. The description adds minimal parameter-specific information beyond the schema, mainly noting that committee_id is required for itemized mode and candidate_id can be used for some aggregate modes. This meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches itemized individual contributions (Schedule A) or gets aggregate breakdowns by specific categories. It specifies the resource (contributions) and distinguishes from siblings by focusing on funding analysis rather than other FEC data types like candidates or committees.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('to answer "who is funding this committee?"') and provides clear guidance on parameter requirements for different modes (itemized mode requires committee_id, aggregate modes can use candidate_id). It distinguishes usage scenarios based on the mode parameter.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openfec_search_disbursements — A · Read-only · Idempotent
Search itemized committee spending (Schedule B) or get aggregate breakdowns by purpose or recipient. All modes require a committee_id. Use to answer "what is this committee spending money on?" or "who is receiving payments from this committee?"
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | Query mode. "itemized" returns individual disbursement records (keyset pagination). "by_purpose" aggregates by purpose category. "by_recipient" aggregates by recipient name. "by_recipient_id" aggregates by recipient committee ID (committee-to-committee transfers). | itemized |
| sort | No | Sort field. Itemized only. | |
| cycle | No | Two-year election cycle (e.g., 2024). Even years only. | |
| cursor | No | Opaque pagination cursor from a previous response. Itemized mode only (keyset pagination). | |
| max_date | No | Latest disbursement date (YYYY-MM-DD). Itemized only. | |
| min_date | No | Earliest disbursement date (YYYY-MM-DD). Itemized only. | |
| per_page | No | Results per page (max 100). | |
| max_amount | No | Maximum amount in dollars. Itemized only. | |
| min_amount | No | Minimum amount in dollars. Itemized only. | |
| committee_id | Yes | Spending committee ID (e.g., C00703975). Required for all modes. | |
| recipient_city | No | Recipient city. Itemized only. | |
| recipient_name | No | Full-text payee name search. Itemized only. | |
| recipient_state | No | Recipient state. Itemized only. | |
| recipient_committee_id | No | Recipient committee ID (for committee-to-committee transfers). Itemized only. | |
| disbursement_description | No | Full-text description search (e.g., "media buy", "consulting"). Itemized only. | |
| disbursement_purpose_category | No | Purpose category code. Itemized only. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| count | No | Total result count (may be approximate for itemized). |
| results | Yes | Disbursement records (itemized) or aggregate rows. |
| pagination | No | Page-based pagination info (aggregate modes only). |
| next_cursor | No | Pagination cursor for the next page of itemized results. Null when no more pages. |
| search_criteria | No | Echo of the search filters that produced this result set. Populated when results are empty to help diagnose why nothing matched. |
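Unlike contributions, every mode of this tool requires a committee_id. A minimal argument-builder sketch reflecting that rule (the `disbursement_args` helper is hypothetical):

```python
VALID_MODES = {"itemized", "by_purpose", "by_recipient", "by_recipient_id"}

def disbursement_args(committee_id: str, mode: str = "itemized", **filters) -> dict:
    """Build arguments for openfec_search_disbursements.
    committee_id is required for all modes, per the input table."""
    assert committee_id, "committee_id is required for every mode"
    assert mode in VALID_MODES, f"unknown mode: {mode}"
    return {"committee_id": committee_id, "mode": mode, **filters}

# "What is this committee spending money on?" — aggregate by purpose:
by_purpose = disbursement_args("C00703975", mode="by_purpose", cycle=2024)
```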
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond the annotations. While annotations already declare readOnlyHint=true and idempotentHint=true, the description clarifies the tool's dual functionality (itemized vs. aggregate modes) and provides real-world use cases. It doesn't contradict annotations and adds meaningful operational context about what types of questions the tool can answer.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two focused sentences. The first sentence establishes core functionality, the second provides usage examples. Every word earns its place with zero redundancy or wasted text. It's front-loaded with the essential information about what the tool does.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the comprehensive annotations (readOnlyHint, idempotentHint), 100% schema description coverage, and presence of an output schema, the description provides exactly what's needed. It explains the tool's purpose, when to use it, and provides concrete examples without duplicating information available elsewhere in the structured data. The description is complete for this well-documented tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all 16 parameters thoroughly. The description doesn't add specific parameter semantics beyond what's in the schema, but it does provide high-level context about the different modes (itemized vs. aggregate) which helps understand parameter applicability. This meets the baseline expectation for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('search itemized committee spending' and 'get aggregate breakdowns') and identifies the resource ('Schedule B disbursements'). It distinguishes this tool from siblings by focusing specifically on committee spending rather than contributions, elections, filings, or other FEC data types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance with concrete examples ('answer "what is this committee spending money on?" or "who is receiving payments from this committee?"'). It specifies the mandatory requirement ('All modes require a committee_id') and implicitly distinguishes from siblings by focusing on disbursements rather than contributions, expenditures, or other data types.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openfec_search_expenditures (Read-only, Idempotent)
Search independent expenditures (Schedule E) — outside spending supporting or opposing federal candidates. Covers Super PACs, party committees, and other groups. Use itemized mode for individual expenditure records, or by_candidate for aggregated totals per candidate.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | Query mode. "itemized" returns individual expenditure records (keyset pagination). "by_candidate" returns aggregated totals per candidate by committee (page-based). | itemized |
| sort | No | Sort field. Itemized only. | |
| cycle | No | Two-year election cycle (e.g., 2024). Even years only. | |
| cursor | No | Opaque pagination cursor from a previous response. Itemized mode only (keyset pagination). | |
| max_date | No | Latest expenditure date (YYYY-MM-DD). Itemized only. | |
| min_date | No | Earliest expenditure date (YYYY-MM-DD). Itemized only. | |
| per_page | No | Results per page (max 100). | |
| is_notice | No | Only 24/48-hour notice filings (near-election spending). Itemized only. | |
| max_amount | No | Maximum expenditure amount in dollars. Itemized only. | |
| min_amount | No | Minimum expenditure amount in dollars. Itemized only. | |
| payee_name | No | Full-text payee name search. Itemized only. | |
| most_recent | No | Only the most recent version of amended filings. Itemized only. | |
| candidate_id | No | Targeted candidate ID (e.g., P00003392). | |
| committee_id | No | Spending committee ID (e.g., C00703975). | |
| support_oppose | No | Filter by whether the expenditure supports (S) or opposes (O) the candidate. | |
| candidate_party | No | Three-letter party code of the targeted candidate (e.g., DEM, REP). | |
| candidate_office | No | Office of the targeted candidate: H=House, S=Senate, P=President. | |
| candidate_office_state | No | Two-letter state code of the targeted race. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| count | No | Total result count (may be approximate for itemized). |
| results | Yes | Expenditure records (itemized) or per-candidate aggregate rows. |
| pagination | No | Page-based pagination info (by_candidate mode only). |
| next_cursor | No | Pagination cursor for the next page of itemized results. Null when no more pages. |
| search_criteria | No | Echo of the search filters that produced this result set. Populated when results are empty to help diagnose why nothing matched. |
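The mode choice drives which filters apply, which a short sketch can illustrate. The `build_expenditure_args` helper is hypothetical; the argument names, defaults, and example IDs come from the parameter table above.

```python
# Hypothetical helper for openfec_search_expenditures arguments; only the
# parameter names and defaults are taken from the table above.

def build_expenditure_args(mode="itemized", **filters):
    """Arguments for openfec_search_expenditures.

    mode defaults to "itemized", matching the schema default.
    """
    return {"mode": mode, **filters}

# Itemized: individual Schedule E records near an election.
itemized = build_expenditure_args(
    payee_name="advertising",
    is_notice=True,        # only 24/48-hour notice filings
    min_date="2024-10-01",
)

# by_candidate: aggregated outside spending against one candidate.
by_candidate = build_expenditure_args(
    mode="by_candidate",
    candidate_id="P00003392",
    cycle=2024,            # even years only
    support_oppose="O",    # O = oppose
)
```

Itemized mode pages with `cursor`/`next_cursor`; by_candidate mode returns page-based `pagination` metadata.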
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, covering safety and repeatability. The description adds valuable behavioral context by explaining what the two modes return (individual records vs. aggregated totals) and mentioning pagination approaches (keyset vs. page-based). However, it doesn't mention rate limits, authentication requirements, or data freshness constraints that might be relevant for this API.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured in two sentences: the first establishes purpose and domain context, the second provides crucial usage guidance. Every word earns its place, with zero redundancy or filler content. It's front-loaded with the most important information about what the tool does.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the comprehensive annotations (readOnly, idempotent), 100% schema coverage, and existence of an output schema, the description provides exactly what's needed. It explains the tool's purpose, distinguishes it from siblings, and clarifies the critical mode selection decision. The description doesn't need to explain return values or parameter details since those are covered elsewhere.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all 18 parameters thoroughly. The description adds minimal parameter semantics beyond the schema; it only clarifies the purpose of the 'mode' parameter. While this is helpful context, it doesn't significantly enhance understanding of other parameters like date ranges, amounts, or filtering options.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search independent expenditures'), identifies the resource ('Schedule E'), and provides domain context ('outside spending supporting or opposing federal candidates'). It explicitly distinguishes this tool from siblings by focusing on expenditures rather than candidates, committees, contributions, or other FEC data types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use each mode: 'Use itemized mode for individual expenditure records, or by_candidate for aggregated totals per candidate.' This gives clear alternatives within the tool itself, helping the agent choose between detailed records vs. aggregated summaries based on the user's need.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openfec_search_filings (Read-only, Idempotent)
Search FEC filings and reports by committee, candidate, form type, or date range. Covers financial reports (F3/F3P/F3X), statements of candidacy (F2), organizational filings (F1), 24-hour IE notices (F24), and amendments.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (1-indexed). | 1 |
| cycle | No | Two-year election cycle (even year). | |
| per_page | No | Results per page (max 100). | 20 |
| form_type | No | FEC form type. Common: F3 (House/Senate quarterly), F3P (Presidential), F3X (PAC/party), F24 (24-hour IE notice), F1 (statement of organization), F2 (statement of candidacy), F5 (IE by persons). | |
| filer_name | No | Full-text filer name search. | |
| is_amended | No | Filter to original or amended filings only. | |
| most_recent | No | Only the most recent version (filters out superseded amendments). | true |
| report_type | No | Report type code. Common: Q1/Q2/Q3 (quarterly), YE (year-end), M3-M12 (monthly), 12G/12P/30G (pre/post election). | |
| report_year | No | Filing year. | |
| candidate_id | No | Associated candidate ID. | |
| committee_id | No | Filing committee ID. | |
| max_receipt_date | No | Latest FEC receipt date (YYYY-MM-DD). | |
| min_receipt_date | No | Earliest FEC receipt date (YYYY-MM-DD). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| filings | Yes | Filing records with form_type, committee, report_type, financial totals, pdf_url, etc. |
| pagination | Yes | Page-based pagination metadata. |
| search_criteria | No | Echo of the search filters that produced this result set. Populated when results are empty to help diagnose why nothing matched. |
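A short sketch shows how the filing filters compose. The `filings_query` helper is hypothetical; the parameter names, defaults, and form-type codes come from the table above.

```python
# Hypothetical helper for openfec_search_filings arguments; parameter names
# and defaults mirror the table above.

def filings_query(page=1, per_page=20, **filters):
    """Arguments for openfec_search_filings.

    most_recent defaults to true server-side, so superseded
    amendments are already filtered out.
    """
    return {"page": page, "per_page": per_page, **filters}

# First-quarter PAC/party financial reports from one committee in 2024.
q = filings_query(
    committee_id="C00703975",
    form_type="F3X",   # PAC/party financial report
    report_type="Q1",  # first-quarter report
    cycle=2024,
    per_page=100,
)
```

Since this tool uses page-based pagination, a next page simply increments `page` while keeping the same filters.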
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, so the agent knows this is a safe, repeatable read operation. The description adds valuable behavioral context beyond annotations by specifying the scope of coverage (financial reports, statements of candidacy, organizational filings, etc.) and mentioning amendments. However, it doesn't describe pagination behavior or rate limits, which would be helpful additional context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states the core purpose and searchable fields, and the second provides specific examples of form types covered. Every word contributes to understanding the tool's scope without redundancy or fluff, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (13 parameters) and the presence of both rich annotations (readOnlyHint, idempotentHint) and an output schema, the description is reasonably complete. It clearly defines the search scope and form types, which helps the agent understand what data is accessible. However, it could benefit from mentioning pagination behavior or typical use cases to fully guide the agent in a complex search scenario.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all 13 parameters thoroughly documented in the schema itself. The description adds minimal parameter semantics beyond the schema—it mentions searching by committee, candidate, form type, or date range, which aligns with parameters like committee_id, candidate_id, form_type, and date parameters, but doesn't provide additional syntax or format details. This meets the baseline expectation when schema coverage is high.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific verb ('Search') and resource ('FEC filings and reports'), and distinguishes this tool from siblings by specifying the exact scope of what it searches (committee, candidate, form type, date range) and listing specific form types covered. This makes it immediately clear this is a search tool for filings, not for candidates, committees, contributions, or other entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by listing what can be searched (committee, candidate, form type, date range) and providing examples of form types, but it doesn't explicitly state when to use this tool versus alternatives like openfec_search_candidates or openfec_search_committees. There's no guidance on prerequisites or exclusions, leaving the agent to infer appropriate usage from the parameter descriptions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openfec_search_legal (Read-only, Idempotent)
Search FEC legal documents: advisory opinions, enforcement cases (MURs), alternative dispute resolutions, and administrative fines.
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | Document type filter. Omit to search all types. admin_fines is slow without a query or respondent filter. | |
| query | No | Full-text search across legal documents. | |
| from_hit | No | Offset for pagination (0-indexed). | 0 |
| max_date | No | Latest document date (YYYY-MM-DD). | |
| min_date | No | Earliest document date (YYYY-MM-DD). | |
| ao_number | No | Specific advisory opinion number (e.g. "2024-01"). | |
| respondent | No | Respondent name (enforcement cases). | |
| case_number | No | Specific MUR or ADR case number. | |
| hits_returned | No | Results per page (max 200). | 20 |
| max_penalty_amount | No | Maximum penalty amount. | |
| min_penalty_amount | No | Minimum penalty amount (enforcement cases). | |
| statutory_citation | No | U.S.C. citation (e.g. "52 U.S.C. 30106"). | |
| regulatory_citation | No | CFR citation (e.g. "11 CFR 112.4"). |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes | Legal documents with a document_type discriminator (advisory_opinion, mur, adr, admin_fine, statute). |
| total_count | Yes | Total matching documents across all types. |
| search_criteria | No | Echo of the search filters that produced this result set. Populated when results are empty to help diagnose why nothing matched. |
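Offset paging here differs from the cursor paging used elsewhere, which a brief sketch can make explicit. The `legal_query` helper is hypothetical; the parameter names and the example citation come from the table above.

```python
# Hypothetical helper for openfec_search_legal arguments; from_hit is a
# 0-indexed offset that advances by hits_returned per page.

def legal_query(query, from_hit=0, hits_returned=20, **filters):
    """Arguments for openfec_search_legal (offset-based paging)."""
    return {
        "query": query,
        "from_hit": from_hit,
        "hits_returned": hits_returned,
        **filters,
    }

# Full-text search narrowed by statutory citation and date.
page1 = legal_query(
    "coordinated communication",
    statutory_citation="52 U.S.C. 30106",
    min_date="2020-01-01",
)

# The next page reuses the same filters with the offset advanced.
page2 = legal_query(
    "coordinated communication",
    statutory_citation="52 U.S.C. 30106",
    min_date="2020-01-01",
    from_hit=20,
)
```

When filtering administrative fines, pair the `type` filter with a `query` or `respondent` value, since admin_fines searches are slow without one.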
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, so the agent knows this is a safe, repeatable read operation. The description adds valuable context about the specific document types being searched, which goes beyond what annotations provide. However, it doesn't mention performance characteristics (like the schema note about admin_fines being slow) or result format expectations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose. Every word earns its place by specifying the resource and providing concrete examples of document types. There's zero wasted text or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has comprehensive annotations (readOnlyHint, idempotentHint), 100% schema description coverage, and an output schema exists, the description provides adequate context. It clearly states what the tool searches for, though it could benefit from mentioning the search scope or result format. The combination of structured data and description is mostly complete for this search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, all 13 parameters are well-documented in the schema itself. The description doesn't add any parameter-specific information beyond what's already in the schema descriptions. The baseline of 3 is appropriate when the schema does the heavy lifting of parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Search') and resource ('FEC legal documents') with specific examples of document types (advisory opinions, enforcement cases, etc.). It distinguishes itself from sibling tools like openfec_search_candidates or openfec_search_committees by focusing exclusively on legal documents rather than other FEC data types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by listing the document types, but doesn't explicitly state when to use this tool versus alternatives. There's no guidance on prerequisites, performance considerations, or comparisons with other search tools. The context is clear but lacks explicit when/when-not instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.