US Government Open Data MCP

lobbying_search

Search U.S. lobbying disclosure filings to identify who lobbies Congress, on which policy issues, and how much they spend.

Instructions

Search lobbying disclosure filings — find out who is lobbying Congress, on what issues, and how much they're spending.

Search by:

  • registrant_name: lobbying firm or self-filing org ('Pfizer', 'Amazon', 'National Rifle Association')

  • client_name: who hired the lobbyist ('Google', 'ExxonMobil')

  • issue_code: policy area ('TAX', 'HCR' health, 'DEF' defense, 'ENV' environment, 'ENG' energy, 'IMM' immigration)

  • filing_year: year of filing (2020-2026)

Returns expenses/income amounts, issues lobbied, and registrant/client info.
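
For example, an agent asking who lobbied for Pfizer on health issues in 2024 might call the tool with arguments like these (illustrative values only; all filters are optional, and exact field types follow the schema below):

    {
      "client_name": "Pfizer",
      "issue_code": "HCR",
      "filing_year": 2024,
      "page_size": 20
    }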

Input Schema

Name | Required | Description
registrant_name | No | Lobbying firm or organization: 'Pfizer', 'Amazon', 'US Chamber of Commerce'
client_name | No | Client who hired the lobbyist: 'Google', 'Meta', 'Boeing'
issue_code | No | Issue area code: 'HCR' (Health Issues), 'MMM' (Medicare/Medicaid), 'TAX' (Taxation/Internal Revenue Code), 'BUD' (Budget/Appropriations), 'DEF' (Defense), 'ENV' (Environment/Superfund), 'ENG' (Energy/Nuclear), 'TRD' (Trade (Domestic/Foreign)), ... (20 total)
filing_year | No | Year: 2020-2026
filing_type | No | Filing type: 'Q1' (1st Quarter Report), 'Q2' (2nd Quarter Report), 'Q3' (3rd Quarter Report), 'Q4' (4th Quarter Report), 'MM' (Mid-Year Report), 'MY' (Year-End Report), 'RN' (Registration (New)), 'RA' (Registration Amendment), 'RR' (Registration Renewal), 'TE' (Termination)
page_size | No | Results per page (default 20)
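
The 'JSON Schema' view from the original page is not reproduced above. As a rough sketch only, assuming string types for the name and code fields and integer types for filing_year and page_size (the server's published schema is authoritative), the input schema plausibly resembles:

    {
      "type": "object",
      "properties": {
        "registrant_name": { "type": "string" },
        "client_name": { "type": "string" },
        "issue_code": { "type": "string" },
        "filing_year": { "type": "integer", "minimum": 2020, "maximum": 2026 },
        "filing_type": { "type": "string" },
        "page_size": { "type": "integer", "default": 20 }
      },
      "required": []
    }

No field is required, which matches the 'No' entries in the Required column.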
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions that the tool 'returns expenses/income amounts, issues lobbied, and registrant/client info,' which gives some output context, but it lacks details on pagination (implied by the 'page_size' parameter), rate limits, authentication needs, error handling, and data freshness. For a search tool with 6 parameters and no annotations, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose. It uses bullet points for parameter examples efficiently, though the second paragraph could be more concise. Every sentence adds value, but slight redundancy exists between the description and schema examples (e.g., 'Pfizer' appears in both).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, no output schema, no annotations), the description is moderately complete. It covers the purpose and some output details but lacks behavioral context such as pagination, error handling, and data scope limitations. Parameter examples partially compensate, but they do not close the gaps for a search tool without structured annotations or an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value: it lists four parameters (registrant_name, client_name, issue_code, filing_year) with examples but does not cover 'filing_type' or 'page_size', nor does it explain parameter interactions or search logic beyond what the schema provides (a hypothetical combined-filter call is sketched below). A baseline score of 3 is appropriate given the high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
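
To make the gap concrete: nothing in the description says whether multiple filters combine with AND or OR. A call such as the following (hypothetical values) presumably returns only filings matching all three filters, but an agent cannot confirm that from the description alone:

    {
      "client_name": "Boeing",
      "issue_code": "DEF",
      "filing_year": 2023
    }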

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('search') and resource ('lobbying disclosure filings'), and distinguishes itself from siblings by focusing on lobbying data rather than other government datasets such as the BEA, BLS, or census tools. It explicitly answers 'who is lobbying Congress, on what issues, and how much they're spending.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention sibling tools like 'lobbying_detail', 'lobbying_contributions', or 'lobbying_registrants', nor does it specify prerequisites, exclusions, or comparative use cases. Usage is implied through parameter examples but not explicitly stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
