
US Government Open Data MCP

congress_senate_votes

Read-only

Retrieve Senate roll call vote results to analyze how senators voted on legislation, nominations, and procedural motions from 1989 to present.

Instructions

Get Senate roll call vote results from senate.gov XML. Shows how senators voted by party on specific legislation, nominations, and procedural motions. Coverage: 101st Congress (1989) to present. Cross-reference with: congress_house_votes (same bill's House vote), FEC (senator donors via fec_candidate_financials), lobbying_search (who lobbied on the bill), congress_member_bills (senator's voting vs sponsoring patterns). For House votes, use congress_house_votes.
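A hypothetical invocation sketch, using only the parameters documented in the input schema below. The client call shown in the comment assumes a generic MCP client session; the argument names are from the schema, but the surrounding client code is illustrative, not from this server's documentation.

```python
# Hypothetical arguments for congress_senate_votes, built from the
# documented input schema (congress, session, vote_number, limit).
arguments = {
    "congress": 118,    # 118th Congress (2023-2024); coverage starts at the 101st
    "session": 1,       # first session (odd year)
    "vote_number": 20,  # a specific roll call; omit this key to list recent votes
}

# An MCP client would pass these via a tools/call request, e.g.:
# result = await session.call_tool("congress_senate_votes", arguments)
```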

Input Schema

Name         Required  Description
congress     No        Congress number (default: current). Coverage: 101st (1989) to present
session      No        Session (1 or 2). Default: current session (1 for odd years, 2 for even)
vote_number  No        Specific roll call vote number. Omit to list recent votes.
limit        No        Max results when listing votes (default: 20)
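The defaults above can be derived from the calendar year; a minimal sketch, where the helper names are illustrative and not part of the tool:

```python
def default_session(year: int) -> int:
    """Session default per the schema: 1 in odd-numbered years, 2 in even."""
    return 1 if year % 2 == 1 else 2

def congress_number(year: int) -> int:
    """Standard congressional numbering: the 1st Congress convened in 1789,
    and each Congress spans two years."""
    return (year - 1789) // 2 + 1

print(default_session(2023))  # 1
print(congress_number(1989))  # 101 -- the earliest Congress this tool covers
print(congress_number(2024))  # 118
```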
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=true, and the description aligns by describing a data retrieval operation ('Get... results'). It adds valuable context beyond annotations, such as the data source ('senate.gov XML'), coverage timeframe, and cross-referencing suggestions, which aids in understanding the tool's behavior and integration with other tools. No contradictions with annotations are present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and concise, with no wasted sentences. It front-loads the core purpose, follows with usage details and cross-references, and ends with a clear alternative. Each sentence adds value, such as specifying data scope, coverage, and related tools, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (retrieving historical vote data) and the absence of an output schema, the description provides comprehensive context. It covers purpose, usage guidelines, data source, temporal coverage, and cross-references, which compensates for the lack of output details. With annotations indicating read-only behavior, this description is complete enough for effective tool selection and use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, providing clear details for all parameters (congress, session, vote_number, limit). The description adds little semantic information beyond the schema, noting coverage for 'congress' only in passing. With full schema coverage, the baseline score of 3 is appropriate: the description doesn't compensate, but it doesn't need to.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get Senate roll call vote results') and resources ('from senate.gov XML'), detailing what data it provides ('Shows how senators voted by party on specific legislation, nominations, and procedural motions'). It distinguishes itself from siblings by explicitly naming a related tool (congress_house_votes) and contrasting its scope (Senate vs. House).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: it states when to use this tool (Senate votes) versus the alternative ('For House votes, use congress_house_votes'), and lists cross-references for complementary analysis (e.g., fec_candidate_financials, lobbying_search). It also specifies coverage limits ('Coverage: 101st Congress (1989) to present'), helping an agent decide when the tool is applicable.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
