congressgov-mcp-server

Server Details

Access U.S. congressional data - bills, votes, members, committees - via MCP.

Status: Healthy
Transport: Streamable HTTP
Repository: cyanheads/congressgov-mcp-server
GitHub Stars: 0



Available Tools

10 tools
congressgov_bill_lookup (Congressgov Bill Lookup)
Read-only · Idempotent

Browse and retrieve U.S. legislative bill data from Congress.gov.

IMPORTANT: This API has no keyword search. To find bills, filter by congress number, bill type, and/or date range. Use 'congressgov_bill_summaries' to discover recently summarized legislation, or 'congressgov_member_lookup' to find bills via their sponsor.

Operations:

  • list: Browse bills. Requires 'congress'. Add 'billType' to narrow by chamber/type.

  • get: Full bill detail including sponsor, policy area, CBO estimates, and law info.

  • actions/amendments/cosponsors/committees/subjects/summaries/text/titles/related: Sub-resources for a specific bill. Require congress + billType + billNumber.

For enacted laws, use 'congressgov_enacted_laws' instead.
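
The conditional requirements above can be sketched as a small pre-flight check. This is an illustrative helper, not part of the server: the operation names and fields mirror the tool's schema, but `validate_bill_params` itself is hypothetical.

```python
# Hypothetical sketch of the parameter rules described above. The
# operation and field names come from the tool's schema; the helper
# is illustrative only.
SUB_RESOURCES = {
    "actions", "amendments", "cosponsors", "committees",
    "subjects", "summaries", "text", "titles", "related",
}

def validate_bill_params(params: dict) -> None:
    """Check the conditional requirements before issuing a call."""
    if "congress" not in params:
        raise ValueError("'congress' is always required")
    op = params["operation"]
    if op == "get" or op in SUB_RESOURCES:
        # Sub-resources and 'get' need the full bill identity.
        missing = {"billType", "billNumber"} - params.keys()
        if missing:
            raise ValueError(f"{op!r} also requires {sorted(missing)}")

# 'list' needs only congress; 'get' needs the full bill identity.
validate_bill_params({"operation": "list", "congress": 118})
validate_bill_params(
    {"operation": "get", "congress": 118, "billType": "hr", "billNumber": 3076}
)
```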

Parameters (JSON Schema)

  • limit (optional): Results per page (1-250).

  • offset (optional): Pagination offset.

  • billType (optional): Bill type code. Required for get and sub-resource operations.

  • congress (required): Congress number (e.g., 118, 119).

  • operation (required): Which data to retrieve.

  • billNumber (optional): Bill number. Required for get and sub-resource operations.

  • toDateTime (optional): End of date range filter (ISO 8601).

  • fromDateTime (optional): Start of date range filter (ISO 8601). Filters by latest action date.

Output Schema

No output parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and idempotentHint=true. The description adds critical behavioral context not in annotations: the API 'has no keyword search' (a major functional limitation) and explains the sub-resource operational structure (actions/amendments/etc.). Does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections: purpose statement, critical limitation (front-loaded), alternative tools, operations breakdown, and sibling replacement guidance. Every sentence provides actionable information without redundancy. Efficient use of space for a complex multi-operation tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 100% schema coverage, the presence of an output schema, and helpful annotations, the description is complete. It covers the complex operational modes (list/get/sub-resources), explains the discovery pattern (filtering vs. searching), and directs to appropriate siblings. No gaps remain for agent invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds valuable semantic context by explaining parameter relationships: 'Requires congress', 'Add billType to narrow', and sub-resources 'Require congress + billType + billNumber', clarifying conditional requirements beyond the raw schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'Browse[s] and retrieve[s] U.S. legislative bill data from Congress.gov' with specific verbs and resource identification. It clearly distinguishes itself from siblings by explicitly naming 'congressgov_bill_summaries', 'congressgov_member_lookup', and 'congressgov_enacted_laws' as alternatives for different use cases.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use guidance with 'IMPORTANT: This API has no keyword search' and instructs to 'filter by congress number, bill type, and/or date range.' Explicitly states alternatives: 'Use congressgov_bill_summaries to discover recently summarized legislation' and 'For enacted laws, use congressgov_enacted_laws instead.'

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

congressgov_bill_summaries (Congressgov Bill Summaries)
Read-only · Idempotent

Browse recent CRS (Congressional Research Service) bill summaries.

This is the best tool for answering "what's happening in Congress?" — CRS analysts write plain-language summaries of bills at each legislative stage.

By default, returns summaries from the last 7 days. Specify fromDateTime/toDateTime for custom ranges. Each summary includes the associated bill reference (congress, type, number) for follow-up with congressgov_bill_lookup.
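
The default 7-day window described above can be computed client-side when building custom ranges. The helper below is an illustration; the exact ISO 8601 shape the API expects is an assumption.

```python
# Illustrative computation of the default lookback window: if neither
# date parameter is set, the server returns the last 7 days. The
# trailing-"Z" timestamp format is an assumption for illustration.
from datetime import datetime, timedelta, timezone

def default_summary_window(now=None):
    now = now or datetime.now(timezone.utc)
    start = now - timedelta(days=7)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return start.strftime(fmt), now.strftime(fmt)

from_dt, to_dt = default_summary_window(
    datetime(2024, 6, 8, 12, 0, tzinfo=timezone.utc)
)
# from_dt == "2024-06-01T12:00:00Z", to_dt == "2024-06-08T12:00:00Z"
```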

Parameters (JSON Schema)

  • limit (optional): Results per page (1-250).

  • offset (optional): Pagination offset.

  • billType (optional): Bill type filter. Requires 'congress'.

  • congress (optional): Congress number; omit for summaries across all congresses.

  • toDateTime (optional): End of date range (ISO 8601). Defaults to now.

  • fromDateTime (optional): Start of date range (ISO 8601). Defaults to 7 days ago if neither date param is set.

Output Schema

No output parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations establish read-only/idempotent safety, the description adds crucial behavioral context: the 7-day default temporal window, the ability to customize ranges via date parameters, and the structural guarantee that returned summaries include bill references for chaining queries.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three precisely structured sentences with zero waste: (1) core capability, (2) value proposition and use case, (3) default behavior and technical constraints. Information is front-loaded and every clause earns its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema and 100% input schema coverage, the description provides sufficient narrative context (the CRS authorship explanation, the 'recent' focus, and the lookup chaining guidance) without redundant parameter enumeration.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds value by aggregating the date parameter logic (explaining that the 7-day default applies to the result set) and emphasizing the follow-up utility of returned bill references, though it could briefly acknowledge the billType/congress filtering relationship mentioned in the schema.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Browse') and resource ('CRS bill summaries'), explicitly distinguishes this from generic bill lookup by highlighting CRS analyst authorship, and clarifies the scope ('recent') to differentiate from sibling tools.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly identifies this as 'the best tool for answering "what's happening in Congress?"' and references the sibling tool 'congressgov_bill_lookup' for follow-up, providing clear navigation between tools. Lacks explicit exclusion criteria (when not to use), preventing a perfect score.

congressgov_committee_lookup (Congressgov Committee Lookup)
Read-only · Idempotent

Browse congressional committees and their legislation, reports, and nominations.

Committee codes follow the pattern: chamber prefix (h/s/j) + abbreviation + number. Use 'list' to discover codes, then drill into bills, reports, or nominations.

The 'nominations' operation is available for Senate committees only. The committeeCode also works with the congress://committee/{committeeCode} resource.
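
The code pattern stated above (chamber prefix + abbreviation + number) can be split mechanically. The grouping below follows that stated pattern only; it is an illustration, not an official grammar for Congress.gov system codes.

```python
import re

# Sketch of the committee-code pattern described above: a chamber
# prefix (h/s/j), a lowercase abbreviation, and a two-digit number.
CODE_RE = re.compile(r"^(?P<chamber>[hsj])(?P<abbrev>[a-z]+?)(?P<number>\d{2})$")

def parse_committee_code(code: str) -> dict:
    m = CODE_RE.match(code.lower())
    if not m:
        raise ValueError(f"unrecognized committee code: {code!r}")
    return m.groupdict()

parts = parse_committee_code("hsju00")
# parts == {"chamber": "h", "abbrev": "sju", "number": "00"}
```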

Parameters (JSON Schema)

  • limit (optional): Results per page (1-250).

  • offset (optional): Pagination offset.

  • chamber (optional): Chamber filter. Required for 'get' and sub-resources.

  • congress (optional): Congress number.

  • operation (required): Which data to retrieve.

  • committeeCode (optional): Committee system code (e.g., 'hsju00'). Required for get and sub-resources.

Output Schema

No output parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish the tool is read-only, idempotent, and open-world. The description adds valuable behavioral context not in annotations: the committeeCode syntax pattern (chamber prefix + abbreviation + number), the specific constraint that nominations are Senate-only, and a cross-reference to the congress://committee/{committeeCode} resource URI pattern.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured across three sentences: purpose statement, workflow/code-pattern instructions, and operational constraints/cross-references. Every sentence delivers unique value with no redundancy or filler.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema (relieving the description from detailing return values), full schema coverage, and annotations, the description provides adequate contextual completeness. It covers discovery workflows, parameter interdependencies, and resource linking appropriate for a 6-parameter browsing tool.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description elevates this by explaining the committeeCode construction pattern (h/s/j + abbreviation + number) beyond the schema's simple example, and clarifies operational relationships between parameters (e.g., using 'list' to discover codes needed for other operations).

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool browses committees and their associated legislation, reports, and nominations, using specific verbs and resources. It positions the tool as committee-centric, which implicitly distinguishes it from sibling tools like congressgov_committee_reports or congressgov_senate_nominations that focus on specific resource types rather than the committee container itself.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit workflow guidance ('Use 'list' to discover codes, then drill into...') and critical operational constraints ('The 'nominations' operation is available for Senate committees only'). While it doesn't explicitly compare against sibling alternatives, the workflow pattern effectively guides selection of the correct operation sequence.

congressgov_committee_reports (Congressgov Committee Reports)
Read-only · Idempotent

Browse and retrieve committee reports from Congress.gov.

Committee reports accompany legislation reported out of committee. They explain the bill's purpose, committee amendments, dissenting views, and the committee vote.

Report types:

  • hrpt: House reports

  • srpt: Senate reports

  • erpt: Executive reports
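
The three type codes above are a small enum; a client might map them to readable labels when presenting results. The helper and its label strings below are hypothetical, shown only to make the code-to-meaning mapping concrete.

```python
# Lookup for the report-type codes listed above. The describe_report
# helper and its "congress-number" label format are illustrative
# assumptions, not the server's output format.
REPORT_TYPES = {
    "hrpt": "House report",
    "srpt": "Senate report",
    "erpt": "Executive report",
}

def describe_report(report_type: str, congress: int, number: int) -> str:
    label = REPORT_TYPES[report_type]
    return f"{label} {congress}-{number}"

print(describe_report("hrpt", 118, 12))   # House report 118-12
```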

Parameters (JSON Schema)

  • limit (optional): Results per page (1-250).

  • offset (optional): Pagination offset.

  • congress (required): Congress number.

  • operation (required): Which data to retrieve.

  • reportType (optional): Report type. Required for get and text operations.

  • reportNumber (optional): Committee report number. Required for get and text operations.

Output Schema

No output parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly, openWorld, and idempotent hints. The description adds valuable domain context explaining that reports 'accompany legislation reported out of committee' and detailing their contents, which helps the agent understand the data's nature and its relationship to the legislative process. No contradictions with annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with three distinct sections: purpose statement, domain context (what reports contain), and parameter reference (report types). Every sentence provides value with no redundancy or filler content.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema and complete parameter documentation, the description appropriately focuses on domain explanation rather than return values. It adequately covers the tool's scope, though it could optionally mention the pagination behavior implied by the limit/offset parameters.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema has 100% coverage, the description adds semantic value by mapping enum values to human-readable meanings: 'hrpt: House reports, srpt: Senate reports, erpt: Executive reports'. It also frames the operations as 'Browse and retrieve', adding intuitive meaning to the 'list', 'get', and 'text' operations beyond the schema's technical descriptions.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Browse[s] and retrieve[s] committee reports from Congress.gov' with a specific verb and resource. It distinguishes itself from siblings like congressgov_crs_reports and congressgov_committee_lookup by explicitly focusing on 'committee reports' (documents accompanying legislation) rather than committees themselves or research reports.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains what committee reports contain (purpose, amendments, dissenting views, votes), which provides implied usage guidance for when users need legislative documentation. However, it lacks explicit when-to-use guidance, prerequisites, or named alternatives from the sibling tool set.

congressgov_crs_reports (Congressgov CRS Reports)
Read-only · Idempotent

Browse and retrieve CRS (Congressional Research Service) reports — nonpartisan policy analyses written by subject-matter experts at the Library of Congress.

CRS reports cover policy areas, legislative proposals, and legal questions. Report IDs use letter-number codes (e.g., R40097, RL33612, IF12345).

Use 'list' to browse available reports, 'get' for full detail including authors, topics, summary, and available download formats.
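
The letter-number ID shapes shown above (R40097, RL33612, IF12345) can be sanity-checked before a 'get' call. The regex below matches only those example shapes; real CRS ID prefixes may be broader, so treat this as an illustrative check, not an authoritative validator.

```python
import re

# Matches 1-2 uppercase letters followed by 4-5 digits, covering the
# example IDs above. This pattern is an assumption; CRS may issue
# IDs outside this shape.
CRS_ID_RE = re.compile(r"^[A-Z]{1,2}\d{4,5}$")

def looks_like_crs_id(report_number: str) -> bool:
    return bool(CRS_ID_RE.match(report_number))

assert all(map(looks_like_crs_id, ["R40097", "RL33612", "IF12345"]))
assert not looks_like_crs_id("hr-3076")   # bill-style IDs don't qualify
```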

Parameters (JSON Schema)

  • limit (optional): Results per page (1-250).

  • offset (optional): Pagination offset.

  • operation (required): Which data to retrieve.

  • reportNumber (optional): CRS report ID (e.g., 'R40097'). Required for 'get'.

Output Schema

No output parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Complements annotations (readOnly, idempotent) by disclosing report ID format patterns ('letter-number codes' with multiple examples: R40097, RL33612, IF12345) and previewing what 'get' returns (authors, topics, summary, download formats). This adds valuable context about data structure that annotations don't cover.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences with zero waste. Front-loaded with purpose, followed by domain context and ID format, ending with operation guidance. Every sentence contributes distinct information (definition, identification pattern, usage modes).

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema, the description appropriately focuses on input semantics and operation selection rather than return values. It adequately covers the 4-parameter complexity and dual-operation modes, and provides sufficient domain context for an agent to select this tool correctly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds value by expanding on the reportNumber format with additional ID examples (RL33612, IF12345) beyond the schema's single example, and by clarifying the semantic difference between the 'list' (browse) and 'get' (full detail) operations.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with specific verbs ('Browse and retrieve') and clearly identifies the resource (CRS reports). It distinguishes itself from siblings by defining CRS reports as 'nonpartisan policy analyses' from the Library of Congress, clearly differentiating them from the bills, committees, and votes handled by other tools.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit guidance on when to use 'list' versus 'get' operations ('Use 'list' to browse... 'get' for full detail'). While it doesn't explicitly contrast with sibling tools, the domain (CRS reports) is distinct enough from bills/committees that the context is clear.

congressgov_daily_record (Congressgov Daily Record)
Read-only · Idempotent

Browse the daily Congressional Record — floor speeches, debates, and legislative text published each day Congress is in session.

Navigation is hierarchical: list → volumes, issues → individual articles. Use 'list' to find recent volumes, 'issues' to see what's in a volume, and 'articles' to access individual speeches and debate sections.
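
The drill-down above implies a required-parameter table per operation. The sketch below mirrors the parameter notes (volumeNumber for 'issues', both volumeNumber and issueNumber for 'articles'); the `record_params` helper is illustrative, not part of the server.

```python
# Required fields per operation, inferred from the description above.
REQUIRED = {
    "list": set(),
    "issues": {"volumeNumber"},
    "articles": {"volumeNumber", "issueNumber"},
}

def record_params(operation: str, **kwargs) -> dict:
    missing = REQUIRED[operation] - kwargs.keys()
    if missing:
        raise ValueError(f"{operation!r} requires {sorted(missing)}")
    return {"operation": operation, **kwargs}

# list -> pick a volume -> pick an issue -> fetch its articles.
# Volume/issue numbers here are placeholders, not real data.
steps = [
    record_params("list"),
    record_params("issues", volumeNumber=170),
    record_params("articles", volumeNumber=170, issueNumber=52),
]
```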

Parameters (JSON Schema)

  • limit (optional): Results per page (1-250).

  • offset (optional): Pagination offset.

  • operation (required): Which data to retrieve.

  • issueNumber (optional): Issue number within a volume. Required for 'articles'.

  • volumeNumber (optional): Volume number. Required for 'issues' and 'articles'.

Output Schema

No output parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds valuable behavioral context beyond annotations: explains the hierarchical navigation model essential for correct usage, and notes the temporal scope ('each day Congress is in session'). No contradictions with the readOnlyHint/idempotentHint annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences with zero waste. The first sentence establishes purpose and content scope; the second explains the hierarchical navigation pattern. Information is front-loaded and dense.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the output schema exists and parameter schemas are complete, the description adequately covers the conceptual model (hierarchical navigation) and content types. It could mention pagination behavior, but is sufficiently complete for an agent to invoke the tool correctly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While schema coverage is 100%, the description adds crucial semantic context by explaining what each operation value actually retrieves ('list' finds recent volumes, 'articles' accesses speeches), enhancing the schema's generic 'Which data to retrieve' description.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool browses the 'daily Congressional Record' and specifies content types (floor speeches, debates, legislative text), clearly distinguishing it from sibling tools focused on bills, committees, or members.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear hierarchical workflow guidance ('list → volumes, issues → individual articles') and maps each operation to its specific use case. Lacks explicit contrast with siblings (e.g., when to use this vs. bill_lookup for legislative text), but the content scope implicitly guides selection.

congressgov_enacted_laws (Congressgov Enacted Laws)
Read-only · Idempotent

Browse enacted public and private laws from Congress.gov.

Use 'list' to browse laws by congress. Each law references its origin bill — use 'congressgov_bill_lookup' with that reference for the full legislative history.

Law types:

  • pub: Public laws (general application, most common)

  • priv: Private laws (specific individuals or entities)
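
The two law-type codes above can be mapped to the conventional citation prefixes. The `law_citation` helper and the exact prefix strings ("Pub. L.", "Priv. L.") are assumptions for illustration, not output of this tool.

```python
# Hypothetical mapping from the law-type codes above to conventional
# citation prefixes; the prefix strings are an assumption.
LAW_TYPES = {"pub": "Pub. L.", "priv": "Priv. L."}

def law_citation(law_type: str, congress: int, law_number: int) -> str:
    return f"{LAW_TYPES[law_type]} {congress}-{law_number}"

print(law_citation("pub", 117, 169))   # Pub. L. 117-169
```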

Parameters (JSON Schema)

  • limit (optional): Results per page (1-250).

  • offset (optional): Pagination offset.

  • lawType (optional): Law type. Required for 'get'.

  • congress (required): Congress number.

  • lawNumber (optional): Law number. Required for 'get'.

  • operation (required): Which data to retrieve.

Output Schema

No output parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and openWorldHint. The description adds valuable behavioral context beyond these: it discloses that laws reference origin bills (a data relationship), and explains the semantic distinction between public laws ('general application, most common') and private laws ('specific individuals'), which helps the agent interpret results correctly.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with zero waste: the first sentence establishes purpose, the second provides usage guidance and sibling differentiation, and the bullet points clarify domain terminology. Every sentence earns its place and the information is front-loaded.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema and comprehensive annotations, the description provides sufficient contextual completeness. It covers the domain-specific law types and data relationships (origin bill references) that would help an agent interpret outputs, without needing to duplicate the output schema structure.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description elevates this by adding domain-specific semantics for the lawType parameter (explaining 'pub' vs. 'priv' in detail, with usage frequency and scope) and clarifying the intent of the 'list' operation ('browse laws by congress'), adding meaning beyond the raw schema definitions.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with the specific verb 'Browse' and clearly identifies the resource as 'enacted public and private laws.' It effectively distinguishes itself from sibling tools by stating that 'congressgov_bill_lookup' should be used for 'full legislative history,' implying this tool is for high-level browsing rather than deep bill analysis.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly directs users to 'congressgov_bill_lookup' for full legislative history when they have a reference from this tool, providing clear alternative routing. It explains the 'list' operation for browsing by congress, though it could explicitly contrast when to use 'get' versus 'list' beyond what the schema implies.

congressgov_member_lookup (Congressgov Member Lookup)
Read-only · Idempotent

Discover congressional members and their legislative activity.

The API does not support name search. To find a member:

  • By location: use 'list' with stateCode (and optionally district)

  • By congress: use 'list' with congress number

  • By current status: use 'list' with currentMember=true

Once you have a bioguideId, use 'get' for full profile or 'sponsored'/'cosponsored' for legislative portfolio. The bioguideId also works with the congress://member/{bioguideId} resource.
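
The two-step workflow above (discover via 'list', then drill in by bioguideId) can be sketched with a small parameter builder. The validation rules mirror the parameter notes (district requires stateCode; get/sponsored/cosponsored require bioguideId); the `member_params` helper itself is hypothetical.

```python
# Illustrative parameter builder for the discovery-then-detail
# workflow described above. Field names mirror the tool's schema.
def member_params(operation: str, **kwargs) -> dict:
    if "district" in kwargs and "stateCode" not in kwargs:
        raise ValueError("'district' requires 'stateCode'")
    if operation in {"get", "sponsored", "cosponsored"} and "bioguideId" not in kwargs:
        raise ValueError(f"{operation!r} requires 'bioguideId'")
    return {"operation": operation, **kwargs}

# Step 1: find current members for a state/district (no name search).
listing = member_params("list", stateCode="CA", district=12, currentMember=True)
# Step 2: with a bioguideId from the results, fetch the portfolio.
detail = member_params("sponsored", bioguideId="P000197")
```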

Parameters (JSON Schema)

  • limit (optional): Results per page (1-250).

  • offset (optional): Pagination offset.

  • congress (optional): Congress number to filter by.

  • district (optional): Congressional district number. Requires stateCode. Use 0 for at-large.

  • operation (required): Which data to retrieve.

  • stateCode (optional): Two-letter state code (e.g., 'CA', 'TX').

  • bioguideId (optional): Unique member identifier (e.g., 'P000197'). Required for get/sponsored/cosponsored.

  • currentMember (optional): Filter to currently serving members. Defaults to false.

Output Schema

No output parameters

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish read-only, idempotent, open-world properties. Description adds valuable behavioral constraints (name search limitation) and workflow patterns (discovery-then-detail retrieval sequence) without contradicting annotations. Mentions resource URI pattern for integration context. Does not mention rate limits or pagination behavior, but covers primary behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Exceptionally structured with bullet points for scanability. No wasted words; every sentence provides critical workflow guidance, API constraints, or navigation logic. Appropriately front-loaded with the 'no name search' limitation to prevent immediate misuse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a multi-operation tool. Output schema exists (obviating return value description). Description covers all four operation modes, parameter relationships (stateCode+district), the bioguideId workflow, and resource URI patterns. Sufficient for an agent to navigate the discovery process end-to-end.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage, establishing baseline 3. Description adds workflow narrative showing how parameters interact (e.g., using bioguideId with specific operations), but individual parameter semantics (types, formats, requirements) are comprehensively documented in the schema itself without needing elaboration.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb 'Discover' with clear resource 'congressional members and their legislative activity'. Explicitly distinguishes from sibling tools (which focus on bills, committees, votes, etc.) by focusing on member discovery and legislative portfolios. The multi-operation workflow (list → get/sponsored/cosponsored) is clearly articulated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides exceptional guidance including critical negative constraint 'The API does not support name search'. Offers three specific discovery recipes (by location with stateCode/district, by congress number, by currentMember status). Explicitly maps operations to use cases: 'list' for discovery, 'get' for profiles, 'sponsored'/'cosponsored' for legislative activity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

congressgov_roll_votes (Congressgov Roll Votes)
Read-only, Idempotent

Retrieve House roll call vote data and individual member voting positions.

NOTE: Covers House votes only — Senate vote data is not yet in the Congress.gov API.

Use 'list' to find votes by congress and session, 'get' for vote details (question, result, associated bill), and 'members' for how each representative voted.

Parameters (JSON Schema)

  limit (optional): Results per page (1-250).
  offset (optional): Pagination offset.
  session (required): Session number (1 or 2). Odd years are session 1, even years session 2.
  congress (required): Congress number.
  operation (required): Which data to retrieve.
  voteNumber (optional): Roll call vote number. Required for 'get' and 'members'.

Output Schema

  No output parameters

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already establish that the tool is read-only, idempotent, and open-world, so the description does not need to cover safety properties. It adds valuable behavioral context by disclosing the House-only coverage limitation, which is not evident from the structured fields. It also clarifies the specific return content for each operation (e.g., 'get' returns question, result, associated bill), adding practical behavioral detail.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description comprises three highly efficient sentences with zero filler content. The core purpose appears immediately in the first sentence, followed by critical scope limitations, and concluding with operation-specific usage guidance. Every sentence earns its place by providing distinct, non-redundant information essential for tool invocation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the existence of an output schema and comprehensive input schema coverage, the description appropriately focuses on operational semantics and scope limitations rather than repeating structural details. It adequately covers the three operation modes and the House-only constraint. It could optionally mention pagination patterns or rate limiting behavior, but this is not strictly necessary given the schema richness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the structured fields already document parameter syntax. The description adds significant semantic value by explaining the functional purpose of each 'operation' enum value (list vs. get vs. members), which helps agents understand the workflow implications of their selection. It effectively compensates for the schema's lack of behavioral context regarding when to use each operation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with the specific action 'Retrieve' and clearly identifies the resource as 'House roll call vote data and individual member voting positions.' It effectively distinguishes itself from siblings by explicitly stating 'Covers House votes only,' clarifying that Senate data is unavailable and differentiating from general member lookup tools by specifying voting positions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit mapping of each 'operation' enum value to its use case: 'list' for finding votes, 'get' for vote details, and 'members' for individual representative votes. It includes the critical limitation that Senate data is not available, guiding correct usage. While it effectively documents internal operation selection, it does not explicitly contrast when to use this versus sibling tools like congressgov_member_lookup.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

congressgov_senate_nominations (Congressgov Senate Nominations)
Read-only, Idempotent

Browse presidential nominations to federal positions and track the Senate confirmation process.

Nominations use 'PN' (Presidential Nomination) numbering. A single nomination may contain multiple nominees — use 'nominees' to see individual appointees.

Partitioned nominations (e.g., PN230-1, PN230-2) occur when nominees within one nomination follow different confirmation paths.

Parameters (JSON Schema)

  limit (optional): Results per page (1-250).
  offset (optional): Pagination offset.
  ordinal (optional): Position ordinal within a nomination (for multi-nominee nominations).
  congress (required): Congress number.
  operation (required): Which data to retrieve.
  nominationNumber (optional): Nomination number (e.g., '1064'). Required for detail operations.

Output Schema

  No output parameters

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly/idempotent safety; description adds valuable domain context without contradiction. It discloses the 'PN' numbering system, multi-nominee structure, and partitioned nomination behavior—critical context for interpreting parameters that annotations don't cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three information-dense sentences with zero waste. Front-loaded with purpose ('Browse...track'), followed by critical domain concepts (PN numbering, multi-nominee structure), and ending with partitioned nomination edge cases. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the multi-operation complexity (6 operations), 100% schema coverage, and presence of output schema, the description efficiently covers the essential domain model (PN system, partitioning) needed to interpret parameters correctly. No need to describe return values since output schema exists.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% providing baseline documentation, but description adds essential domain semantics: it clarifies nominationNumber format ('PN' prefix, examples like PN230-1), explains the 'nominees' operation purpose, and contextualizes the 'ordinal' parameter via partitioned nomination explanation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description opens with specific verbs ('Browse', 'track') and clear resources ('presidential nominations', 'Senate confirmation process'). It distinguishes from siblings (bill_lookup, member_lookup, etc.) by focusing exclusively on nominations domain. The PN numbering explanation further sharpens the scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context for specific operations by stating 'use 'nominees' to see individual appointees' and explains partitioned nominations (PN230-1, PN230-2) which guides parameter selection. Lacks explicit 'when not to use' or sibling comparisons, though the domain separation is obvious.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
