Congress
Server Details
Congress MCP — US Congress data via GovTrack API (free, no auth required)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-congress
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 9 of 9 tools scored. Lowest: 3.2/5.
Most tools have distinct purposes (e.g., get_bill, search_bills, get_members, get_votes), but ask_pipeworx and discover_tools overlap in function — both help find information or tools, causing potential confusion.
Tool names follow a mostly consistent verb_noun pattern (get_bill, get_members, get_votes, search_bills, remember, recall, forget). However, ask_pipeworx and discover_tools break the pattern by not including an action on a domain object.
With 9 tools, the count is well-scoped for a Congress-focused server. Each tool serves a clear purpose, and there are no redundant or superfluous tools.
The server covers bill searching, details, member listing, and vote retrieval — core congressional info. However, it lacks operations like updating bills or accessing committee details, and the memory tools seem tangential to the domain.
Available Tools
9 tools

ask_pipeworx (Grade: A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
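As a usage illustration, here is a minimal sketch of calling this tool over MCP, assuming the official TypeScript SDK (`@modelcontextprotocol/sdk`) client API; the server URL is a placeholder, since the listing above does not show one. The later tool sketches assume a `Client` connected the same way.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint; the actual URL is not shown in this listing.
const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp"));

const client = new Client({ name: "congress-demo", version: "1.0.0" });
await client.connect(transport);

// ask_pipeworx takes a single natural-language question; Pipeworx routes it
// to the most relevant data source and returns the answer as tool content.
const result = await client.callTool({
  name: "ask_pipeworx",
  arguments: { question: "Which recent bills address student loan forgiveness?" },
});
console.log(result.content);
```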
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It explains that the tool picks the right tool and fills in the arguments, but does not disclose any side effects, auth needs, rate limits, or data source limitations. A score of 3 is appropriate because it adds some transparency beyond the input schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with three sentences and includes examples. It is front-loaded with the core purpose, though the examples could be slightly more varied to cover different use cases.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has a single parameter and no output schema, the description adequately explains its function. It could mention that the tool may use external data sources or that results can vary, but it is largely complete for its simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description adds context by explaining the 'question' parameter should be a natural language request, with examples. This provides practical guidance beyond the schema description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool accepts plain English questions and returns answers from the best data source. It distinguishes itself from sibling tools by being a general-purpose question-answering tool, unlike specific tools like get_bill or search_bills.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description advises using the tool for natural language requests without needing to browse tools or learn schemas, providing examples. However, it does not explicitly state when not to use it or mention alternatives among sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade: A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
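A small sketch of the expected argument shape, written as a hypothetical helper around the same SDK client; the function name and example default are illustrative.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical helper: search the Pipeworx tool catalog for tools matching a task description.
async function discoverTools(client: Client, query: string, limit = 20) {
  // limit is optional per the schema: default 20, max 50.
  return client.callTool({ name: "discover_tools", arguments: { query, limit } });
}
```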
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool performs a search based on a description and returns tool names and descriptions. However, it does not mention whether the search is case-sensitive, how ranking works, or whether there are any side effects (likely none). The behavioral traits are mostly clear for a search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each with a distinct role: first states the core function, second describes the output, third provides a usage directive. No wasted words; front-loaded with the key action 'Search the Pipeworx tool catalog'.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool is a simple search with 2 parameters and no output schema, the description adequately explains its purpose and usage. It lacks details like whether the search is fuzzy or exact, but for a discovery tool, the provided context is sufficient for an agent to invoke it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds value by providing an example query format ('analyze housing market trends') and clarifying the purpose of the query parameter beyond the schema's generic description. The limit parameter is self-explanatory from the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches the Pipeworx tool catalog by a natural language query and returns the most relevant tools with names and descriptions. It specifies a concrete action ('search') and resource ('tool catalog'), and distinguishes itself from siblings like 'search_bills' which operate on different data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly instructs to call this first when 500+ tools are available, providing a clear use case and ordering relative to other tools. The phrase 'need to find the right ones' implies it is a discovery tool before using specific tools like 'get_bill' or 'search_bills'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade: A)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
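A hedged sketch of the call shape; the helper name is illustrative, and the key would be one previously created with remember.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical helper: delete one stored memory by key.
// The description does not say whether deleting a missing key fails or is a no-op.
async function forgetMemory(client: Client, key: string) {
  return client.callTool({ name: "forget", arguments: { key } });
}
```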
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description indicates the tool is destructive (deletion), which is clear. However, without annotations, it does not disclose whether deletion is permanent, reversible, or requires confirmation, nor any side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that front-loads the action and resource, with no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 required param, no output schema, no annotations), the description is nearly complete. It could mention whether the operation is idempotent or what happens if the key does not exist, but for a straightforward delete, it suffices.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema covers 100% of parameters with descriptions. The tool's description adds no extra meaning for the 'key' parameter beyond identifying which memory to delete. Since schema coverage is high, a baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete'), the resource ('stored memory'), and the identifier ('by key'). It distinguishes itself from siblings like 'remember' (store) and 'recall' (retrieve).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when a memory needs to be deleted by key, but does not provide explicit guidance on when to use alternatives (e.g., if multiple keys need deletion, or if deletion is conditional).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_bill (Grade: A)
Get full details for a congressional bill by its ID. Returns text, sponsors, cosponsors, committee assignments, actions, and vote history.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | GovTrack bill ID (numeric) | |
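A sketch of the call shape; the helper is hypothetical, and the numeric GovTrack ID would normally come from a prior search_bills result.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical helper: fetch full details for one bill by its numeric GovTrack ID.
async function getBill(client: Client, id: number) {
  return client.callTool({ name: "get_bill", arguments: { id } });
}
```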
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description indicates the tool fetches details, which is a read operation. No annotations exist, and the description does not provide additional behavioral traits (e.g., rate limits, caching, data freshness). It adequately conveys non-destructive behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no fluff. Perfectly concise and front-loaded with purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a simple single-parameter tool with no output schema, the description is nearly complete. It could optionally mention the return format or an example usage, but that is not necessary for clarity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description doesn't add further meaning to the single parameter 'id' beyond what the schema provides (numeric GovTrack ID).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it retrieves full details for a single bill by a specific ID. The verb 'Get' and resource 'bill' are precise, and the scope 'full details' is well-defined.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when a specific bill's full details are needed. However, it doesn't explicitly state when not to use this tool (e.g., for searching bills) or mention alternatives like search_bills.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_members (Grade: B)
Get current members of Congress with their name, party, state, district (for representatives), and contact information.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results to return (default: 50, max: 600) | |
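A sketch of the call shape with its single optional parameter; the helper is hypothetical.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical helper: list current members of Congress.
async function getMembers(client: Client, limit = 50) {
  // limit is optional per the schema: default 50, max 600.
  return client.callTool({ name: "get_members", arguments: { limit } });
}
```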
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description provides basic behavioral information: it returns current members, not historical ones. No mention of performance, rate limits, or pagination. Adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single-sentence description is efficient: it states the purpose and lists the output fields. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given simple schema (one optional param) and no output schema, the description adequately covers purpose and output. Could mention default behavior or pagination but not critical for a simple list tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single 'limit' parameter described. Description does not add extra meaning beyond schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves current members of Congress, implicitly covering both senators and representatives, and lists the returned fields. It distinguishes itself from sibling tools like get_bill, which focuses on legislation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance. However, the description implies it's for general member lookup. No mention of alternatives for filtered queries (e.g., by state), but siblings don't directly overlap.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_votes (Grade: B)
Get recent congressional votes on bills. Returns question, result, chamber, vote counts (yes/no/abstain), date, and related bill.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of votes to return (default: 20, max: 100) | |
| congress | No | Congress number to filter by (e.g., 119) | |
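A sketch showing both optional parameters; the helper is hypothetical.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical helper: fetch recent votes, optionally filtered by Congress number (e.g., 119).
async function getVotes(client: Client, limit = 20, congress?: number) {
  // Both parameters are optional per the schema: limit defaults to 20 (max 100).
  return client.callTool({
    name: "get_votes",
    arguments: congress === undefined ? { limit } : { limit, congress },
  });
}
```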
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must carry the full behavioral disclosure burden. It correctly indicates that the tool retrieves data (non-destructive) and specifies the output fields. However, it does not mention any additional behaviors like rate limits, required permissions, or default behavior when no filters are applied.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short sentences of reasonable length. It is front-loaded with the main action ('Get recent congressional votes') and then lists the returned fields.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (2 optional parameters, no output schema), the description is adequate but could mention the default ordering or time range. It does not explain the meaning of 'recent' or whether results are paginated.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already describes the parameters (limit and congress). The description does not add extra meaning beyond what the schema provides, so a baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states what the tool does ('Get recent congressional votes') and lists the key information returned (question, result, chamber, vote counts, related bill). It distinguishes itself from siblings like 'get_bill' or 'search_bills' by focusing on votes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'search_bills' or 'get_bill'. There is no mention of prerequisites, context, or limitations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade: A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
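A sketch of the tool's dual mode (fetch one key vs. list all); the helper is hypothetical.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical helper: retrieve one memory by key, or list all stored keys when key is omitted.
async function recallMemory(client: Client, key?: string) {
  return client.callTool({
    name: "recall",
    arguments: key === undefined ? {} : { key },
  });
}
```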
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the dual behavior (retrieve by key vs list all) and mentions persistence across sessions. However, it does not state whether the tool is read-only or if it has side effects. The description is adequate but not exhaustive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the core functionality, and every word adds value. It is concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no output schema and no annotations, the description covers the main usage scenarios. It could be more complete by mentioning if the tool is read-only or if keys are case-sensitive, but it is sufficient for an agent to use correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (key described in schema). The description adds that omitting key lists all memories, which complements the schema's 'omit to list all keys' hint. The description adds value beyond the schema by clarifying the retrieval behavior.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a memory by key or lists all memories when key is omitted. It explicitly distinguishes two modes of operation with a specific verb 'Retrieve' and 'list'. No sibling tools have similar functionality, so no confusion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description says to use it to retrieve context saved earlier, implying when to use. It doesn't explicitly state when not to use or alternatives, but given the sibling tools (remember, forget), the context is clear. The guidance is clear enough for an agent to decide.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade: A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
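A sketch of the call shape; the helper and example values are illustrative.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical helper: store a key-value pair in session memory.
// Per the description, anonymous sessions keep memories for only 24 hours.
async function rememberValue(client: Client, key: string, value: string) {
  return client.callTool({ name: "remember", arguments: { key, value } });
}

// Example (illustrative values): rememberValue(client, "user_preference", "prefers Senate votes only");
```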
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses memory persistence behavior: authenticated users get persistent memory, anonymous sessions last 24 hours. No annotations are provided, so description carries full burden. No contradictions. Would benefit from mentioning storage limits or overwrite behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each with a distinct purpose: action, use cases, persistence details. Efficient and front-loaded. Minor improvement: could be slightly more concise by merging first two sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool is simple (2 params, no output schema). Description covers purpose, use cases, and persistence. Could mention if overwriting is allowed or if values are automatically trimmed. Overall adequate for a straightforward memory tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with clear parameter examples. Description adds value by explaining purpose of key-value pair and context of use, complementing schema examples like 'subject_property'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Store a key-value pair in your session memory', specifying verb (store) and resource (key-value pair). Distinguishes from sibling 'recall' and 'forget' by naming the action and memory type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit use cases: 'save intermediate findings, user preferences, or context across tool calls'. Also mentions persistence differences between authenticated and anonymous users, but does not explicitly say when not to use it (e.g., for large data).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_bills (Grade: A)
Search US congressional bills by keyword. Returns bill type, number, title, status, sponsor, and introduction date. Use get_bill with the ID for full details.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results to return (default: 10, max: 100) | |
| query | Yes | Keywords to search for in bill titles | |
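A sketch of the search-then-detail pattern the description suggests; the helper name is hypothetical.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical helper: keyword search over bill titles.
// Pick an ID from the results and pass it to get_bill for full details.
async function searchBills(client: Client, query: string, limit = 10) {
  // limit is optional per the schema: default 10, max 100.
  return client.callTool({ name: "search_bills", arguments: { query, limit } });
}
```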
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It states the search is by keyword on bill titles, which is useful, but does not disclose whether it searches other fields, any pagination behavior, or rate limits. This is adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences, front-loaded with the main purpose, followed by the returned fields and a pointer to get_bill for full details. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description compensates by listing the returned fields, and the schema describes the limit and query parameters. The tool is simple, and the description is sufficiently complete for a search.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already describes both parameters. The description adds no extra semantics beyond the field list, but the schema does a good job. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Search', the resource 'US congressional bills', and the search criterion 'by keyword'. It also lists the key fields returned, distinguishing it from sibling tools like get_bill, get_members, and get_votes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when searching by keyword for bills, but does not explicitly state when not to use it or suggest alternatives among siblings. However, given the tool name and context, usage is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!