Govbase
Server Details
U.S. federal policy data — bills, Congress members, voting records, and civic info.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools (10)

get_member: Get Member of Congress Details (Grade A, Read-only)
Retrieve detailed information about a specific U.S. member of Congress by their Bioguide ID (e.g., "P000197" for Nancy Pelosi).
| Name | Required | Description | Default |
|---|---|---|---|
| bioguide_id | Yes | Bioguide ID (e.g., "P000197") | |
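To make the calling convention concrete, here is a sketch (not from the source) of the JSON-RPC 2.0 `tools/call` payload an MCP client would send to invoke this tool; the tool name and example Bioguide ID come from the documentation above, while the request id is arbitrary:

```python
import json

# MCP "tools/call" request for get_member (JSON-RPC 2.0 framing).
request = {
    "jsonrpc": "2.0",
    "id": 1,  # arbitrary request id
    "method": "tools/call",
    "params": {
        "name": "get_member",
        # Example Bioguide ID from the docs above (Nancy Pelosi).
        "arguments": {"bioguide_id": "P000197"},
    },
}
print(json.dumps(request, indent=2))
```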
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description is consistent with the readOnlyHint annotation (both indicate a safe read operation). It adds valuable context by providing a concrete example mapping P000197 to Nancy Pelosi, helping agents understand the ID format and data type. It does not disclose error handling or rate limits, but annotations cover the critical safety profile.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that is front-loaded with the action verb. The example is integrated efficiently without redundancy. Every word earns its place—'detailed' signals scope, the example clarifies the ID format, and no filler text is present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter read operation with good schema coverage and annotations, the description is complete. It clarifies the input requirement and gives a representative example. No output schema exists, so return value explanation is not required per evaluation rules.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage with the mechanical description 'Bioguide ID (e.g., P000197)', the description adds semantic meaning by identifying the example ID as belonging to Nancy Pelosi. This helps agents understand that the parameter represents a specific person identifier, not just an arbitrary code.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Retrieve' with clear resource 'detailed information about a specific U.S. member of Congress'. The phrase 'by their Bioguide ID' effectively distinguishes this from sibling search_members (which searches without ID) and get_member_vote (which retrieves votes, not details).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying the Bioguide ID requirement and providing an example (P000197 for Nancy Pelosi), signaling this is for direct lookup when an ID is known. However, it does not explicitly name search_members as the alternative for when the ID is unknown.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_member_vote: Get Member Vote on Policy (Grade A, Read-only)
Check how a specific U.S. member of Congress voted on a policy by providing their Bioguide ID and the policy ID. Returns all roll call votes for that member on the given policy.
| Name | Required | Description | Default |
|---|---|---|---|
| policy_id | Yes | Policy ID to check vote on | |
| bioguide_id | Yes | Bioguide ID of the member | |
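Both parameters are required, so a hedged sketch of the arguments an agent would assemble (the member ID reuses get_member's documented example, and the policy ID reuses the format documented for get_policy):

```python
import json

# Arguments for get_member_vote: both fields are required.
arguments = {
    "bioguide_id": "P000197",       # example member ID from get_member's docs
    "policy_id": "bill-119-hr-22",  # example policy ID format from get_policy's docs
}
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_member_vote", "arguments": arguments},
}
print(json.dumps(request))
```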
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotation provides readOnlyHint=true, confirming safe read access. The description adds valuable behavioral context beyond annotations: it specifies the scope ('U.S. member of Congress'), clarifies the return type ('all roll call votes' plural), and distinguishes this from single-vote lookups. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficiently structured sentences with zero redundancy: the first establishes purpose and inputs, the second clarifies return behavior. Information is front-loaded and every phrase earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple 2-parameter input structure and read-only nature, the description adequately covers the tool's function. The mention of 'roll call votes' partially compensates for the missing output schema. Minor gap: could hint at where to obtain Bioguide IDs given sibling tools exist, but this is not critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the structured data already documents both parameters adequately. The description mentions both IDs but adds no additional semantic context (such as ID format examples, valid ranges, or lookup sources) beyond what the schema provides, meeting the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Check how...voted'), target resource (U.S. member of Congress vote), and required inputs (Bioguide ID and policy ID). It effectively distinguishes itself from siblings like get_member (profile data) and get_policy (policy details) by focusing specifically on vote records.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies prerequisites by specifying the need for Bioguide ID and policy ID, but offers no explicit guidance on when to use this versus alternatives (e.g., 'use search_members first to find the Bioguide ID') or workflow sequencing. Users must infer the lookup requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_policy: Get Policy Details (Grade A, Read-only)
Retrieve detailed information about a specific U.S. federal policy by its ID (e.g., "bill-119-hr-22" or "eo-2025-1234").
| Name | Required | Description | Default |
|---|---|---|---|
| policy_id | Yes | Policy ID (e.g., "bill-119-hr-22" or "eo-2025-1234") | |
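Since every tool on this server follows the same `tools/call` shape, a small client-side helper can build the request; the helper itself is an illustrative sketch, not part of the server's API. Both documented policy_id formats (a bill and an executive order) are shown:

```python
import json
from itertools import count

_request_ids = count(1)

def tools_call(name, arguments):
    """Build a JSON-RPC 2.0 MCP tools/call request (illustrative helper)."""
    return {
        "jsonrpc": "2.0",
        "id": next(_request_ids),
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Both ID formats documented above are valid policy_id values.
bill_req = tools_call("get_policy", {"policy_id": "bill-119-hr-22"})
eo_req = tools_call("get_policy", {"policy_id": "eo-2025-1234"})
print(json.dumps(bill_req))
```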
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, confirming safe read operations. The description adds the ID format examples (bill-119-hr-22, eo-2025-1234), but these examples are already present in the schema property description, so the description adds minimal unique behavioral context beyond what structured fields provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that is front-loaded with the action, specifies the resource, identifies the key parameter mechanism (by its ID), and includes helpful format examples. Every element earns its place with no redundant or filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single parameter, read-only operation, no output schema), the description is complete. It adequately explains what is retrieved and how to identify the resource without needing to describe return values or complex behavioral side effects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the parameter is fully documented in the schema itself. The description repeats the ID format examples found in the schema but does not add additional semantic meaning, constraints, or usage guidance for the parameter beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a specific verb (Retrieve), resource (U.S. federal policy), and scope (detailed information). It effectively distinguishes from siblings like search_policies (implied by 'specific...by its ID' vs search) and get_policy_text (detailed info vs text).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'by its ID' provides clear context that this tool is for direct lookups when a specific identifier is known. However, it does not explicitly mention sibling alternatives like search_policies for when the ID is unknown, or get_policy_text for when only the text content is needed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_policy_text: Get Policy Full Text (Grade A, Read-only)
Retrieve the full text of a U.S. federal policy document. Returns the raw legislative or regulatory text.
| Name | Required | Description | Default |
|---|---|---|---|
| format | No | Text format ("markdown" or "plain") | markdown |
| policy_id | Yes | Policy ID | |
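Because format is optional and defaults to markdown (the markdown/plain enum is noted in the evaluation), a sketch of argument assembly that validates the format client-side; the helper name is hypothetical:

```python
def policy_text_args(policy_id, fmt=None):
    """Assemble get_policy_text arguments; fmt maps to the optional 'format'
    parameter, which the server defaults to "markdown" when omitted."""
    if fmt is not None and fmt not in ("markdown", "plain"):
        raise ValueError('format must be "markdown" or "plain"')
    args = {"policy_id": policy_id}
    if fmt is not None:
        args["format"] = fmt
    return args

# Omitting fmt lets the server apply its documented default (markdown).
print(policy_text_args("bill-119-hr-22"))
print(policy_text_args("bill-119-hr-22", "plain"))
```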
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, indicating a safe read operation. The description adds that it 'Returns the raw legislative or regulatory text,' which clarifies the content format (unprocessed text). However, it omits details about text length limits, encoding, or whether the full text could be extremely large.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description consists of two efficient sentences with zero waste. The first sentence front-loads the core action and resource; the second clarifies the return value. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 parameters, 100% schema coverage, readOnly annotation), the description is sufficiently complete. It compensates for the missing output schema by specifying that 'raw' text is returned. However, it could briefly mention the format parameter's impact on the returned text.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with 'policy_id' and 'format' (enum: markdown/plain) already documented in the schema. The description does not add semantic context beyond the schema (e.g., it doesn't explain that format defaults to markdown or when to choose plain), warranting the baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool 'Retrieve[s] the full text of a U.S. federal policy document' using specific verb and resource. The emphasis on 'full text' and 'raw legislative or regulatory text' effectively distinguishes it from the sibling tool 'get_policy' (which likely returns metadata/summary) and 'search_policies'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description provides no guidance on when to use this tool versus siblings like 'get_policy' or 'search_policies'. No prerequisites (e.g., needing a valid policy_id from search results) or exclusions are mentioned. The agent must infer usage context solely from the name.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_representatives: Get Representatives (Grade A, Read-only)
Look up elected representatives for a given U.S. address, including federal, state, and local officials.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Street address (e.g., "1600 Pennsylvania Ave, Washington DC") | |
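A sketch of the corresponding `tools/call` payload, using the address example given in the parameter table above (the request id is arbitrary):

```python
import json

# tools/call request for get_representatives.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_representatives",
        # Address example from the parameter table above.
        "arguments": {"address": "1600 Pennsylvania Ave, Washington DC"},
    },
}
print(json.dumps(request))
```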
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotation declares readOnlyHint=true, and the description confirms this with 'Look up.' It adds valuable behavioral context beyond the annotation by specifying the scope of returned data: 'including federal, state, and local officials,' which helps the agent understand the breadth of the response.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise, front-loaded sentence with zero redundancy. Every phrase serves a purpose: the action ('Look up'), the resource ('elected representatives'), the input method ('for a given U.S. address'), and the scope ('federal, state, and local officials').
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (1 parameter, 100% schema coverage, simple string input), the readOnly annotation, and the absence of an output schema, the description provides sufficient context for an agent to invoke the tool correctly. A minor gap remains in not describing the return structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds value by constraining the address parameter to 'U.S. address' (geographic specificity not explicitly stated in the schema example) and contextualizing it as the lookup key, warranting a score above baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Look up' with the resource 'elected representatives' and clearly scopes the operation to 'a given U.S. address.' This distinguishes it from sibling tools like get_member (likely ID-based) and search_members (likely query-based) by specifying the address-based lookup method.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying the input requirement ('for a given U.S. address'), which signals when to use this tool versus ID-based alternatives. However, it lacks explicit when-not guidance or named alternatives (e.g., it doesn't clarify when to use get_voter_info versus this tool).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_stories: Get Story Bundles (Grade A, Read-only)
List current story bundles tracking ongoing policy issues. Returns a feed of active stories grouping related policies and developments.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of stories | |
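limit is the only parameter and it is optional; a sketch of both call forms (the limit value of 5 is arbitrary, chosen for illustration):

```python
import json

# get_stories with no arguments: the server returns its default-sized feed.
default_feed = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_stories", "arguments": {}},
}

# get_stories with an explicit limit.
capped_feed = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_stories", "arguments": {"limit": 5}},
}
print(json.dumps(capped_feed))
```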
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, confirming the safe read operation. The description adds valuable behavioral context not in annotations: it describes the return format ('feed') and the grouping logic ('grouping related policies and developments'), helping the agent understand the data structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. The first sentence front-loads the core purpose (listing bundles), while the second efficiently describes the return behavior and grouping semantics. No redundant or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only list operation with one optional parameter and no output schema, the description adequately covers the tool's function and return behavior. It appropriately omits parameter details (covered by schema) and reasonably omits output schema details (none provided).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single 'limit' parameter, the schema fully documents the interface. The description adds no parameter-specific details, which is acceptable given the baseline of 3 for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('List') and resource ('story bundles'), with clear scope ('current', 'ongoing policy issues'). It implicitly distinguishes from sibling 'get_story' via plural 'bundles' and the concept of grouping, though it doesn't explicitly state when to use the singular vs. plural variant.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage context through terms like 'current', 'active', and 'ongoing', suggesting temporal relevance. However, it lacks explicit guidance on when to use this versus 'get_story' (singular) or 'search_policies', leaving the agent to infer from the 'bundles' concept.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_story: Get Story Details (Grade A, Read-only)
Retrieve detailed information about a specific story bundle by its ID, including the full timeline of related events.
| Name | Required | Description | Default |
|---|---|---|---|
| bundle_id | Yes | Story bundle ID | |
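The bundle_id would normally come from a prior get_stories call; no ID format is documented, so the value in this sketch is a purely hypothetical placeholder:

```python
import json

# tools/call request for get_story. "example-bundle-id" is a hypothetical
# placeholder; real IDs come from a prior get_stories response.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_story",
        "arguments": {"bundle_id": "example-bundle-id"},
    },
}
print(json.dumps(request))
```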
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare readOnlyHint=true (safe read), the description adds valuable behavioral context by specifying the response includes 'the full timeline of related events', giving insight into the data structure returned without contradicting the read-only nature.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, efficiently constructed sentence with zero waste. Front-loaded with action verb 'Retrieve', immediately specifying what (detailed information), target (story bundle), method (by ID), and bonus context (timeline) without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 1-parameter read operation with good annotations, the description is adequate. It compensates for lack of output schema by mentioning 'full timeline'. Minor gap: does not mention error handling (e.g., invalid ID) or that 'bundle' refers to the story aggregation concept.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage (bundle_id described as 'Story bundle ID'), the baseline is 3. The description mentions 'by its ID' which aligns with the parameter, but does not add format constraints, examples, or semantic details beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Retrieve' with clear resource 'story bundle' and scope 'by its ID, including the full timeline'. This effectively distinguishes from sibling 'get_stories' (plural/list) by emphasizing 'specific' and ID-based lookup.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'specific story bundle by its ID' provides clear context that this tool is for direct ID lookups, implicitly contrasting with 'get_stories' (likely for listing/searching). However, it does not explicitly state when to use the sibling tool or how to obtain a bundle_id.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_voter_info: Get Voter Information (Grade A, Read-only)
Get election and polling location information for a registered U.S. voter address, including upcoming elections and early vote sites.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Registered voter address | |
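No address example is documented for this tool, so the sketch below borrows the street-address example from get_representatives purely for illustration; a real call needs the voter's actual registered address:

```python
import json

# tools/call request for get_voter_info.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_voter_info",
        # Illustrative address borrowed from get_representatives' docs.
        "arguments": {"address": "1600 Pennsylvania Ave, Washington DC"},
    },
}
print(json.dumps(request))
```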
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description is consistent with the readOnlyHint annotation (using 'Get' and describing informational retrieval). It adds valuable specificity about the data returned (early vote sites, upcoming elections) that annotations don't cover. However, it omits behavioral details like error handling for invalid addresses or non-U.S. addresses, and lacks pagination or rate limit context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficiently structured sentence that front-loads the action and scope. Every clause earns its place: the main clause defines the core function, while the prepositional phrase and 'including' clause specify input requirements and output details without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (single string parameter, read-only operation) and available annotations, the description adequately covers the functional scope and expected data types. While it cannot compensate for the missing output schema, it successfully enumerates the key information categories (elections, polling locations, early vote sites) that would be returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is appropriately 3. The description reinforces the 'address' parameter by adding the 'U.S.' context and linking it to the output, but does not add syntax guidance (e.g., 'full street address with zip code') or format examples beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and clearly identifies the resource (election and polling location information, including upcoming elections and early vote sites) and scope (registered U.S. voter address). It effectively distinguishes itself from sibling tools like get_member or get_policy by focusing on voter-specific civic data rather than legislative information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage context by specifying the input requirement ('registered U.S. voter address'), which signals when to use the tool. However, it lacks explicit guidance on when to use this versus similar civic tools like get_representatives, or what constitutes a valid registered voter address format.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_members: Search Members of Congress (Grade A, Read-only)
Search current U.S. members of Congress by name, state, party, or chamber. Returns a list of matching members with key details.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results | |
| party | No | Political party | |
| query | No | Search by name | |
| state | No | Two-letter state code (e.g., "CA") | |
| chamber | No | Congressional chamber | |
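All five parameters are optional. A sketch of client-side argument assembly that drops unset filters and enforces the two-letter state code shown in the table (the helper itself is hypothetical, not part of the server):

```python
def search_members_args(query=None, state=None, party=None,
                        chamber=None, limit=None):
    """Assemble search_members arguments; every parameter is optional."""
    if state is not None and not (len(state) == 2 and state.isalpha()):
        raise ValueError('state must be a two-letter code, e.g. "CA"')
    provided = {
        "query": query, "state": state, "party": party,
        "chamber": chamber, "limit": limit,
    }
    # Omit unset filters so only explicit criteria reach the server.
    return {key: value for key, value in provided.items() if value is not None}

print(search_members_args(state="CA", limit=5))
```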
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While the annotation establishes the read-only safety profile, the description adds valuable behavioral context by specifying the return format ('Returns a list of matching members with key details'). This compensates for the missing output schema. However, it omits pagination behavior and filtering logic (AND vs OR).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences: the first declares the filtering capabilities, the second the return format. There is no redundant or wasted text; every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description adequately covers the tool's core function and return type given the lack of an output schema. However, it should mention that all parameters are optional (0 required) and clarify how multiple filters interact (e.g., conjunctive filtering).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description maps conceptual search criteria ('name, state, party, or chamber') to the tool's purpose but does not add semantic meaning beyond the schema's own descriptions (e.g., it doesn't clarify that 'query' is fuzzy search while others are exact filters).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('Search'), resource ('current U.S. members of Congress'), and searchable dimensions ('name, state, party, or chamber'). It implicitly distinguishes from the sibling 'get_member' through the plural 'members' and verb 'Search' versus 'Get', though it does not explicitly clarify when to use one versus the other.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to prefer this tool over siblings like 'get_member' (likely for specific ID-based retrieval) or 'get_representatives'. It also fails to note that all parameters are optional, which is critical for a search tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
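To make the all-optional-parameters point concrete, here is a sketch of what a call to this search tool might look like as an MCP `tools/call` request. The tool name (`search_members`) and the JSON-RPC envelope are assumptions for illustration; the filter names come from the review above, which notes that every one of them is optional.

```python
import json

# Hypothetical tools/call payload for the member-search tool reviewed above.
# "search_members" is an assumed name; the filters (state, party, chamber)
# are taken from the description. Because all parameters are optional, an
# empty "arguments" object would also be a valid call.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_members",  # assumed tool name
        "arguments": {
            "state": "CA",
            "party": "Democratic",
            "chamber": "House",
        },
    },
}

print(json.dumps(request, indent=2))
```

Because the filters are presumably conjunctive, this request would narrow results to House Democrats from California, though the description reviewed above never states that interaction explicitly.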
search_policies — Search Policies (Grade B, Read-only)
Search U.S. federal policies and legislation by keyword, topic, or bill name. Returns a list of matching policies with summaries.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search query (bill name, topic, keyword) | |
| sort | No | Sort order | importance |
| limit | No | Number of results (max 25) | |
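The parameter table above can be turned into a small payload builder. This is a sketch, not Govbase's actual client: the JSON-RPC envelope and helper name are assumptions, while the required `query`, the `importance` default for `sort`, and the 25-result cap on `limit` come directly from the table.

```python
import json

def build_search_policies_call(query, sort="importance", limit=None):
    """Assemble a hypothetical tools/call payload for search_policies.

    query is required; sort defaults to "importance" per the table;
    limit is capped at 25 per the table's constraint.
    """
    if limit is not None and limit > 25:
        raise ValueError("limit may not exceed 25")
    arguments = {"query": query, "sort": sort}
    if limit is not None:
        arguments["limit"] = limit
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "search_policies", "arguments": arguments},
    }

print(json.dumps(build_search_policies_call("clean energy", limit=10), indent=2))
```

Note how the builder omits `limit` entirely when the caller does not set one, leaving the server's own default in effect rather than guessing at it.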
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, indicating a safe read operation. The description adds valuable context that results include 'summaries' (distinguishing from full text retrieval) and specifies the scope as 'U.S. federal'. However, it lacks details on rate limits, pagination behavior beyond the limit parameter, or handling of zero-result scenarios.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste. It is front-loaded with the core action ('Search U.S. federal policies') and immediately follows with input methods and return format. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description appropriately discloses that the tool returns 'a list of matching policies with summaries', providing essential context about the return structure. For a three-parameter search tool with simple types and read-only behavior, this is adequately complete, though specifying the data source or coverage dates could further enhance it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all parameters adequately (query accepts bill name/topic/keyword, sort options, limit bounds). The description reinforces the query parameter's multi-modal usage but does not add significant semantic value beyond what the schema provides, meeting the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches 'U.S. federal policies and legislation' using specific criteria (keyword, topic, bill name) and identifies the return type (list with summaries). However, it does not explicitly differentiate from sibling tool 'get_policy', which likely retrieves specific policies by identifier rather than searching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get_policy' or 'get_policy_text'. It does not mention prerequisites (e.g., when to search vs. retrieve directly) or exclusion criteria that would help an agent select the correct tool from the available set.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
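Before publishing, a server owner might sanity-check the file locally. The sketch below assumes only what the snippet above shows: a valid file is JSON with a non-empty `maintainers` list whose entries carry an `email` key. Glama's real verifier may enforce more (for example, the `$schema` value or email ownership).

```python
import json

def looks_like_valid_glama_json(text):
    """Loose local check of a /.well-known/glama.json payload.

    Mirrors only the structure shown in the example above; it does not
    replicate Glama's server-side verification.
    """
    try:
        doc = json.loads(text)
    except json.JSONDecodeError:
        return False
    maintainers = doc.get("maintainers")
    return (
        isinstance(maintainers, list)
        and len(maintainers) > 0
        and all(isinstance(m, dict) and "email" in m for m in maintainers)
    )

sample = (
    '{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
    ' "maintainers": [{"email": "owner@example.com"}]}'
)
print(looks_like_valid_glama_json(sample))  # True
```

Remember that the email in the file must match your Glama account email, which no local check can confirm.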
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.