CongressMCP-full
Server Quality Checklist
This repository includes a README.md file.
Add a LICENSE file by following GitHub's guide.
MCP servers without a LICENSE cannot be installed.
Latest release: v2.0.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
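A minimal glama.json sketch, assuming the directory's current metadata schema (the `$schema` URL and `maintainers` field are our best understanding, and the username is a placeholder):

```json
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": ["your-github-username"]
}
```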
- This server provides 20 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Tool Scores
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but offers minimal detail. It mentions 'Returns professional-grade research data' but does not disclose rate limits, caching behavior, authentication requirements, or whether operations are read-only versus mutating. The mention of 'enhanced analytics' is vague without quantitative or qualitative context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
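The structured annotations referred to here are the MCP behavioral hints. A hedged sketch of what this server's wire-format tool definition could carry (the tool name comes from this review; the hint values are assumptions about a read-only research API, not confirmed behavior):

```python
# Hypothetical tool definition showing MCP behavioral annotation hints.
# The hint values are assumptions, not this server's confirmed behavior.
tool_definition = {
    "name": "search_committees",
    "description": "Search congressional committees by chamber and type.",
    "annotations": {
        "readOnlyHint": True,      # no server-side state is mutated
        "destructiveHint": False,  # only meaningful when readOnlyHint is False
        "idempotentHint": True,    # repeated calls return the same data
        "openWorldHint": True,     # talks to an external API
    },
}

assert tool_definition["annotations"]["readOnlyHint"] is True
```

With hints like these in place, the prose description only needs to cover what the hints cannot express (rate limits, auth, caching).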
Conciseness: 3/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The structure uses clear section headers and bullet points, which aids scannability. However, the opening sentence repeats the title verbatim, and the six operation bullets consume space without explicitly mapping to the required 'operation' parameter values. The 'Key params' line at the end is redundant given the schema already lists parameter names.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness: 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 10 parameters with zero schema documentation and a required 'operation' switch parameter, the description is incomplete. It fails to specify valid values for the 'operation' parameter (critical for invocation), explain parameter dependencies, or document the four parameters not mentioned in the 'Key params' list. While an output schema exists (reducing the need for return value documentation), the input contract remains severely under-specified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters: 2/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% (all ten parameters lack descriptions), requiring the description to compensate heavily. While it lists six operation names that presumably map to the 'operation' parameter, it does not explicitly state these are valid enum values, nor does it explain which parameters apply to which operations (e.g., does 'report_number' work with 'get_congress_info'?). The other four parameters (current, limit, detailed, format_type) are completely undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
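One way to close that gap is to declare the valid operations directly in the input schema as a documented enum. A sketch, with operation and parameter names taken from this review ('search_crs_reports', 'get_congress_info', 'report_number'); the full operation set is an assumption:

```python
# Sketch: declaring the required 'operation' switch as a documented enum.
# Names come from the review's examples; the complete list supported by
# the server is an assumption.
CRS_OPERATIONS = [
    "search_crs_reports",
    "get_crs_report",
    "get_congress_info",
]

input_schema = {
    "type": "object",
    "properties": {
        "operation": {
            "type": "string",
            "enum": CRS_OPERATIONS,
            "description": "Which CRS/analytics operation to perform.",
        },
        "report_number": {
            "type": "string",
            "description": "CRS report number; used only by get_crs_report.",
        },
    },
    "required": ["operation"],
}

assert "operation" in input_schema["required"]
```

Per-parameter descriptions like these also note which operations each parameter applies to, answering the dependency question the description leaves open.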
Purpose: 3/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the resource domain (CRS reports, Congress analytics) and lists six specific operations, but frames the tool as a generic 'Access' mechanism rather than clarifying it acts as a router/switch based on the 'operation' parameter. It does not effectively distinguish from siblings like 'bills' or 'committee_intelligence' which also access congressional data, leaving ambiguity about when to use this aggregate tool versus specific alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus the seventeen sibling tools (e.g., when to use 'search_crs_reports' via this tool versus other research tools). No mention of prerequisites, required parameter combinations, or workflow context. The six operation bullets imply capabilities but do not constitute usage guidelines.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
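What such guidance could look like when folded into the tool description — a hypothetical rewrite (the sibling tool names come from this review; the routing advice itself is an assumption):

```python
# Hypothetical usage-guidance block for the tool description.
USAGE_GUIDANCE = """\
Use this tool for CRS research reports and congress-level analytics.
Use 'bills' instead for bill text and status; use 'committee_intelligence'
for committee activity. Requires 'operation'; call search_crs_reports
before get_crs_report to obtain a report_number.
"""

assert "report_number" in USAGE_GUIDANCE
```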
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It mentions 'content chunking' and 'enhanced metadata' which adds useful behavioral context, but omits critical safety information (read-only vs. destructive), rate limits, or authentication requirements implied by 'Professional access'.
Conciseness: 3/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The structured bullet format for categories is helpful, but listing operation counts (e.g., '7 operations') without naming them wastes space. The 'Key params' section is good front-loading, though the opening title repetition is redundant.
Completeness: 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high complexity (19 parameters, polymorphic operation mode, 0% schema coverage), the description is insufficient. It categorizes operations but doesn't enumerate them or explain selection logic, leaving critical gaps for an agent attempting to construct valid invocations.
Parameters: 2/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fails to compensate adequately. While it lists 'Key params', it does not explain the crucial 'operation' parameter's valid values (the 19 sub-operations), nor explain chunking semantics, date formats, or parameter interactions required for this complex multi-mode tool.
Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool provides access to committee documents and activities, breaking functionality into three distinct categories (Reports, Prints, Meetings). However, it fails to clearly articulate that this is a polymorphic tool where the 'operation' parameter selects between 19 distinct sub-operations, which is critical for correct invocation.
Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance is provided on when to use this tool versus siblings like 'get_committee_reports' or 'search_committees', despite clear functional overlap. There is no explanation of how to select appropriate operations or prerequisites for using specific document retrieval modes.
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, leaving full burden on description. Mentions 'List of matching committees with basic information' but fails to clarify pagination behavior, default sorting, what 'basic information' includes, or error handling when no matches exist.
Conciseness: 3/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Uses structured Args/Returns format which is readable, but opens with redundant 'Search for congressional committees' that restates the tool name. The Returns section is somewhat superfluous given the output schema exists.
Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Acceptable for a search tool with output schema present (reducing need for detailed return documentation). Compensates for zero schema coverage via Args section, but gaps remain in behavioral details and sibling differentiation necessary for a tool with 15+ siblings.
Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the Args section provides essential value: documents chamber values ('House', 'Senate', 'Joint'), committee_type examples ('Standing', 'Select'), and limit purpose. However, 'etc.' for committee_type leaves values incomplete, and no format constraints are specified.
Purpose: 3/5
Does the description clearly state what the tool does and how it differs from similar tools?
States the core action ('Search for congressional committees') but lacks specificity regarding what distinguishes this from sibling tools like committee_intelligence or get_committee_bills. The Args section implies search by chamber/type, but the opening statement is vague and tautological with the tool name.
Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this versus alternatives (e.g., get_committee_bills for legislative activity vs. this for organizational listing). No mention of prerequisites or filtering strategies when all parameters are optional.
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but fails to mention critical traits like whether this is a read-only operation, rate limiting, pagination behavior beyond the limit parameter, or authentication requirements.
Conciseness: 3/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The docstring format with Args/Returns sections is structured, but the inclusion of 'ctx: Context for API requests' is wasteful since ctx is not in the input schema and agents cannot provide it. The rest of the content earns its place.
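The 'ctx' noise can be avoided by keeping framework-injected parameters out of agent-facing documentation entirely. A sketch of the distinction (the function signature is illustrative, not the server's actual code):

```python
import inspect

def get_committee_reports(ctx, committee_code: str, limit: int = 20):
    """Illustrative stand-in for the server's tool function; ctx is
    injected by the framework and never supplied by the agent."""

def agent_facing_params(fn):
    # Everything except the framework-injected context belongs in the
    # input schema and in the docstring's Args section.
    return [p for p in inspect.signature(fn).parameters if p != "ctx"]

assert agent_facing_params(get_committee_reports) == ["committee_code", "limit"]
```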
Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple two-parameter retrieval tool with an output schema available (per context signals), the description is minimally adequate. It compensates for the lack of schema descriptions but could improve by clarifying what constitutes a 'report' in this domain.
Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 0% schema description coverage, the description effectively compensates by providing concrete examples for committee_code ('HSJU', 'SSJU') and clarifying that limit controls the maximum number of reports returned. However, it wastefully includes 'ctx' which is not present in the input schema.
Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves 'reports issued by a specific committee,' distinguishing itself from sibling tools like get_committee_bills or get_committee_communications by specifying the document type (reports). However, it lacks specificity about what kind of reports (e.g., legislative, oversight) which prevents a perfect score.
Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like get_committee_communications or committee_intelligence, nor does it mention prerequisites (e.g., needing a valid committee_code from search_committees first).
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It partially succeeds by stating the return format ('structured record/hearing data with full text content'), but fails to mention read-only status, rate limits, pagination behavior, or authentication requirements.
Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description uses clear visual hierarchy with category headers (CONGRESSIONAL RECORDS, COMMUNICATIONS, HEARINGS) and bullet points for scannability. The 'Key params' section is usefully front-loaded. Minor deduction for repeating the title in the first sentence.
Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given high complexity (17 parameters, 0% schema coverage) and presence of an output schema, the description adequately catalogs available operations but insufficiently documents parameter interactions. The return value description is acceptable given the output schema exists, but parameter documentation gaps are significant for a tool with this many arguments.
Parameters: 2/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% (titles only, no descriptions). The description lists 'Key params' by name but provides no semantic explanation—critical parameters like 'jacket_number', 'communication_type', and date formats (for 'from_date_time') remain undefined. The mapping between the bulleted operation names and the 'operation' parameter is implied but not explicit.
Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the three resource domains (Congressional Records, Communications, Hearings) and uses specific verbs (search, get, access). It distinguishes from siblings like 'bills' or 'amendments' by focusing on legislative records and hearing content rather than legislative text or voting data.
Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description catalogs 16 specific operations, it provides no explicit guidance on when to use this tool versus sibling tools like 'get_committee_communications' or 'get_committee_bills'. It lacks prerequisites, exclusions, or decision criteria for selecting between overlapping functionality.
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses return structure ('structured vote/nomination data with member details and legislative actions'), which is helpful. However, it omits safety profile (read-only vs destructive), rate limits, or authentication requirements typical for government APIs.
Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear section headers (HOUSE VOTING, NOMINATIONS) and bullet points. Information is front-loaded with the core purpose. The operation enumeration is dense but organized efficiently without verbose prose.
Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (11 parameters, 13 sub-operations) and presence of an output schema, the description adequately covers the operation catalog and return data categories. However, with 0% schema coverage and no annotations, the lack of parameter semantics leaves a significant gap for a tool requiring specific operation selection and filtering logic.
Parameters: 2/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. While it lists 'Key params' by name (operation, congress, session, etc.), it provides no semantic explanation for ambiguous fields like 'ordinal', 'session', or 'sort' syntax. The parameter list merely repeats parameter names without clarifying usage patterns.
Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it provides 'Access House votes and presidential nominations' and enumerates 13 specific operations (6 voting, 7 nominations). However, it does not explicitly distinguish from sibling tools like get_committee_nominations or committee_intelligence, which also handle congressional nomination data.
Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description lists available operations, it provides no guidance on when to use this tool versus siblings like get_committee_nominations or records_and_hearings. No prerequisites, filtering recommendations, or exclusion criteria are mentioned.
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While it mentions returning a 'List of nominations,' it omits critical behavioral traits such as pagination behavior, sorting order, rate limits, or error conditions (e.g., invalid committee codes). The mention of 'ctx' parameter is extraneous as it does not appear in the input schema.
Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The Args/Returns structure is clear and scannable. The first sentence immediately conveys purpose. However, including 'ctx' in Args adds slight confusion since it is not part of the user-facing input schema, and the Returns section is somewhat tautological ('List of nominations referred to the committee').
Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists (per context signals), the brief return description is acceptable. Both input parameters are documented despite zero schema coverage. However, for a specialized government data tool, the description lacks context on what constitutes a 'nomination' in this domain and omits pagination details that would be necessary for handling large result sets.
Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 0% schema description coverage, the description effectively documents both parameters: committee_code includes concrete examples ('HSJU', 'SSJU') and limit explains its purpose ('Maximum number of nominations'). This successfully compensates for the bare schema, though it could further clarify valid formats or constraints for committee_code.
Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the core action ('Get nominations') and scope ('referred to a specific committee'), providing a specific verb and resource. However, it fails to differentiate from sibling committee tools like get_committee_bills or get_committee_communications, leaving the agent to infer this handles nominations specifically.
Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, nor are prerequisites mentioned (e.g., whether committee_code should be obtained via search_committees first). The description lacks explicit when/when-not conditions.
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While it states the return type ('List of bills and resolutions'), it lacks critical behavioral details: pagination behavior when results exceed the limit, default sorting order, error handling for invalid bioguide_ids, or what specific fields are included in the returned bills.
Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The docstring format with Args and Returns sections is structured and appropriately sized. Each section serves a purpose. Minor deduction for including the non-user-facing 'ctx' parameter in the Args section, which wastes a small amount of cognitive load.
Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists (documenting return structure externally), the description adequately covers inputs for this 2-parameter tool. However, it lacks important contextual gaps: differentiation from sponsored legislation, pagination cursor handling, and guidance on obtaining bioguide_id values. Minimum viable but clear gaps remain.
Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 0% schema description coverage, the Args section effectively documents both user-facing parameters: bioguide_id is described as a 'Unique bioguide identifier' and limit as 'Maximum number of cosponsored bills to return'. However, it confusingly includes 'ctx' (likely a framework-injected context parameter not present in the input schema), which is noise for the agent.
Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'legislation cosponsored by a specific member of Congress' with a specific verb and resource. However, it fails to explicitly differentiate from the sibling tool 'get_member_sponsored_legislation', which is a critical distinction for an agent selecting between the two.
Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, particularly the sibling 'get_member_sponsored_legislation'. There are no prerequisites mentioned (e.g., needing a valid bioguide_id from another tool), no exclusion criteria, and no workflow context.
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It only lists basic parameter names and return type without disclosing pagination behavior (beyond limit), authentication requirements, rate limits, or what constitutes a valid 'congress' number range. The 'ctx' parameter is exposed without explanation of its purpose.
Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Uses structured docstring format (Args/Returns) which is readable. The first sentence efficiently captures the core purpose. However, listing 'ctx' in Args wastes space (appears to be internal infrastructure), and the Returns section merely restates what an output schema already conveys.
Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a listing endpoint with output schema available. The description covers all parameters, but given the complexity of congressional data (historical vs. current members, chamber distinctions, pagination), it should address whether results include both chambers, how to handle large result sets, or data freshness.
Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Critical compensation for 0% schema description coverage. The description adds valuable semantics for all user-facing parameters: 'congress' includes a helpful example (118 for 118th Congress), 'current_member' clarifies the boolean logic, and 'limit' explains the maximum return constraint. Only 'ctx' is documented with boilerplate.
Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
States a clear action (Get) and resource (members) with scope (served in specific Congress). However, it doesn't explicitly differentiate from siblings like 'get_members_by_congress_state_district' or 'get_members_by_state', leaving some ambiguity about when to prefer this over filtered alternatives.
Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus sibling tools (e.g., when to filter by state vs. retrieving all members). No mention of prerequisites, error conditions, or when to use the 'current_member' filter.
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully documents API constraints (250 result limit for 'API compliance', date format ISO 8601) and notes sub-amendment relationships, but fails to disclose safety characteristics (read-only vs destructive), rate limits, error handling patterns, or pagination behavior beyond the offset parameter existence.
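Given the disclosed 250-result cap and an offset parameter, an agent-side drain loop is straightforward. A sketch against a hypothetical fetch callable (the wrapper name and the fake backend are ours; only the 250 page size comes from the documented cap):

```python
def fetch_all(fetch_page, page_size=250):
    """Drain an offset-paginated operation. 'fetch_page' is any callable
    taking limit/offset, e.g. a wrapper around search_amendments; the
    default page size mirrors the documented 250-result API cap."""
    results, offset = [], 0
    while True:
        page = fetch_page(limit=page_size, offset=offset)
        results.extend(page)
        if len(page) < page_size:  # short page means we've hit the end
            return results
        offset += page_size

# Fake backend with 7 records and a page size of 3, to show the loop.
data = list(range(7))
fake = lambda limit, offset: data[offset:offset + limit]
assert fetch_all(fake, page_size=3) == data
```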
Conciseness: 3/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is structurally organized with clear sections but suffers from verbosity and repetition—operations are listed multiple times (in CORE OPERATIONS summary and again in categorized lists). The 'Args' and 'Returns' sections largely replicate schema information. The front-loaded marketing language ('Comprehensive Amendments Tool') wastes space that could clarify behavioral constraints.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (11 parameters, 7+ distinct operations via the operation parameter) and lack of annotations, the description provides minimum viable coverage. It leverages the existence of an output schema to avoid detailing return formats, but misses operational context such as authentication requirements, error scenarios, or the hierarchical relationship between amendment operations (e.g., that details requires type/number from search results).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage (titles only), but the description effectively compensates by documenting 9 of 11 parameters with specific semantics: congress (118/119 examples), amendment_type (hamdt/samdt enums), date format (YYYY-MM-DDTHH:MM:SSZ), limit constraints, and sort options. It notably omits descriptions for 'format' and 'offset' parameters, leaving gaps in the parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
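For concreteness, the parameter semantics recovered above can be sketched as a call payload. This is a minimal illustration assembled only from values the description documents (congress 118/119, hamdt/samdt enums, the YYYY-MM-DDTHH:MM:SSZ date format, the 250-result cap); the payload shape and the `from_date` parameter name are assumptions, not part of the published schema:

```python
# Hypothetical payload for the amendments tool. Only the values in comments
# come from the tool description; 'from_date' is an assumed parameter name.
payload = {
    "operation": "search_amendments",     # operation values are not enumerated in the schema
    "congress": 118,                      # 118 or 119 per the description's examples
    "amendment_type": "samdt",            # 'hamdt' (House) or 'samdt' (Senate)
    "from_date": "2023-01-01T00:00:00Z",  # YYYY-MM-DDTHH:MM:SSZ format
    "limit": 250,                         # documented maximum for API compliance
}

# Client-side checks mirroring the documented constraints.
assert payload["amendment_type"] in {"hamdt", "samdt"}
assert 1 <= payload["limit"] <= 250
```

A schema with enum constraints and per-property descriptions would make this kind of reverse-engineering unnecessary.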
Purpose4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies this as an amendments-focused tool and enumerates seven specific operations (get_amendments, search_amendments, etc.) with their functions. It distinguishes from siblings like 'bills' or 'members' by focusing exclusively on amendment legislative data, though the 'comprehensive' framing slightly obscures that this is a monolithic router tool rather than a single operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description lists available operations grouped by category (Search, Details, Process, Content), it provides no guidance on when to use specific operations versus others (e.g., get_amendments vs search_amendments) or prerequisites for operations requiring specific IDs. No alternatives or exclusions are mentioned to help the agent decide against using sibling tools like 'bills' for related legislative data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It partially satisfies this by describing the return value ('biographical data, terms served'), but omits error handling (e.g., invalid ID), rate limits, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness3/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The docstring format (Args/Returns) is structured but includes 'ctx: Context for API requests', which describes an internal implementation detail not present in the input schema, adding noise for the agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description mentions return values, the existence of an output schema means this is optional. However, for a tool requiring a specific opaque identifier (bioguide_id), the description should explain how to acquire it (workflow context), which is missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage, so the description provides essential compensation by documenting the bioguide_id parameter with its purpose ('Unique bioguide identifier') and an example ('B000944'). It loses a point for not explaining how to obtain this ID (e.g., via search_members).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
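The missing workflow context noted above amounts to a two-step pattern: resolve the opaque identifier first, then perform the lookup. A sketch, where both functions are stand-ins for the real MCP tool calls (their signatures and return shapes are assumed; only the 'B000944' example comes from the description):

```python
def search_members(name: str) -> list[dict]:
    """Stand-in for a search_members call; returns candidate members."""
    return [{"bioguide_id": "B000944"}]  # example ID from the description

def get_member_details(bioguide_id: str) -> dict:
    """Stand-in for get_member_details; requires an opaque bioguide_id."""
    return {"bioguide_id": bioguide_id, "terms": [], "biography": "..."}

# Step 1: obtain the opaque identifier from a search.
candidates = search_members("Brown")
bioguide_id = candidates[0]["bioguide_id"]

# Step 2: use it for the single-member lookup.
details = get_member_details(bioguide_id)
```

Stating this prerequisite chain in the description would let an agent plan both calls up front instead of failing on an unknown identifier.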
Purpose4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves detailed information about a specific member of Congress using the verb 'Get' and resource 'member'. However, it does not explicitly distinguish this single-member lookup from sibling bulk retrieval tools like get_members_by_congress or search_members.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like search_members (to find the ID) or the various bulk member retrieval siblings. There is no mention of prerequisites, such as needing to obtain a bioguide_id first.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully documents the API compliance limit (max 250) and date format constraints (YYYY-MM-DDTHH:MM:SSZ), but fails to disclose safety characteristics (read-only vs. destructive), rate limiting behavior, or error handling patterns.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness3/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description suffers from structural redundancy, listing operations twice (bullet points under 'TREATIES OPERATIONS' and dash list under 'TREATIES'). The Args section is well-structured, but the opening 'Treaties and Summaries Tool' restates the title without adding value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a polymorphic tool with 12 parameters and an output schema, the description is incomplete. It fails to enumerate valid values for the required 'operation' parameter (only referring to 'list above'), does not describe the output format despite the presence of an output schema, and omits documentation for two parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Given 0% schema description coverage, the description compensates well by explaining 10 of 12 parameters, including specific enum-like values for 'sort' (updateDate+desc/asc) and congress number examples (118, 119). However, it omits explanations for 'format' and 'offset' parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool provides 'Focused access to treaties and bill summaries' and enumerates specific operations (search_treaties, get_treaty_text, etc.). However, it does not clearly differentiate when to use this tool versus the sibling 'bills' tool for bill summary access.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description lists available operations grouped by category (Search & Discovery, Legislative Process, Content), it provides no guidance on when to use this tool versus alternatives like 'bills', nor does it specify prerequisites or selection criteria for choosing between operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It clarifies the relationship scope ('referred to or reported by') but fails to address safety characteristics (read-only vs. destructive), error handling for invalid committee codes, pagination behavior beyond the 'limit' parameter, or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The Args/Returns structure is clear and front-loaded with the core purpose. It efficiently documents parameters without excessive verbosity. However, mentioning 'ctx' (an internal context parameter) adds noise that AI agents typically don't need to see.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a two-parameter tool with an output schema available, the description is minimally adequate. It covers the basic parameters but lacks contextual nuances such as whether 'referred to' and 'reported by' are inclusive filters, the time range of returned bills, or how to handle committees with no bills.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Given 0% schema description coverage, the description effectively compensates by documenting both user-facing parameters: it provides concrete examples for committee_code ('HSJU', 'SSJU') and clarifies that limit controls the maximum bills returned. It loses one point for including 'ctx' (likely internal plumbing) and lacking format constraints (e.g., regex for committee codes).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves bills 'referred to or reported by a specific committee,' providing a specific verb and resource. However, it lacks explicit differentiation from sibling tools like 'bills' or 'search_committees' that could also retrieve bill information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives such as the generic 'bills' tool or 'search_committees.' It omits prerequisites (e.g., needing a valid committee_code) and does not specify if this covers current congress only or historical data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the default behavior of 'current_member' (defaults to True), which is helpful. However, it lacks details on error handling (invalid districts/states), pagination behavior, or how historical data is handled when current_member is false.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description uses a clear docstring format with Args and Returns sections that efficiently organize information. The first sentence provides immediate clarity. While the Args/Returns structure is slightly formal for MCP descriptions, it remains readable and appropriately sized without filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (per context signals), the brief Returns section is acceptable. The description covers basic functionality but lacks edge case documentation (e.g., at-large districts, historical member depth, invalid input handling) that would be expected for a congressional data API with 0% schema coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 0% schema description coverage, the Args section effectively compensates by documenting all three parameters: state_code includes format guidance ('Two-letter') and examples, district clarifies it's a 'Congressional district number within the state', and current_member explains the boolean semantics. This adds significant value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves members representing a specific congressional district, using a specific verb ('Get') and resource ('member'). However, it doesn't clearly differentiate from the similar sibling tool 'get_members_by_congress_state_district', and there is slight ambiguity between singular ('the member') and plural ('Member(s)') references.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get_members_by_state' or 'get_members_by_congress_state_district'. It doesn't specify prerequisites (e.g., needing valid state codes) or exclusion criteria for when this tool shouldn't be used.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It documents default values (e.g., current_member defaults to True) and return type ('List of members'), but omits rate limits, pagination details beyond the limit parameter, authentication requirements, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Uses a clean docstring format (Args/Returns) that front-loads information efficiently. Minor deduction for referencing 'ctx' (context) which appears to be an internal parameter not present in the input schema, causing slight confusion.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the existence of an output schema (per context signals), the description appropriately does not exhaustively detail return values. However, with numerous sibling tools offering overlapping functionality, the description lacks contextual guidance necessary for an agent to select the correct member-retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Excellent compensation for the 0% schema description coverage. The description adds critical semantic meaning: provides concrete examples for state_code ('CA', 'TX', 'NY'), clarifies the boolean logic for current_member, and explains the limit parameter's purpose—none of which are described in the schema properties.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves 'members of Congress from a specific state' with a specific verb and resource. However, it does not explicitly distinguish from siblings like `get_members_by_congress_state_district` or `get_members_by_district`, which also filter members by geographic criteria.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like `search_members` (for name searches) or `get_members_by_district` (for specific districts). The `current_member` parameter implies historical filtering capability, but the description does not articulate selection criteria for this tool over its 15+ siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the return value ('List of bills and resolutions') and the limiting behavior, but lacks information on rate limits, error conditions for invalid bioguide_ids, or whether results cover only the current Congress or extend to earlier ones.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Uses standard Args/Returns docstring format which is structured and scannable. The inclusion of 'ctx' (not in schema) slightly wastes space, but overall information density is good with clear section headers.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter lookup tool with an output schema, the description adequately covers the basics. However, given the existence of closely related sibling tools and the specialized domain (Congressional bioguide IDs), it should explain how this differs from cosponsored legislation and hint at how to obtain the required identifier.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage, so the description compensates by documenting both 'bioguide_id' (as 'Unique bioguide identifier') and 'limit' (as 'Maximum number of sponsored bills'). However, it mentions 'ctx' which is not in the input schema, and doesn't specify format constraints or valid ranges for the parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'legislation sponsored by a specific member of Congress,' providing a specific verb and resource. However, it fails to distinguish from the sibling tool 'get_member_cosponsored_legislation,' which is a critical distinction in legislative data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus 'get_member_cosponsored_legislation' or other member lookup tools. There is no mention of prerequisites (e.g., obtaining a bioguide_id from 'get_member_details' or 'search_members' first).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses useful behavioral traits: the bill_id auto-parsing logic and the 250-result API limit. However, it fails to disclose read-only vs. destructive status, error-handling behavior, or rate limiting beyond the existence of the limit parameter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness3/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description uses helpful structural headers (FLEXIBLE BILL IDENTIFICATION, CORE OPERATIONS) and front-loads the important flexible ID feature. However, the CORE OPERATIONS section is a lengthy uncategorized list that could be more concise, and the overall length is substantial relative to the information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (not shown but indicated), the brief 'Returns' statement is acceptable. With 16 parameters and complex operation routing, the description covers the critical path well (bill identification) but leaves gaps in pagination (offset), text chunking (chunk_number/size), and alternative date filtering (days_back), preventing a higher score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. It excellently documents bill_id formats (multiple examples), congress numbering (118/119), bill_type enumerations, sort options, and datetime formats. However, it completely omits 5 parameters: format, offset, days_back, chunk_number, and chunk_size, leaving significant gaps in the 16-parameter schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
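The "flexible bill identification" the description advertises implies some server-side parsing of a compound identifier. A hypothetical parser sketch; the 'hr1234-118' grammar here is an assumed example format, since the description's exact accepted formats are not reproduced in this report:

```python
import re

# Assumed bill_id grammar for illustration: type + number + '-' + congress.
BILL_ID_RE = re.compile(r"^(?P<bill_type>[a-z]+)(?P<number>\d+)-(?P<congress>\d+)$")

def parse_bill_id(bill_id: str) -> dict:
    """Split an identifier such as 'hr1234-118' into its components."""
    m = BILL_ID_RE.match(bill_id.lower())
    if not m:
        raise ValueError(f"unrecognized bill_id: {bill_id!r}")
    return {
        "bill_type": m.group("bill_type"),     # e.g. 'hr', 's', 'hjres'
        "bill_number": int(m.group("number")),
        "congress": int(m.group("congress")),  # e.g. 118 or 119
    }

# parse_bill_id("hr1234-118") -> {'bill_type': 'hr', 'bill_number': 1234, 'congress': 118}
```

Documenting the accepted identifier grammar this precisely in the tool description would let agents choose between the bill_id shortcut and the explicit congress/bill_type/bill_number parameters with confidence.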
Purpose4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool manages Congressional bills with specific operations (search, details, text, amendments, etc.) and categorizes them by function (Discovery, Metadata, Text, etc.). However, it doesn't explicitly distinguish when to use this tool versus the standalone 'amendments' sibling tool that appears to overlap with the get_bill_amendments operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear guidance on the two identification patterns (flexible bill_id vs. explicit congress/bill_type/bill_number parameters) with concrete examples. However, it lacks explicit 'when-to-use' guidance regarding sibling alternatives or prerequisites for specific operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable behavioral context noting 'partial matches supported' for names, but omits safety classification (read-only vs destructive), rate limiting, or behavior when called with no filters (all parameters are optional).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Uses structured Args/Returns format that is scannable. The opening sentence 'Search for members of Congress by various criteria' is slightly redundant with the tool name but acceptable. Each parameter description is concise and purposeful.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists, the minimal return value description ('Structured response with member information') is acceptable. However, with 6 optional parameters, the description should clarify whether the tool returns all members when no filters are applied or if certain combinations are required.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters5/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Excellent compensation for 0% schema description coverage. The Args section provides critical semantics: examples for state codes ('CA', 'NY', 'TX'), enum constraints for party ('D', 'R', 'I') and chamber ('House' or 'Senate'), and clarifies partial matching for names. Fully documents all 6 parameters beyond the schema's empty descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
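The enum constraints the Args section supplies can be enforced client-side before a call ever reaches the server. A validation sketch using only the documented values (party 'D'/'R'/'I', chamber 'House'/'Senate', two-letter state codes); the parameter names themselves are assumptions, since the report does not reproduce the schema:

```python
# Documented enum values from the search_members description.
VALID_PARTIES = {"D", "R", "I"}
VALID_CHAMBERS = {"House", "Senate"}

def validate_search_args(**args) -> dict:
    """Reject argument values the description rules out (names assumed)."""
    if "party" in args and args["party"] not in VALID_PARTIES:
        raise ValueError("party must be one of 'D', 'R', 'I'")
    if "chamber" in args and args["chamber"] not in VALID_CHAMBERS:
        raise ValueError("chamber must be 'House' or 'Senate'")
    if "state" in args and not (len(args["state"]) == 2 and args["state"].isalpha()):
        raise ValueError("state must be a two-letter code like 'CA' or 'NY'")
    return args

args = validate_search_args(state="CA", party="D", chamber="Senate")
```

This is exactly the check an agent can derive from a well-documented description; with empty schema descriptions, it is the only place those constraints live.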
Purpose4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches for members of Congress and supports various criteria. However, it fails to differentiate from sibling tools like 'get_members_by_state' or 'get_member_details', leaving ambiguity about which member-retrieval tool to use.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this broad search tool versus specific alternatives like 'get_members_by_state' or 'get_member_details'. No prerequisites or exclusions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to indicate whether this is a safe read-only operation, lacks pagination behavior details (despite having a limit parameter), and omits rate limits or auth requirements. Only basic return type information is provided.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with a clear single-sentence purpose, followed by structured Args/Returns sections. While the docstring format is slightly verbose for MCP, it is necessary given the schema coverage gap. No redundant or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a two-parameter retrieval tool with an output schema present (so return values need not be detailed). However, given the lack of annotations, the description should have disclosed safety characteristics and behavioral constraints, which are absent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters5/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by documenting both parameters: committee_code includes examples ('HSJU', 'SSJU') and context ('Official committee code'), while limit specifies its purpose ('Maximum number of communications to return').
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Get') plus resource ('communications (letters, statements)') plus scope ('from a specific committee'). The enumerated examples (letters, statements) precisely distinguish this from sibling tools like get_committee_bills or get_committee_nominations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description specifies what the tool retrieves, providing implicit context for selection, but lacks explicit guidance on when to choose this over sibling committee tools (e.g., 'use this for correspondence, use get_committee_bills for legislation').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Behavior (3/5): Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It adds return-value semantics ('Member who represented...') and implies a read-only lookup, but does not disclose error behavior (e.g., for invalid district numbers), rate limits, or whether the member object includes full biographical details or just identifiers.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness (5/5): Is the description appropriately sized, front-loaded, and free of redundancy?
Uses efficient docstring format with Args and Returns sections. Every sentence earns its place: opening line defines purpose, Args section provides examples for all parameters, Returns clarifies the single object response. No redundant or wasted language.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness (4/5): Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a lookup tool with output schema present. Covers all parameters and basic return semantics. Minor gap: does not specify behavior for invalid combinations (e.g., non-existent district numbers) or whether the tool validates that the district existed in that Congress.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters (5/5): Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by documenting all three parameters with concrete examples (e.g., '118 for 118th Congress', 'CA', 'TX', 'NY') and clarifying constraints (two-letter state code, district number within state) that the schema does not provide.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose (5/5): Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool retrieves 'the member representing a specific congressional district in a specific Congress' with specific verb (Get) and resource (member). The parameter combination (congress+state+district) inherently distinguishes it from siblings like get_members_by_congress (all members) or get_members_by_state (state-wide).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines (3/5): Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use guidance or alternative recommendations are provided. The specific parameter combination implies targeted historical lookups rather than bulk retrieval, but the description does not explicitly contrast the tool with siblings such as get_members_by_district, nor note that it returns a single member where other endpoints return lists.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you only need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you first need to add a glama.json file to the root of your repository:
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
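The arithmetic above can be sketched in a few lines. This is an illustrative reconstruction from the weights and thresholds stated on this page, not Glama's actual implementation; the function and dimension key names are hypothetical.

```python
# Sketch of the quality-score formula described above. Weights and tier
# thresholds come from this page; all names here are illustrative.

DIMENSION_WEIGHTS = {
    "purpose": 0.25,       # Purpose Clarity
    "usage": 0.20,         # Usage Guidelines
    "behavior": 0.20,      # Behavioral Transparency
    "parameters": 0.15,    # Parameter Semantics
    "conciseness": 0.10,   # Conciseness & Structure
    "completeness": 0.10,  # Contextual Completeness
}

def tool_tdqs(scores: dict) -> float:
    """Weighted 1-5 score for one tool across the six dimensions."""
    return sum(DIMENSION_WEIGHTS[d] * s for d, s in scores.items())

def overall_score(tool_scores: list, coherence: float) -> float:
    """70% Tool Definition Quality + 30% Server Coherence.

    Definition quality is 60% mean TDQS + 40% minimum TDQS, so a single
    poorly described tool pulls the whole server down.
    """
    tdqs = [tool_tdqs(s) for s in tool_scores]
    definition_quality = 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score to a letter tier; B and above is passing."""
    for threshold, letter in ((3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")):
        if score >= threshold:
            return letter
    return "F"
```

For example, the district-lookup tool reviewed above (Purpose 5, Usage 3, Behavior 3, Parameters 5, Conciseness 5, Completeness 4) works out to a TDQS of 4.1.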
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/amurshak/congressMCP'
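The same request can be made from Python's standard library. This is a sketch assuming only what the curl example shows, namely the URL shape and a JSON response; the response fields themselves are not documented here, so the decoded body is returned as-is.

```python
# Sketch of calling the MCP directory API without curl. Only the URL
# shape is taken from the curl example above; response fields are not
# documented here, so the raw decoded JSON is returned unchanged.
import json
import urllib.request

API_BASE = "https://glama.ai/api/mcp/v1"

def server_url(owner: str, name: str) -> str:
    """Build the endpoint URL for one server, e.g. amurshak/congressMCP."""
    return f"{API_BASE}/servers/{owner}/{name}"

def fetch_server(owner: str, name: str) -> dict:
    """GET the server record and decode the JSON body."""
    with urllib.request.urlopen(server_url(owner, name)) as resp:
        return json.load(resp)
```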
If you have feedback or need assistance with the MCP directory API, please join our Discord server.