UK Parliament MCP Server from MCPBundles
Server Details
Search MPs and Lords, fetch profiles, synopses, and Westminster constituencies
- Status: Unhealthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: thinkchainai/mcpbundles
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 4 of 4 tools scored.
Each tool has a clearly distinct purpose: two are for retrieving member details (one full, one synopsis), and two are for searching (members and constituencies). There is no overlap in functionality, and the descriptions clearly differentiate between search and get operations, making tool selection unambiguous for an agent.
All tool names follow a consistent 'parl-[action]-[resource]-360' pattern using kebab-case. This includes 'get' for retrieval, 'search' for queries, and consistent resource naming ('member', 'constituencies'), providing a predictable and readable naming convention throughout the set.
With 4 tools, the server is well-scoped for its parliamentary data domain, covering key operations like search and retrieval. It feels slightly thin, as it lacks update or delete operations, but this is reasonable given the likely read-only nature of the data, and each tool earns its place without redundancy.
The tool set provides good coverage for querying and retrieving parliamentary data, including member details and constituency searches. A minor gap exists in not offering direct CRUD operations (e.g., create, update), but this is likely intentional for a public data source, and agents can work effectively with the provided search and get functions.
Available Tools
4 tools

parl-get-member-360 (A) · Read-only · Idempotent
Get full details for one UK Parliament member by ID: names, party, house membership, and thumbnail URL.
| Name | Required | Description | Default |
|---|---|---|---|
| member_id | Yes | Parliament member ID (from search or external references). | |
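To make the call shape concrete, a JSON-RPC `tools/call` request for this tool might look like the sketch below. The `member_id` value is a hypothetical placeholder (not a real ID from this listing), and its exact type (string vs. number) is governed by the tool's input schema.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "parl-get-member-360",
    "arguments": { "member_id": 172 }
  }
}
```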
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows this is a safe, repeatable read operation. The description adds useful context by specifying the types of details returned (names, party, etc.), which goes beyond what annotations provide. However, it doesn't mention potential limitations like rate limits or authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and includes all essential information without any wasted words. Every element (action, resource, scope) earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read operation with good annotations and a well-documented single parameter, the description provides adequate context about what information is returned. However, without an output schema, it could benefit from more detail about the return format (e.g., structure of the response). The description covers the essentials but leaves some ambiguity about the exact output.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'member_id' fully documented in the schema. The description doesn't add any additional semantic information about the parameter beyond what's already in the schema (e.g., format examples, source constraints). Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get full details'), the resource ('one UK Parliament member'), and the scope of information returned ('names, party, house membership, and thumbnail URL'). It distinguishes this from sibling tools like parl-search-members-360 by focusing on retrieving detailed information for a specific member rather than searching across multiple members.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use this tool ('by ID') versus alternatives like search tools, but it doesn't explicitly name the sibling tools or state when not to use it. The context is clear for retrieving details of a known member, but lacks explicit exclusion guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
parl-get-member-synopsis-360 (A) · Read-only · Idempotent
Get the official plain-language synopsis for a member (HTML string with links). Use after resolving an ID via member search.
| Name | Required | Description | Default |
|---|---|---|---|
| member_id | Yes | Parliament member ID. | |
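Following the workflow the description suggests (resolve an ID via member search first, then fetch the synopsis), a request might look like this sketch; the `member_id` value is a hypothetical placeholder:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "parl-get-member-synopsis-360",
    "arguments": { "member_id": 172 }
  }
}
```

The result is an HTML string with links, so an agent consuming it should be prepared to strip or render markup rather than treat it as plain text.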
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context about the output format ('HTML string with links'), which isn't captured in annotations, enhancing behavioral understanding beyond the structured hints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences with zero waste. The first sentence states the purpose and output format, and the second provides critical usage guidance, making it front-loaded and efficiently structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one parameter, no output schema) and rich annotations, the description is mostly complete. It covers purpose, usage context, and output format, though it could optionally mention that the tool is safe and idempotent (already in annotations) for extra clarity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'member_id' fully documented in the schema as 'Parliament member ID.' The description doesn't add any parameter-specific details beyond what the schema provides, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get'), resource ('official plain-language synopsis for a member'), and output format ('HTML string with links'). It distinguishes this tool from siblings like 'parl-get-member-360' by focusing on the synopsis rather than general member data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides when-to-use guidance: 'Use after resolving an ID via member search.' This directly references the sibling tool 'parl-search-members-360' as a prerequisite, offering clear context for proper workflow sequencing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
parl-search-constituencies-360 (A) · Read-only · Idempotent
Search UK Westminster constituencies by name. Results may include current MP representation where available.
| Name | Required | Description | Default |
|---|---|---|---|
| skip | No | Number of results to skip for pagination (default 0). | |
| take | No | Page size (default 5, max 20). | |
| search_text | Yes | Constituency name or fragment to search. | |
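A paginated search request might look like the following sketch; the search text "York" is a hypothetical example, and `skip`/`take` are shown at their documented defaults (take may be raised to at most 20):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "parl-search-constituencies-360",
    "arguments": { "search_text": "York", "skip": 0, "take": 5 }
  }
}
```

To fetch the next page, repeat the call with `skip` advanced by the previous `take` (e.g. `"skip": 5`).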
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds useful behavioral context beyond annotations: it specifies that results 'may include current MP representation where available,' which helps set expectations about partial or conditional data in responses.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences with zero waste: the first states the core purpose, and the second adds important behavioral context about MP representation. It's front-loaded and efficiently communicates essential information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a search tool with good annotations (read-only, idempotent) and full schema coverage, the description is mostly complete. It covers purpose and adds context about MP representation. However, without an output schema, it could benefit from more detail on result format or pagination behavior, though the schema handles parameters well.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all parameters well-documented in the schema (skip for pagination, take for page size with max 20, search_text for name/fragment). The description mentions searching 'by name' and 'fragment,' aligning with search_text, but adds no significant semantic value beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search UK Westminster constituencies by name') and resource ('constituencies'), distinguishing it from sibling tools like parl-search-members-360 (which searches members) and parl-get-member-360 (which retrieves specific members). It adds valuable context about including MP representation where available.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching constituencies by name or fragment, but doesn't explicitly state when to use this tool versus alternatives like parl-search-members-360 for member searches. It provides clear context (searching constituencies) but lacks explicit exclusions or comparison to sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
parl-search-members-360 (A) · Read-only · Idempotent
Search UK Parliament members (Commons and Lords) by name. Returns a paginated list with party, house, and membership status.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Name text to search (e.g. surname or full name). | |
| skip | No | Number of results to skip for pagination (default 0). | |
| take | No | Page size (default 5, max 20). | |
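A name-based member search might be issued as in the sketch below; the surname "Smith" is a hypothetical example. The response is a paginated list carrying party, house, and membership status for each match, from which an agent can pick a `member_id` to pass to parl-get-member-360 or parl-get-member-synopsis-360.

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "parl-search-members-360",
    "arguments": { "name": "Smith", "skip": 0, "take": 5 }
  }
}
```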
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, idempotent, and non-destructive behavior, so the description doesn't need to repeat these. It adds valuable context beyond annotations by specifying the return format ('paginated list with party, house, and membership status') and the search scope ('Commons and Lords'), which helps the agent understand the tool's behavior and output structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose, scope, and output. It's front-loaded with the core functionality and includes no redundant information, making it highly concise and effective.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search with pagination), rich annotations (read-only, idempotent), and full schema coverage, the description is mostly complete. It specifies the return content but lacks details on output format (e.g., JSON structure) and error handling. Since there's no output schema, some gaps remain, but it's sufficient for basic usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for all parameters (name, skip, take). The description adds minimal semantic value beyond the schema, only implying that 'name' is used for searching members. It doesn't provide additional details like search algorithm or result ordering, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search'), resource ('UK Parliament members'), and scope ('by name'), distinguishing it from sibling tools like 'parl-get-member-360' (which likely retrieves a specific member) and 'parl-search-constituencies-360' (which searches constituencies instead of members). It specifies both Commons and Lords houses, making the purpose explicit and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('Search UK Parliament members... by name'), implying it's for name-based searches rather than other criteria. However, it doesn't explicitly state when not to use it or name alternatives (e.g., 'parl-get-member-360' for retrieving a specific member by ID), which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!