UK Legislation MCP Server from MCPBundles
Server Details
Search UK Acts, Statutory Instruments, and legislation with full text retrieval
- Status: Unhealthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: thinkchainai/mcpbundles
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score of 4.1/5 across all 4 tools.
Each tool has a clearly distinct purpose: get-act retrieves metadata, get-contents provides the table of contents, get-section fetches specific section text, and search performs keyword searches. There is no overlap in functionality, making it easy for an agent to select the correct tool based on the task.
All tool names follow a consistent pattern of 'legislation-verb-noun-8e9', using the same prefix, verb-noun structure, and suffix. This predictability enhances readability and reduces confusion for agents interacting with the server.
With 4 tools, the server is well-scoped for its purpose of accessing UK legislation. Each tool serves a distinct and essential function (metadata, structure, content, and search), ensuring a focused and efficient toolset without unnecessary complexity.
The toolset provides complete coverage for the domain of UK legislation access: it includes search capabilities, metadata retrieval, structural overview, and detailed content extraction. There are no obvious gaps, as agents can navigate from search to metadata to contents to specific sections seamlessly.
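The search-to-metadata-to-contents-to-section flow described above can be sketched as an ordered list of tool invocations. The tool names are taken from this listing; the argument values (the Equality Act 2010 and its section 149) are illustrative examples, not part of the server definition.

```python
# Hypothetical end-to-end flow an agent might follow against this
# server. Each step pairs a real tool name with example arguments.
steps = [
    ("legislation-search-8e9", {"title": "Equality Act"}),
    ("legislation-get-act-8e9",
     {"legislation_type": "ukpga", "year": 2010, "number": 15}),
    ("legislation-get-contents-8e9",
     {"legislation_type": "ukpga", "year": 2010, "number": 15}),
    ("legislation-get-section-8e9",
     {"legislation_type": "ukpga", "year": 2010, "number": 15,
      "section": "149"}),
]
for name, arguments in steps:
    print(f"{name}: {sorted(arguments)}")
```

Each step narrows the scope: search finds the act, get-act confirms it, get-contents reveals its structure, and get-section retrieves the text of interest.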
Available Tools
4 tools

legislation-get-act-8e9 (Read-only, Idempotent)
Get metadata for a specific UK Act or Statutory Instrument. Specify the legislation type (e.g. 'ukpga'), year, and number. Returns title, description, enactment date, and available formats.
| Name | Required | Description | Default |
|---|---|---|---|
| year | Yes | Year of enactment (e.g. 2010). | |
| number | Yes | Legislation number within that year (e.g. 15 for the Equality Act 2010). | |
| legislation_type | Yes | Type of legislation, e.g. 'ukpga' (Public General Act), 'uksi' (Statutory Instrument). | |
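The parameters above can be assembled into a standard MCP `tools/call` request. The JSON-RPC envelope below follows the MCP specification; how it is transported over Streamable HTTP depends on your client library, so treat this as an illustrative payload rather than a complete client.

```python
import json

def build_tool_call(name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 tools/call request for an MCP server."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Equality Act 2010: Public General Act ('ukpga'), year 2010, number 15.
payload = build_tool_call("legislation-get-act-8e9", {
    "legislation_type": "ukpga",
    "year": 2010,
    "number": 15,
})
print(json.dumps(payload, indent=2))
```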
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, idempotent, and non-destructive behavior, so the description adds limited value. It mentions the return format ('title, description, enactment date, and available formats'), which provides useful context beyond annotations, but does not cover aspects like error handling, rate limits, or authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by parameter guidance and return details. It uses two efficient sentences with no redundant information, making it easy to parse and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (3 required parameters, no output schema), the description is mostly complete: it states the purpose, parameters, and return values. However, it lacks explicit guidance on sibling tool differentiation and does not mention potential errors or edge cases, leaving minor gaps for an agent to infer usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with each parameter well-documented in the schema (e.g., 'legislation_type' enum values explained). The description adds minimal semantics by listing the parameters ('legislation type, year, and number') and giving examples, but does not provide additional meaning beyond what the schema already covers.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get metadata'), resource ('UK Act or Statutory Instrument'), and scope ('specific'), distinguishing it from sibling tools like 'legislation-search-8e9' (search) and 'legislation-get-contents-8e9' (get contents). It uses precise terminology like 'metadata' and lists return fields.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context by specifying that it's for 'a specific UK Act or Statutory Instrument' and lists required parameters, implying usage when these identifiers are known. However, it does not explicitly state when to use this tool versus alternatives like 'legislation-search-8e9' for broader queries or 'legislation-get-section-8e9' for detailed content.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
legislation-get-contents-8e9 (Read-only, Idempotent)
Get the table of contents for a UK Act or Statutory Instrument. Returns the hierarchical structure of parts, chapters, and sections with their titles and section numbers.
| Name | Required | Description | Default |
|---|---|---|---|
| year | Yes | Year of enactment (e.g. 2010). | |
| number | Yes | Legislation number (e.g. 15). | |
| legislation_type | Yes | Type of legislation, e.g. 'ukpga' (Public General Act), 'uksi' (Statutory Instrument). | |
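The description says the tool returns a hierarchy of parts, chapters, and sections, but no output schema is published. The sketch below assumes a nested tree of `{"title", "children", "section"}` objects, which is one plausible shape, and shows how an agent might flatten it to find section identifiers to pass on to get-section.

```python
def collect_sections(node, out=None):
    """Depth-first walk of an assumed TOC tree, collecting section ids."""
    if out is None:
        out = []
    if "section" in node:
        out.append(node["section"])
    for child in node.get("children", []):
        collect_sections(child, out)
    return out

# Hypothetical response fragment for the Equality Act 2010.
toc = {
    "title": "Equality Act 2010",
    "children": [
        {"title": "Part 2: Equality: key concepts", "children": [
            {"title": "Chapter 1: Protected characteristics", "children": [
                {"title": "Age", "section": "5"},
                {"title": "Disability", "section": "6"},
            ]},
        ]},
    ],
}
print(collect_sections(toc))  # → ['5', '6']
```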
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, indicating safe, repeatable operations. The description adds valuable context about what the tool returns (hierarchical structure with titles and numbers), which goes beyond annotations. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences: the first states the purpose and scope, the second details the return value. Every word contributes meaning with zero waste, and information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only tool with good annotations and full schema coverage, the description is mostly complete. It explains what the tool does and what it returns. However, without an output schema, it could benefit from more detail on return format (e.g., JSON structure), but the hierarchical description is sufficient for basic understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for all three parameters. The description doesn't add any parameter-specific information beyond what's in the schema, but it does reinforce that parameters identify specific legislation. Baseline 3 is appropriate since the schema fully documents parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get the table of contents'), resource ('for a UK Act or Statutory Instrument'), and output ('hierarchical structure of parts, chapters, and sections with their titles and section numbers'). It distinguishes from siblings by focusing on table of contents rather than full acts, specific sections, or search functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying it's for UK legislation and returns hierarchical structure, suggesting it's for navigation rather than content retrieval. However, it doesn't explicitly state when to use this tool versus alternatives like legislation-get-act-8e9 or legislation-get-section-8e9, though the focus on table of contents provides some differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
legislation-get-section-8e9 (Read-only, Idempotent)
Get the text of a specific section of UK legislation. Returns the section content as readable text with subsection numbering. Includes amendment notes and commencement information.
| Name | Required | Description | Default |
|---|---|---|---|
| year | Yes | Year of enactment (e.g. 2010). | |
| number | Yes | Legislation number (e.g. 15). | |
| section | Yes | Section identifier (e.g. '1', '149', '1A'). For Statutory Instruments, use 'regulation/3' format. | |
| legislation_type | Yes | Type of legislation, e.g. 'ukpga' (Public General Act), 'uksi' (Statutory Instrument). | |
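The `section` parameter changes format by legislation type: Acts take bare identifiers such as '1', '149', or '1A', while Statutory Instruments use the 'regulation/3' path form. A small helper (illustrative, not part of the server) makes that branching explicit:

```python
def section_identifier(legislation_type, number):
    """Format a section identifier for the get-section tool.

    Acts use bare section numbers (including letter suffixes like '1A');
    Statutory Instruments ('uksi') use the 'regulation/N' form.
    """
    if legislation_type == "uksi":
        return f"regulation/{number}"
    return str(number)

print(section_identifier("ukpga", "1A"))  # → 1A
print(section_identifier("uksi", 3))      # → regulation/3
```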
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context beyond annotations by specifying the return format ('readable text with subsection numbering') and included details ('amendment notes and commencement information'), which helps the agent understand the output behavior. No contradictions with annotations exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences that are front-loaded with the core purpose and efficiently detail the return content. Every sentence adds value—the first states the action and resource, the second specifies output details—with no wasted words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (4 required parameters) and rich annotations (covering safety and idempotency), the description is mostly complete. It clarifies the output format, which compensates for the lack of an output schema. However, it could be more complete by explicitly differentiating from sibling tools or mentioning any limitations (e.g., availability of sections).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all parameters well-documented in the schema (e.g., 'legislation_type' with enum values, 'section' with format examples). The description doesn't add any parameter-specific details beyond what the schema provides, so it meets the baseline for high schema coverage without compensating further.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get the text'), resource ('a specific section of UK legislation'), and output format ('readable text with subsection numbering, amendment notes, commencement information'). It distinguishes from sibling tools like 'legislation-get-act-8e9' (likely for entire acts) and 'legislation-search-8e9' (likely for searching) by focusing on retrieving a single section's text.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you need the text of a specific section, but it doesn't explicitly state when to use this tool versus alternatives like 'legislation-get-contents-8e9' (likely for table of contents) or 'legislation-search-8e9'. No exclusions or prerequisites are mentioned, leaving some ambiguity about the optimal context for this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
legislation-search-8e9 (Read-only, Idempotent)
Search UK legislation (Acts of Parliament, Statutory Instruments, etc.) by title keyword. Returns matching legislation with title, summary, year, number, and links.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number for pagination (1-based, default 1). | |
| year | No | Filter by year of enactment (e.g. 2010). | |
| title | Yes | Search term to match against legislation titles. | |
| results_count | No | Number of results to return (1-20, default 5). | |
| legislation_type | No | Type of legislation to search. Common values: 'ukpga' (Public General Act), 'uksi' (Statutory Instrument), 'asp' (Scottish Act), 'asc' (Welsh Act). Leave empty to search all types. | |
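Since `results_count` is bounded to 1–20 and `page` is 1-based, a caller can clamp values client-side before issuing the request. The helper below is an illustrative sketch of assembling the argument object; it is not part of the server.

```python
def build_search_arguments(title, year=None, legislation_type=None,
                           page=1, results_count=5):
    """Assemble arguments for the search tool, clamping results_count
    to the documented 1-20 range and keeping pagination 1-based."""
    args = {
        "title": title,
        "page": max(1, page),
        "results_count": min(20, max(1, results_count)),
    }
    if year is not None:
        args["year"] = year
    if legislation_type is not None:  # omitted => search all types
        args["legislation_type"] = legislation_type
    return args

args = build_search_arguments("equality", year=2010, results_count=50)
print(args)  # results_count is clamped to 20; legislation_type omitted
```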
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context beyond annotations by specifying the return format ('title, summary, year, number, and links') and search scope ('UK legislation'), which helps the agent understand what to expect without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by return details in the second. Both sentences are necessary and efficient, with no redundant information, making it appropriately sized and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search with filtering), rich annotations, and 100% schema coverage, the description is mostly complete. It lacks an output schema, but describes return values. However, it could improve by mentioning pagination or result limits implied by parameters, though annotations cover key behavioral traits.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well-documented in the schema. The description adds minimal semantic value beyond the schema by mentioning 'title keyword' search, but doesn't provide additional details like search behavior (e.g., partial matches) or parameter interactions. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search UK legislation'), resource ('Acts of Parliament, Statutory Instruments'), and mechanism ('by title keyword'), distinguishing it from sibling tools like 'legislation-get-act-8e9' which likely retrieve specific legislation rather than searching. It explicitly mentions what types of legislation are included.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching by title keyword, but doesn't explicitly state when to use this tool versus the sibling tools (e.g., 'legislation-get-act-8e9' for retrieving a specific act). It provides some context by mentioning what it returns, but lacks explicit guidance on alternatives or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
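Before publishing, you can sanity-check the file locally. The check below only verifies the structure shown in the example above (a `maintainers` array with a matching `email`); Glama's own verification may apply additional rules.

```python
import json

def has_maintainer_email(text, account_email):
    """Return True if the glama.json text lists account_email
    among its maintainers."""
    doc = json.loads(text)
    emails = [m.get("email") for m in doc.get("maintainers", [])]
    return account_email in emails

sample = '''{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}'''
print(has_maintainer_email(sample, "your-email@example.com"))  # → True
```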
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.