Glama

UK Legislation MCP Server from MCPBundles

Ownership verified

Server Details

Search UK Acts, Statutory Instruments, and other legislation, with full-text retrieval

Status: Unhealthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: thinkchainai/mcpbundles
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.1/5 across all 4 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: get-act retrieves metadata, get-contents provides the table of contents, get-section fetches specific section text, and search performs keyword searches. There is no overlap in functionality, making it easy for an agent to select the correct tool based on the task.

Naming Consistency: 5/5

All tool names follow a consistent pattern of 'legislation-verb-noun-8e9', using the same prefix, verb-noun structure, and suffix. This predictability enhances readability and reduces confusion for agents interacting with the server.

Tool Count: 5/5

With 4 tools, the server is well-scoped for its purpose of accessing UK legislation. Each tool serves a distinct and essential function (metadata, structure, content, and search), ensuring a focused and efficient toolset without unnecessary complexity.

Completeness: 5/5

The toolset provides complete coverage for the domain of UK legislation access: it includes search capabilities, metadata retrieval, structural overview, and detailed content extraction. There are no obvious gaps, as agents can navigate from search to metadata to contents to specific sections seamlessly.

Available Tools

4 tools
legislation-get-act-8e9 (grade A)
Read-only, Idempotent

Get metadata for a specific UK Act or Statutory Instrument. Specify the legislation type (e.g. 'ukpga'), year, and number. Returns title, description, enactment date, and available formats.

Parameters (JSON Schema)
- year (required): Year of enactment (e.g. 2010).
- number (required): Legislation number within that year (e.g. 15 for the Equality Act 2010).
- legislation_type (required): Type of legislation, e.g. 'ukpga' (Public General Act), 'uksi' (Statutory Instrument).
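As a sketch of how an agent might invoke this tool, the snippet below builds an MCP `tools/call` JSON-RPC request for the Equality Act 2010 (ukpga 2010 c. 15). The payload shape follows the MCP specification; the exact transport and request `id` are illustrative assumptions, not taken from this server's documentation.

```python
import json

# Hypothetical MCP tools/call payload for legislation-get-act-8e9,
# requesting metadata for the Equality Act 2010 (ukpga 2010 c. 15).
request = {
    "jsonrpc": "2.0",
    "id": 1,  # illustrative request id
    "method": "tools/call",
    "params": {
        "name": "legislation-get-act-8e9",
        "arguments": {
            "legislation_type": "ukpga",  # Public General Act
            "year": 2010,
            "number": 15,
        },
    },
}

print(json.dumps(request, indent=2))
```

The server would respond with the act's title, description, enactment date, and available formats, per the tool description above.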
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, and non-destructive behavior, so the description adds limited value. It mentions the return format ('title, description, enactment date, and available formats'), which provides useful context beyond annotations, but does not cover aspects like error handling, rate limits, or authentication needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by parameter guidance and return details. It uses two efficient sentences with no redundant information, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (3 required parameters, no output schema), the description is mostly complete: it states the purpose, parameters, and return values. However, it lacks explicit guidance on sibling tool differentiation and does not mention potential errors or edge cases, leaving minor gaps for an agent to infer usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with each parameter well-documented in the schema (e.g., 'legislation_type' enum values explained). The description adds minimal semantics by listing the parameters ('legislation type, year, and number') and giving examples, but does not provide additional meaning beyond what the schema already covers.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get metadata'), resource ('UK Act or Statutory Instrument'), and scope ('specific'), distinguishing it from sibling tools like 'legislation-search-8e9' (search) and 'legislation-get-contents-8e9' (get contents). It uses precise terminology like 'metadata' and lists return fields.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context by specifying that it's for 'a specific UK Act or Statutory Instrument' and lists required parameters, implying usage when these identifiers are known. However, it does not explicitly state when to use this tool versus alternatives like 'legislation-search-8e9' for broader queries or 'legislation-get-section-8e9' for detailed content.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

legislation-get-contents-8e9 (grade A)
Read-only, Idempotent

Get the table of contents for a UK Act or Statutory Instrument. Returns the hierarchical structure of parts, chapters, and sections with their titles and section numbers.

Parameters (JSON Schema)
- year (required): Year of enactment (e.g. 2010).
- number (required): Legislation number (e.g. 15).
- legislation_type (required): Type of legislation, e.g. 'ukpga' (Public General Act), 'uksi' (Statutory Instrument).
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, indicating safe, repeatable operations. The description adds valuable context about what the tool returns (hierarchical structure with titles and numbers), which goes beyond annotations. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences: the first states the purpose and scope, the second details the return value. Every word contributes meaning with zero waste, and information is front-loaded appropriately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only tool with good annotations and full schema coverage, the description is mostly complete. It explains what the tool does and what it returns. However, without an output schema, it could benefit from more detail on return format (e.g., JSON structure), but the hierarchical description is sufficient for basic understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for all three parameters. The description doesn't add any parameter-specific information beyond what's in the schema, but it does reinforce that parameters identify specific legislation. Baseline 3 is appropriate since the schema fully documents parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get the table of contents'), resource ('for a UK Act or Statutory Instrument'), and output ('hierarchical structure of parts, chapters, and sections with their titles and section numbers'). It distinguishes from siblings by focusing on table of contents rather than full acts, specific sections, or search functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying it's for UK legislation and returns hierarchical structure, suggesting it's for navigation rather than content retrieval. However, it doesn't explicitly state when to use this tool versus alternatives like legislation-get-act-8e9 or legislation-get-section-8e9, though the focus on table of contents provides some differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

legislation-get-section-8e9 (grade A)
Read-only, Idempotent

Get the text of a specific section of UK legislation. Returns the section content as readable text with subsection numbering. Includes amendment notes and commencement information.

Parameters (JSON Schema)
- year (required): Year of enactment (e.g. 2010).
- number (required): Legislation number (e.g. 15).
- section (required): Section identifier (e.g. '1', '149', '1A'). For Statutory Instruments, use 'regulation/3' format.
- legislation_type (required): Type of legislation, e.g. 'ukpga' (Public General Act), 'uksi' (Statutory Instrument).
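The `section` parameter changes shape between Acts and Statutory Instruments, which is worth illustrating. The two argument sets below are hypothetical sketches: the Act example uses section 149 of the Equality Act 2010 (the example given in the schema), while the SI example's year and number are placeholders, not a real citation.

```python
# Hypothetical arguments for legislation-get-section-8e9 against an Act:
# plain section identifiers like '1', '149', or '1A'.
act_section = {
    "name": "legislation-get-section-8e9",
    "arguments": {
        "legislation_type": "ukpga",
        "year": 2010,
        "number": 15,
        "section": "149",  # Equality Act 2010, s. 149
    },
}

# For a Statutory Instrument the schema asks for the 'regulation/N' form.
# Year and number here are placeholders for illustration only.
si_regulation = {
    "name": "legislation-get-section-8e9",
    "arguments": {
        "legislation_type": "uksi",
        "year": 2015,
        "number": 100,
        "section": "regulation/3",  # 'regulation/N' format for SIs
    },
}
```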
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context beyond annotations by specifying the return format ('readable text with subsection numbering') and included details ('amendment notes and commencement information'), which helps the agent understand the output behavior. No contradictions with annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences that are front-loaded with the core purpose and efficiently detail the return content. Every sentence adds value—the first states the action and resource, the second specifies output details—with no wasted words or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 required parameters) and rich annotations (covering safety and idempotency), the description is mostly complete. It clarifies the output format, which compensates for the lack of an output schema. However, it could be more complete by explicitly differentiating from sibling tools or mentioning any limitations (e.g., availability of sections).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with all parameters well-documented in the schema (e.g., 'legislation_type' with enum values, 'section' with format examples). The description doesn't add any parameter-specific details beyond what the schema provides, so it meets the baseline for high schema coverage without compensating further.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get the text'), resource ('a specific section of UK legislation'), and output format ('readable text with subsection numbering, amendment notes, commencement information'). It distinguishes from sibling tools like 'legislation-get-act-8e9' (likely for entire acts) and 'legislation-search-8e9' (likely for searching) by focusing on retrieving a single section's text.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when you need the text of a specific section, but it doesn't explicitly state when to use this tool versus alternatives like 'legislation-get-contents-8e9' (likely for table of contents) or 'legislation-search-8e9'. No exclusions or prerequisites are mentioned, leaving some ambiguity about the optimal context for this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

legislation-search-8e9 (grade A)
Read-only, Idempotent

Search UK legislation (Acts of Parliament, Statutory Instruments, etc.) by title keyword. Returns matching legislation with title, summary, year, number, and links.

Parameters (JSON Schema)
- page (optional, default 1): Page number for pagination (1-based).
- year (optional): Filter by year of enactment (e.g. 2010).
- title (required): Search term to match against legislation titles.
- results_count (optional, default 5): Number of results to return (1-20).
- legislation_type (optional): Type of legislation to search. Common values: 'ukpga' (Public General Act), 'uksi' (Statutory Instrument), 'asp' (Scottish Act), 'asc' (Welsh Act). Leave empty to search all types.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context beyond annotations by specifying the return format ('title, summary, year, number, and links') and search scope ('UK legislation'), which helps the agent understand what to expect without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by return details in the second. Both sentences are necessary and efficient, with no redundant information, making it appropriately sized and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search with filtering), rich annotations, and 100% schema coverage, the description is mostly complete. It lacks an output schema, but describes return values. However, it could improve by mentioning pagination or result limits implied by parameters, though annotations cover key behavioral traits.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are well-documented in the schema. The description adds minimal semantic value beyond the schema by mentioning 'title keyword' search, but doesn't provide additional details like search behavior (e.g., partial matches) or parameter interactions. Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search UK legislation'), resource ('Acts of Parliament, Statutory Instruments'), and mechanism ('by title keyword'), distinguishing it from sibling tools like 'legislation-get-act-8e9' which likely retrieve specific legislation rather than searching. It explicitly mentions what types of legislation are included.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for searching by title keyword, but doesn't explicitly state when to use this tool versus the sibling tools (e.g., 'legislation-get-act-8e9' for retrieving a specific act). It provides some context by mentioning what it returns, but lacks explicit guidance on alternatives or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
