
govuk-mcp

Ownership verified

Server Details

MCP server for GOV.UK — search, content retrieval, organisation lookup, and postcode resolution.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Tool Descriptions (Grade: A)

Average 4.2/5 across 6 of 6 tools scored. Lowest: 3.4/5.

Server Coherence (Grade: A)
Disambiguation: 4/5

Most tools have distinct purposes: content search (govuk_grep_content), organisation listing (govuk_list_organisations), postcode lookup (govuk_lookup_postcode), general search (govuk_search), resource listing (list_resources), and resource reading (read_resource). However, govuk_grep_content and govuk_search both search GOV.UK content, which could cause confusion about when to use each; their descriptions do clarify the distinct use cases (searching within a single content item versus site-wide search).

Naming Consistency: 3/5

The naming is mixed: four tools use a 'govuk_' prefix with descriptive names (e.g., govuk_grep_content), while two use a simpler verb_noun pattern without the prefix (list_resources, read_resource). This inconsistency breaks a clear pattern, though all names are readable and follow snake_case. The deviation makes the set feel less cohesive.

Tool Count: 5/5

With 6 tools, the count is well-scoped for a GOV.UK-focused server. It covers key areas like content access, organisation data, geographic lookup, and resource management without being overwhelming. Each tool serves a clear purpose, and the number aligns with typical server sizes (3-15 tools), avoiding bloat or thinness.

Completeness: 4/5

The tool set covers core GOV.UK interactions: searching and reading content, accessing organisation data, and handling postcode lookups. Minor gaps exist, such as no explicit tools for updating or deleting resources (if applicable) or more advanced content manipulation, but the provided tools allow agents to perform essential queries and navigation without major dead ends.

Available Tools

6 tools
govuk_grep_content: Search within a GOV.UK content body (Grade: A)
Read-only, Idempotent

Find body sections in a GOV.UK content item matching a pattern.

Returns a list of {anchor, heading, snippet, match} hits — small per-section snippets centred on the match — so the LLM can decide which full sections to read via govuk://content/{base_path}/section/{anchor}.

Use this when answering content-based questions ("what does this guide say about X?", "find the bit about eligibility") rather than navigating by section number (which uses the index resource).

The pattern is interpreted as a regex; if it fails to compile, the search falls back to a literal substring match.
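That compile-or-fallback behaviour can be sketched in Python. This is an illustrative sketch, not the server's actual implementation:

```python
import re


def compile_pattern(pattern: str):
    """Try to compile `pattern` as a regex; if it is invalid,
    escape it so it matches as a literal substring instead."""
    try:
        return re.compile(pattern)
    except re.error:
        return re.compile(re.escape(pattern))
```

A valid regex such as `eligib\w+` matches as a pattern, while an invalid one such as `a(b` (unbalanced parenthesis) is treated as the literal text `a(b`.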

Parameters (JSON Schema)
Name | Required | Description
params | Yes | Input schema for govuk_grep_content.

Output Schema

Parameters (JSON Schema)
Name | Required | Description
hits | Yes | Matching sections in document order
pattern | Yes | The pattern that was searched for
base_path | Yes | The content item that was searched
truncated | Yes | True if hit count reached max_hits and more matches may exist
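Illustratively, a result matching that schema might look like the following. All field values are invented for the example, and whether `base_path` carries a leading slash inside the section URI is an assumption:

```python
# Hypothetical govuk_grep_content result; values invented for illustration.
response = {
    "pattern": "eligibility",
    "base_path": "guidance/example-guide",
    "truncated": False,
    "hits": [
        {
            "anchor": "who-is-eligible",
            "heading": "Who is eligible",
            "snippet": "...you meet the eligibility criteria if...",
            "match": "eligibility",
        }
    ],
}

# Each hit's anchor slots into the section resource template
# govuk://content/{base_path}/section/{anchor} from the description.
hit = response["hits"][0]
section_uri = f"govuk://content/{response['base_path']}/section/{hit['anchor']}"
```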
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it explains the return format ('list of {anchor, heading, snippet, match} hits'), how the LLM should use results ('can decide which full sections to read via govuk://content/{base_path}/section/{anchor}'), and fallback behavior ('if it doesn't compile, falls back to literal substring'). Annotations cover read-only, open-world, and idempotent aspects, but the description enriches this with practical usage details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with three focused paragraphs: purpose, return format/usage, and pattern behavior. Every sentence adds value without redundancy, and key information is front-loaded. It's appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of annotations (readOnlyHint, openWorldHint, idempotentHint), 100% schema coverage, and an output schema (implied by 'Has output schema: true'), the description provides excellent contextual completeness. It explains the tool's role in a workflow, return format usage, and behavioral nuances, making it fully adequate for agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all parameters thoroughly. The description mentions 'pattern is regex' and fallback behavior, which slightly reinforces schema information but doesn't add significant new semantic meaning. The baseline of 3 is appropriate when the schema does most of the work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Find body sections', 'search within a GOV.UK content item') and resources ('content body', 'content item'). It explicitly distinguishes from sibling tools by contrasting with 'navigating by section number (which uses the index resource)' and mentions 'govuk_search to discover base_paths' for context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('Use this when answering content-based questions...') and when not to ('rather than navigating by section number'). It also mentions an alternative tool ('govuk_search to discover base_paths') for related functionality, giving clear context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

govuk_list_organisations: List GOV.UK Organisations (Grade: A)
Read-only, Idempotent

List all UK government organisations registered on GOV.UK.

Returns a paginated list of organisations including their slug, acronym, type, and status. Use this to browse the full government structure or discover slugs for use with govuk_get_organisation or govuk_search filters.

Parameters (JSON Schema)
Name | Required | Description
params | Yes | OrganisationsListInput with 1-based page and per_page (1–50).

Output Schema

Parameters (JSON Schema)
Name | Required | Description
page | Yes | 1-based page number requested.
total | No | Total number of organisations across all pages, if reported by GOV.UK.
has_more | Yes | True if more organisations exist beyond this page. Re-call with page=page+1 to fetch the next page.
per_page | Yes | Max organisations requested per page.
returned | Yes | Number of organisations returned in this response.
total_pages | No | Total number of pages available, if reported by GOV.UK.
organisations | No | Organisations on this page, in the order returned by GOV.UK.
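The `has_more`/`page=page+1` contract above can be followed with a simple loop. `call_tool` is a hypothetical stand-in for however your MCP client invokes server tools:

```python
def fetch_all_organisations(call_tool, per_page: int = 50):
    """Page through govuk_list_organisations until has_more is False.

    `call_tool` is a placeholder for an MCP client's tool-invocation
    function; it must return the tool's output as a dict."""
    organisations, page = [], 1
    while True:
        result = call_tool("govuk_list_organisations",
                           {"page": page, "per_page": per_page})
        organisations.extend(result.get("organisations") or [])
        if not result["has_more"]:
            return organisations
        page += 1  # schema: re-call with page=page+1 to fetch the next page
```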
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable context beyond annotations by specifying that it returns a 'paginated list' and detailing the included fields (slug, acronym, type, status), which helps the agent understand the return format and behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by usage guidance. Both sentences earn their place by providing essential information without redundancy, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (simple list operation), rich annotations (covering safety and behavior), 100% schema coverage, and the presence of an output schema (implied by context signals), the description is complete. It effectively supplements structured data by clarifying the tool's role and usage context without needing to explain return values or parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with parameters 'page' and 'per_page' fully documented in the schema (including defaults, ranges, and descriptions). The description does not add any parameter-specific information beyond what the schema provides, so it meets the baseline of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('List all UK government organisations registered on GOV.UK') and resource ('organisations'), distinguishing it from siblings like govuk_search or govuk_grep_content by focusing on browsing the full government structure rather than searching/filtering content.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit guidance is provided: 'Use this to browse the full government structure or discover slugs for use with govuk_get_organisation or govuk_search filters.' This clearly indicates when to use this tool (for browsing/discovery) versus alternatives like govuk_search (for filtering) or govuk_get_organisation (for detailed info using slugs).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

govuk_lookup_postcode: Look Up UK Postcode (Grade: A)
Read-only, Idempotent

Look up a UK postcode to retrieve its local authority, region, constituency, and other administrative geography.

Useful for determining which council area, parliamentary constituency, or NHS region a postcode falls within. Commonly used to direct users to the correct local service on GOV.UK (e.g. council tax, planning, waste).

Uses the postcodes.io public API (no key required).
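postcodes.io exposes a simple per-postcode GET endpoint. A minimal sketch of building the request URL; the server's own client code is not shown here, and URL-encoding details are an assumption:

```python
from urllib.parse import quote

POSTCODES_IO = "https://api.postcodes.io/postcodes/"


def postcode_lookup_url(postcode: str) -> str:
    """Build the postcodes.io lookup URL for a UK postcode (no API key needed)."""
    return POSTCODES_IO + quote(postcode.strip())
```

A GET request to `postcode_lookup_url("SW1A 2AA")` returns JSON including the administrative-geography fields mirrored in the output schema below.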

Parameters (JSON Schema)
Name | Required | Description
params | Yes | PostcodeInput with a UK postcode (e.g. 'NG1 1AA', 'SW1A 2AA').

Output Schema

Parameters (JSON Schema)
Name | Required | Description
codes | No | GSS codes for all administrative geographies covering this postcode.
region | No | ONS region, e.g. 'East Midlands'.
country | No | Country, e.g. 'England', 'Scotland', 'Wales', 'Northern Ireland'.
latitude | No | Latitude in decimal degrees (WGS84).
postcode | No | Canonicalised postcode as returned by postcodes.io.
longitude | No | Longitude in decimal degrees (WGS84).
admin_county | No | Administrative county, where applicable (null in unitary areas).
local_authority | No | Local authority / council covering the postcode.
nhs_integrated_care_board | No | NHS Integrated Care Board, where available.
parliamentary_constituency | No | Parliamentary constituency (pre-2025 boundary).
parliamentary_constituency_2025 | No | Parliamentary constituency under the 2025 boundaries.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already provide excellent behavioral coverage (read-only, open-world, idempotent, non-destructive). The description adds valuable context about the underlying API ('Uses the postcodes.io public API') and authentication requirements ('no key required'), which goes beyond what annotations provide. No contradictions with annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured with three focused sentences: purpose statement, usage context, and implementation details. Every sentence adds value with zero redundancy. It's appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simple single-parameter design, comprehensive annotations, and the presence of an output schema, the description provides complete context. It covers purpose, usage scenarios, and implementation details without needing to explain return values or complex behaviors.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already fully documents the single parameter. The description doesn't add any additional parameter semantics beyond what's in the schema. The baseline score of 3 is appropriate when the schema does all the parameter documentation work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('look up') and resource ('UK postcode'), and explicitly lists the information retrieved ('local authority, region, constituency, and other administrative geography'). It distinguishes from sibling tools by focusing on postcode lookup rather than content search or organization listing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('determining which council area, parliamentary constituency, or NHS region a postcode falls within' and 'direct users to the correct local service'). However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_resources (Grade: B)
Read-only

List all available resources and resource templates.

Returns JSON with resource metadata. Static resources have a 'uri' field, while templates have a 'uri_template' field with placeholders like {name}.
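Consuming that JSON can be sketched as below, assuming the metadata has been parsed into a list of dicts. The static URI in the example is invented; the template URI is the section template quoted elsewhere on this page:

```python
def split_resources(entries):
    """Separate static resources ('uri' key) from templates ('uri_template' key)."""
    static = [e for e in entries if "uri" in e]
    templates = [e for e in entries if "uri_template" in e]
    return static, templates


# Hypothetical metadata entries for illustration.
entries = [
    {"uri": "govuk://organisations"},  # invented static URI
    {"uri_template": "govuk://content/{base_path}/section/{anchor}"},
]
static, templates = split_resources(entries)
```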

Parameters (JSON Schema)

No parameters

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | Yes | (no description)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds context beyond annotations by specifying the return format (JSON with metadata, distinguishing static resources with 'uri' from templates with 'uri_template'), which is useful for understanding output structure. Annotations already declare readOnlyHint=true, so the agent knows it's safe. However, it doesn't disclose other behavioral traits like rate limits, auth needs, or pagination, which could be relevant for a listing tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured, with two sentences that efficiently convey purpose and output details. The first sentence states the action and target, and the second explains the return format. There's no wasted text, and it's front-loaded with the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (0 parameters, read-only, with output schema), the description is reasonably complete. It explains what the tool does and the structure of the return value, which complements the output schema. However, it lacks usage guidelines compared to siblings, which slightly reduces completeness for an agent needing to choose between tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the schema fully documents the lack of inputs. The description doesn't need to add parameter details, and it appropriately focuses on output semantics instead. This meets the baseline for no parameters, as it doesn't mislead or omit necessary input information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'List' and the target 'all available resources and resource templates', which is specific and unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'govuk_list_organisations' or 'read_resource', which might have overlapping functionality in listing resources. The purpose is clear but lacks sibling differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when this tool is appropriate compared to siblings like 'govuk_list_organisations' (which might list specific resources) or 'read_resource' (which might retrieve individual resources). There's no context on prerequisites, exclusions, or alternatives, leaving usage unclear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

read_resource (Grade: A)
Read-only

Read a resource by its URI.

For static resources, provide the exact URI. For templated resources, provide the URI with template parameters filled in.

Returns the resource content as a string. Binary content is base64-encoded.
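Filling a `{name}`-style template and handling a base64 body can be sketched as follows. The template shown is the section template quoted earlier on this page; the exact shape of binary-content handling is an assumption based on the encoding described above:

```python
import base64


def fill_template(uri_template: str, **params) -> str:
    """Substitute {name}-style placeholders in a resource URI template."""
    return uri_template.format(**params)


def decode_if_binary(content: str, is_binary: bool):
    """Binary resource content is returned base64-encoded, per the description."""
    return base64.b64decode(content) if is_binary else content


# Hypothetical values for illustration.
uri = fill_template("govuk://content/{base_path}/section/{anchor}",
                    base_path="guidance/example-guide", anchor="overview")
```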

Parameters (JSON Schema)
Name | Required | Description
uri | Yes | The URI of the resource to read

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | Yes | (no description)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already declare readOnlyHint=true, indicating a safe read operation. The description adds valuable behavioral context beyond this: it specifies how to handle static vs. templated URIs and discloses that binary content is base64-encoded in the return. This enhances transparency without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by specific usage notes and return behavior. Every sentence adds value without redundancy, making it efficiently structured and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 parameter, read-only), the description is complete. It covers purpose, usage guidance, behavioral details (e.g., base64 encoding), and the existence of an output schema means return values need not be explained. No gaps are evident for this context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'uri' parameter fully documented. The description adds some semantic context by explaining the difference between static and templated URIs, but it does not provide additional syntax or format details beyond what the schema implies. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Read') and resource ('a resource by its URI'), making the purpose specific and unambiguous. It distinguishes itself from siblings like 'list_resources' (which lists resources) and 'govuk_search' (which searches content) by focusing on retrieving a single resource via its URI.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use this tool: for reading resources by URI, with guidance on static vs. templated URIs. However, it does not explicitly state when not to use it or name alternatives (e.g., 'list_resources' for browsing), which prevents a score of 5.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
