govuk-mcp
Server Details
MCP server for GOV.UK — search, content retrieval, organisation lookup, and postcode resolution.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 6 of 6 tools scored. Lowest: 3.4/5.
Most tools have distinct purposes: content search (govuk_grep_content), organisation listing (govuk_list_organisations), postcode lookup (govuk_lookup_postcode), general search (govuk_search), resource listing (list_resources), and resource reading (read_resource). However, govuk_grep_content and govuk_search both search GOV.UK content, which could cause confusion about when to use each, though their descriptions clarify different use cases (section-level vs. full-content search).
The naming is mixed: four tools use a 'govuk_' prefix with descriptive names (e.g., govuk_grep_content), while two use a simpler verb_noun pattern without the prefix (list_resources, read_resource). This inconsistency breaks a clear pattern, though all names are readable and follow snake_case. The deviation makes the set feel less cohesive.
With 6 tools, the count is well-scoped for a GOV.UK-focused server. It covers key areas like content access, organisation data, geographic lookup, and resource management without being overwhelming. Each tool serves a clear purpose, and the number sits comfortably within typical server sizes (3-15 tools), neither bloated nor too sparse.
The tool set covers core GOV.UK interactions: searching and reading content, accessing organisation data, and handling postcode lookups. Minor gaps exist, such as no explicit tools for updating or deleting resources (if applicable) or more advanced content manipulation, but the provided tools allow agents to perform essential queries and navigation without major dead ends.
Available Tools
6 tools

govuk_grep_content
Search within a GOV.UK content body
Grade: A · Read-only · Idempotent
Find body sections in a GOV.UK content item matching a pattern.
Returns a list of {anchor, heading, snippet, match} hits — small per-section
snippets centred on the match — so the LLM can decide which full sections to
read via govuk://content/{base_path}/section/{anchor}.
Use this when answering content-based questions ("what does this guide say about X?", "find the bit about eligibility") rather than navigating by section number (which uses the index resource).
Pattern is regex; if it doesn't compile, falls back to literal substring.
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | Input schema for govuk_grep_content. |
Output Schema
| Name | Required | Description |
|---|---|---|
| hits | Yes | Matching sections in document order |
| pattern | Yes | The pattern that was searched for |
| base_path | Yes | The content item that was searched |
| truncated | Yes | True if hit count reached max_hits and more matches may exist |
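The regex-with-literal-fallback behaviour described above can be sketched in Python. This is an illustrative client-side analogue of the stated contract, not the server's actual implementation:

```python
import re


def compile_pattern(pattern: str) -> re.Pattern:
    """Compile `pattern` as a regex; if it does not compile,
    fall back to treating it as a literal substring."""
    try:
        return re.compile(pattern)
    except re.error:
        # re.escape neutralises metacharacters so the invalid
        # pattern still matches as plain text.
        return re.compile(re.escape(pattern))


# A valid regex matches normally...
assert compile_pattern(r"eligib\w+").search("eligibility rules")
# ...while an invalid one ("[" is unbalanced) still works as a literal.
assert compile_pattern("[section").search("see [section 2]")
```

The same two-step compile is a common pattern for user-supplied search strings: it keeps regex power available without turning a stray bracket into an error.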
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it explains the return format ('list of {anchor, heading, snippet, match} hits'), how the LLM should use results ('can decide which full sections to read via govuk://content/{base_path}/section/{anchor}'), and fallback behavior ('if it doesn't compile, falls back to literal substring'). Annotations cover read-only, open-world, and idempotent aspects, but the description enriches this with practical usage details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with three focused paragraphs: purpose, return format/usage, and pattern behavior. Every sentence adds value without redundancy, and key information is front-loaded. It's appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of annotations (readOnlyHint, openWorldHint, idempotentHint), 100% schema coverage, and an output schema (implied by 'Has output schema: true'), the description provides excellent contextual completeness. It explains the tool's role in a workflow, return format usage, and behavioral nuances, making it fully adequate for agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already documents all parameters thoroughly. The description mentions 'pattern is regex' and fallback behavior, which slightly reinforces schema information but doesn't add significant new semantic meaning. The baseline of 3 is appropriate when the schema does most of the work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Find body sections', 'search within a GOV.UK content item') and resources ('content body', 'content item'). It explicitly distinguishes from sibling tools by contrasting with 'navigating by section number (which uses the index resource)' and mentions 'govuk_search to discover base_paths' for context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Use this when answering content-based questions...') and when not to ('rather than navigating by section number'). It also mentions an alternative tool ('govuk_search to discover base_paths') for related functionality, giving clear context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
govuk_list_organisations
List GOV.UK Organisations
Grade: A · Read-only · Idempotent
List all UK government organisations registered on GOV.UK.
Returns a paginated list of organisations including their slug, acronym, type, and status. Use this to browse the full government structure or discover slugs for use with govuk_get_organisation or govuk_search filters.
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | OrganisationsListInput with 1-based page and per_page (1–50). |
Output Schema
| Name | Required | Description |
|---|---|---|
| page | Yes | 1-based page number requested. |
| total | No | Total number of organisations across all pages, if reported by GOV.UK. |
| has_more | Yes | True if more organisations exist beyond this page. Re-call with page=page+1 to fetch the next page. |
| per_page | Yes | Max organisations requested per page. |
| returned | Yes | Number of organisations returned in this response. |
| total_pages | No | Total number of pages available, if reported by GOV.UK. |
| organisations | No | Organisations on this page, in the order returned by GOV.UK. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable context beyond annotations by specifying that it returns a 'paginated list' and detailing the included fields (slug, acronym, type, status), which helps the agent understand the return format and behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by usage guidance. Both sentences earn their place by providing essential information without redundancy, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (simple list operation), rich annotations (covering safety and behavior), 100% schema coverage, and the presence of an output schema (implied by context signals), the description is complete. It effectively supplements structured data by clarifying the tool's role and usage context without needing to explain return values or parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with parameters 'page' and 'per_page' fully documented in the schema (including defaults, ranges, and descriptions). The description does not add any parameter-specific information beyond what the schema provides, so it meets the baseline of 3 for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List all UK government organisations registered on GOV.UK') and resource ('organisations'), distinguishing it from siblings like govuk_search or govuk_grep_content by focusing on browsing the full government structure rather than searching/filtering content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance is provided: 'Use this to browse the full government structure or discover slugs for use with govuk_get_organisation or govuk_search filters.' This clearly indicates when to use this tool (for browsing/discovery) versus alternatives like govuk_search (for filtering) or govuk_get_organisation (for detailed info using slugs).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
govuk_lookup_postcode
Look Up UK Postcode
Grade: A · Read-only · Idempotent
Look up a UK postcode to retrieve its local authority, region, constituency, and other administrative geography.
Useful for determining which council area, parliamentary constituency, or NHS region a postcode falls within. Commonly used to direct users to the correct local service on GOV.UK (e.g. council tax, planning, waste).
Uses the postcodes.io public API (no key required).
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | PostcodeInput with a UK postcode (e.g. 'NG1 1AA', 'SW1A 2AA'). |
Output Schema
| Name | Required | Description |
|---|---|---|
| codes | No | GSS codes for all administrative geographies covering this postcode. |
| region | No | ONS region, e.g. 'East Midlands'. |
| country | No | Country, e.g. 'England', 'Scotland', 'Wales', 'Northern Ireland'. |
| latitude | No | Latitude in decimal degrees (WGS84). |
| postcode | No | Canonicalised postcode as returned by postcodes.io. |
| longitude | No | Longitude in decimal degrees (WGS84). |
| admin_county | No | Administrative county, where applicable (null in unitary areas). |
| local_authority | No | Local authority / council covering the postcode. |
| nhs_integrated_care_board | No | NHS Integrated Care Board, where available. |
| parliamentary_constituency | No | Parliamentary constituency (pre-2025 boundary). |
| parliamentary_constituency_2025 | No | Parliamentary constituency under the 2025 boundaries. |
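The tool accepts postcodes like 'NG1 1AA', and postcodes.io canonicalises what it returns; a client may still want to normalise user input before calling. A hypothetical helper, not part of the server, assuming well-formed UK postcodes whose inward code is always the last three characters:

```python
def normalise_postcode(raw: str) -> str:
    """Uppercase a UK postcode and re-insert the single space
    before the three-character inward code."""
    compact = raw.upper().replace(" ", "")
    return compact[:-3] + " " + compact[-3:]


assert normalise_postcode("sw1a2aa") == "SW1A 2AA"
assert normalise_postcode(" ng1 1aa ") == "NG1 1AA"
```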
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already provide excellent behavioral coverage (read-only, open-world, idempotent, non-destructive). The description adds valuable context about the underlying API ('Uses the postcodes.io public API') and authentication requirements ('no key required'), which goes beyond what annotations provide. No contradictions with annotations exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured with three focused sentences: purpose statement, usage context, and implementation details. Every sentence adds value with zero redundancy. It's appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simple single-parameter design, comprehensive annotations, and the presence of an output schema, the description provides complete context. It covers purpose, usage scenarios, and implementation details without needing to explain return values or complex behaviors.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already fully documents the single parameter. The description doesn't add any additional parameter semantics beyond what's in the schema. The baseline score of 3 is appropriate when the schema does all the parameter documentation work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('look up') and resource ('UK postcode'), and explicitly lists the information retrieved ('local authority, region, constituency, and other administrative geography'). It distinguishes from sibling tools by focusing on postcode lookup rather than content search or organization listing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('determining which council area, parliamentary constituency, or NHS region a postcode falls within' and 'direct users to the correct local service'). However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
govuk_search
Search GOV.UK
Grade: A · Read-only · Idempotent
Search GOV.UK's 700k+ content items using the official Search API.
Returns a list of matching content items with title, description, link, format, owning organisation(s), and last updated timestamp.
Use filter_format to narrow to specific content types (e.g. 'transaction' for citizen-facing services, 'guide' for guidance, 'publication' for official documents). Use filter_organisations to restrict to a department.
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | SearchInput with query, count, start, optional format/org filters, and optional sort order. |
Output Schema
| Name | Required | Description |
|---|---|---|
| count | Yes | Max results requested for this page. |
| query | Yes | The free-text query that was searched. |
| start | Yes | Offset used for this page (zero-based). |
| total | Yes | Total matching results across all pages on GOV.UK. |
| results | No | Matching pages. Use the `link` field of any result as the `base_path` input to govuk_get_content for the full item. |
| has_more | Yes | True if more results exist beyond this page. Re-call with start=start+returned to fetch the next page. |
| returned | Yes | Number of results actually returned in this response. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable context about the scale ('700k+ content items'), return format details, and practical filtering examples, enhancing understanding beyond annotations without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with three sentences: first states purpose and scale, second details return format, third provides usage guidance for filters. Every sentence adds value with no wasted words, and key information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations (readOnlyHint, openWorldHint, idempotentHint), comprehensive schema with 100% coverage, and presence of an output schema, the description is complete. It covers purpose, scale, return format, and filtering guidance without needing to explain parameters or return values already documented elsewhere.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with each parameter well-documented in the schema. The description adds minimal extra semantics by mentioning filter_format and filter_organisations with examples, but doesn't provide significant additional meaning beyond what the schema already covers, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches GOV.UK's 700k+ content items using the official Search API, specifying the verb 'search' and resource 'GOV.UK content items'. It distinguishes from siblings like govuk_list_organisations (list only) and govuk_lookup_postcode (specific lookup) by emphasizing comprehensive search capabilities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use filter_format and filter_organisations parameters to narrow results, giving examples like 'transaction' for citizen-facing services. However, it doesn't explicitly state when to use this tool versus alternatives like govuk_grep_content or list_resources, missing explicit sibling differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_resources
Grade: B · Read-only
List all available resources and resource templates.
Returns JSON with resource metadata. Static resources have a 'uri' field, while templates have a 'uri_template' field with placeholders like {name}.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
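Since static resources carry a 'uri' field and templates carry 'uri_template', a client can partition the listing on field presence. A minimal sketch with a hypothetical example payload:

```python
def partition_resources(resources: list[dict]) -> tuple[list, list]:
    """Split a list_resources payload into (static, templates)
    based on which URI field each entry carries."""
    static = [r for r in resources if "uri" in r]
    templates = [r for r in resources if "uri_template" in r]
    return static, templates


listing = [  # hypothetical example payload
    {"uri": "govuk://organisations", "name": "organisations"},
    {"uri_template": "govuk://content/{base_path}", "name": "content"},
]
static, templates = partition_resources(listing)
assert [r["name"] for r in static] == ["organisations"]
assert "{base_path}" in templates[0]["uri_template"]
```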
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds context beyond annotations by specifying the return format (JSON with metadata, distinguishing static resources with 'uri' from templates with 'uri_template'), which is useful for understanding output structure. Annotations already declare readOnlyHint=true, so the agent knows it's safe. However, it doesn't disclose other behavioral traits like rate limits, auth needs, or pagination, which could be relevant for a listing tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and well-structured, with two sentences that efficiently convey purpose and output details. The first sentence states the action and target, and the second explains the return format. There's no wasted text, and it's front-loaded with the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, read-only, with output schema), the description is reasonably complete. It explains what the tool does and the structure of the return value, which complements the output schema. However, it lacks usage guidelines compared to siblings, which slightly reduces completeness for an agent needing to choose between tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the schema fully documents the lack of inputs. The description doesn't need to add parameter details, and it appropriately focuses on output semantics instead. This meets the baseline for no parameters, as it doesn't mislead or omit necessary input information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and the target 'all available resources and resource templates', which is specific and unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'govuk_list_organisations' or 'read_resource', which might have overlapping functionality in listing resources. The purpose is clear but lacks sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when this tool is appropriate compared to siblings like 'govuk_list_organisations' (which might list specific resources) or 'read_resource' (which might retrieve individual resources). There's no context on prerequisites, exclusions, or alternatives, leaving usage unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
read_resource
Grade: A · Read-only
Read a resource by its URI.
For static resources, provide the exact URI. For templated resources, provide the URI with template parameters filled in.
Returns the resource content as a string. Binary content is base64-encoded.
| Name | Required | Description | Default |
|---|---|---|---|
| uri | Yes | The URI of the resource to read |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
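Because binary content comes back base64-encoded, a client must decode it before use. A sketch under one stated assumption: the tool returns only a string, so whether a given resource is binary must be known out of band, represented here by a hypothetical `binary` flag:

```python
import base64


def decode_resource(content: str, binary: bool) -> "bytes | str":
    """Return text content as-is; base64-decode binary content.
    The `binary` flag is the caller's knowledge, not part of
    the tool's response."""
    return base64.b64decode(content) if binary else content


encoded = base64.b64encode(b"\x89PNG...").decode("ascii")
assert decode_resource(encoded, binary=True) == b"\x89PNG..."
assert decode_resource("plain text", binary=False) == "plain text"
```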
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already declare readOnlyHint=true, indicating a safe read operation. The description adds valuable behavioral context beyond this: it specifies how to handle static vs. templated URIs and discloses that binary content is base64-encoded in the return. This enhances transparency without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by specific usage notes and return behavior. Every sentence adds value without redundancy, making it efficiently structured and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 parameter, read-only), the description is complete. It covers purpose, usage guidance, behavioral details (e.g., base64 encoding), and the existence of an output schema means return values need not be explained. No gaps are evident for this context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'uri' parameter fully documented. The description adds some semantic context by explaining the difference between static and templated URIs, but it does not provide additional syntax or format details beyond what the schema implies. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Read') and resource ('a resource by its URI'), making the purpose specific and unambiguous. It distinguishes itself from siblings like 'list_resources' (which lists resources) and 'govuk_search' (which searches content) by focusing on retrieving a single resource via its URI.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool: for reading resources by URI, with guidance on static vs. templated URIs. However, it does not explicitly state when not to use it or name alternatives (e.g., 'list_resources' for browsing), which prevents a score of 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.