Glama
Ownership verified

Server Details

MCP server for GOV.UK — search, content retrieval, organisation lookup, and postcode resolution.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through the Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average score: 4.4/5 across all 7 tools.

Server Coherence: A
Disambiguation: 5/5

Each tool targets a distinct resource or action: content metadata (get_content), section body (get_section), content search (grep_content), full search (search), organisation profile (get_organisation), organisation listing (list_organisations), and postcode lookup (lookup_postcode). The descriptions explicitly cross-reference each other, reducing ambiguity.

Naming Consistency: 5/5

All tools follow a consistent 'govuk_verb_noun' pattern (e.g., govuk_get_content, govuk_list_organisations). Names are lowercase with underscores, clearly indicating action and domain, making them predictable and easy to remember.

Tool Count: 5/5

With 7 tools, the scope is well-balanced: essential content access (retrieval, search), organisation data, and a geolocation feature. No tool feels redundant or missing, and the count is within the ideal 3-15 range.

Completeness: 5/5

The tool set covers the primary needs for a GOV.UK assistant: reading page content and sections, searching content and within pages, exploring organisations, and looking up postcodes. There are no obvious gaps for read-only access, and each tool enables the next step in a typical workflow.

Available Tools

7 tools
govuk_get_content: Get GOV.UK Page (A)
Read-only, Idempotent

Get metadata and navigable section index for a GOV.UK page.

Returns the page title, document type, publication dates, and a list of sections with their anchor IDs and headings. Use govuk_get_section to read the body of a specific section, or govuk_grep_content to search within the page body.

Parameters (JSON Schema)
- base_path (required): GOV.UK base_path, e.g. '/universal-credit' or 'universal-credit'
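For orientation, a request to this tool is an ordinary MCP `tools/call`. A minimal sketch of the JSON-RPC body an MCP client would send — the `build_tool_call` helper is hypothetical, but the method name and params shape follow the MCP specification:

```python
import json

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Serialise an MCP tools/call request as a JSON-RPC 2.0 body."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# Fetch the section index for the Universal Credit page; both
# '/universal-credit' and 'universal-credit' are accepted as base_path.
body = build_tool_call("govuk_get_content", {"base_path": "/universal-credit"})
```

The same helper applies to every tool on this server; only the tool name and arguments change.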

Output Schema

No output parameters.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readonly and idempotent. Description adds return structure and implies limited scope (only index, not full body), providing useful context beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences: first states purpose, second lists return fields and alternatives. No wasted words, well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With output schema present and strong annotations, this description fully clarifies the tool's role, return data, and relationship to siblings. No missing elements.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter description. Description does not add new info about base_path, but context about output is indirectly helpful. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Get metadata and navigable section index' and lists return fields (title, dates, sections). Distinguishes from siblings govuk_get_section and govuk_grep_content.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly instructs when to use this tool versus alternatives: 'Use govuk_get_section to read the body... or govuk_grep_content to search...'

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

govuk_get_organisation: Get GOV.UK Organisation (A)
Read-only, Idempotent

Get the profile of a UK government organisation by its slug.

Returns name, acronym, type, status, web URL, and parent/child organisations. Use govuk_list_organisations to browse all organisations and discover slugs.

Parameters (JSON Schema)
- slug (required): Organisation slug, e.g. 'hm-revenue-customs'. Find slugs via govuk_list_organisations.

Output Schema
- slug (optional): Organisation slug, e.g. 'hm-revenue-customs'. Usable with govuk_search filters.
- type (optional): Organisation type, e.g. 'ministerial_department', 'executive_agency', 'non_ministerial_department', 'public_corporation'.
- state (optional): GOV.UK status, e.g. 'live', 'closed', 'transitioning'.
- title (optional): Full organisation title.
- acronym (optional): Organisation acronym, if set.
- web_url (optional): Absolute https://www.gov.uk URL for the organisation page.
- contact_details (optional): Contact details block from GOV.UK (phone, email, address) when available.
- child_organisations (optional): Titles of child organisations / agencies under this body.
- parent_organisations (optional): Titles of parent organisations this body reports into.

Behavior: 4/5

Annotations already declare readOnlyHint=true, idempotentHint=true, destructiveHint=false, so the safety profile is covered. The description adds what the return includes (name, acronym, type, status, web URL, parent/child), which is useful context beyond annotations, though the output schema likely covers it. No extra behavioral traits are disclosed, but no contradiction exists.

Conciseness: 5/5

The description consists of two short, precise sentences with no redundant information. It is front-loaded with the core action and resource, making it easy to parse.

Completeness: 5/5

Given the low complexity (single parameter, output schema present), the description provides all necessary information: what the tool does, how to identify the organisation (slug), and what to expect in the return. It is complete without being verbose.

Parameters: 3/5

Schema description coverage is 100%, so the baseline is 3. The description reiterates the slug parameter with an example and guidance to find slugs via the list tool, adding minor context but nothing significantly beyond the schema.

Purpose: 5/5

The description clearly states the verb 'Get' and the resource 'profile of a UK government organisation' with the specific identifier 'slug'. It also mentions returning name, acronym, type, status, web URL, and parent/child organisations, and points to a sibling tool for discovering slugs, differentiating it effectively.

Usage Guidelines: 5/5

The description provides explicit guidance: use this tool to get an organisation by slug, and use govuk_list_organisations to browse all and discover slugs. This clearly indicates when to use this tool versus the listing alternative.

govuk_get_section: Get GOV.UK Page Section (A)
Read-only, Idempotent

Get the HTML content of one named section of a GOV.UK page.

Use govuk_get_content first to get the list of available section anchors, then call this with the anchor of the section you want to read.

Parameters (JSON Schema)
- anchor (required): Section anchor ID from govuk_get_content sections list
- base_path (required): GOV.UK base_path, e.g. '/universal-credit'
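The get-index-then-read-section workflow can be sketched as below. Here `call_tool` stands in for whatever call function your MCP client provides, and the `sections`, `anchor`, and `heading` field names are assumptions about the govuk_get_content result shape, not a documented contract:

```python
def read_section(call_tool, base_path: str, wanted_heading: str):
    """Fetch a page's section index via govuk_get_content, then read the
    first section whose heading contains wanted_heading (case-insensitive)."""
    index = call_tool("govuk_get_content", {"base_path": base_path})
    for section in index["sections"]:
        if wanted_heading.lower() in section["heading"].lower():
            return call_tool("govuk_get_section", {
                "base_path": base_path,
                "anchor": section["anchor"],
            })
    return None  # no section heading matched
```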

Output Schema

No output parameters.

Behavior: 4/5

Annotations already declare the tool as read-only, idempotent, and non-destructive. The description adds that it returns HTML content, which is sufficient. No contradictions with annotations.

Conciseness: 5/5

Two sentences, no wasted words. The first sentence states the verb and resource, the second provides workflow guidance. Front-loaded and efficient.

Completeness: 4/5

The tool has an output schema, so return values are documented elsewhere. The description covers purpose, usage, and parameter semantics. Error handling is not mentioned, but the tool's simplicity makes this acceptable.

Parameters: 4/5

Schema description coverage is 100%. The description reinforces the relationship between the anchor parameter and the output of govuk_get_content, adding value beyond the base schema descriptions.

Purpose: 5/5

The description clearly states the tool retrieves the HTML content of a named section of a GOV.UK page. It differentiates from siblings by specifying the prerequisite step of using govuk_get_content to obtain section anchors.

Usage Guidelines: 4/5

Explicit instructions are given to first use govuk_get_content to get section anchors, then call this tool with the anchor. This provides clear contextual guidance, though alternatives and exclusions are not mentioned.

govuk_grep_content: Search within a GOV.UK content body (A)
Read-only, Idempotent

Find body sections in a GOV.UK content item matching a pattern.

Returns a list of {anchor, heading, snippet, match} hits — small per-section snippets centred on the match — so the LLM can decide which full sections to read via govuk_get_section.

Use this when answering content-based questions ("what does this guide say about X?", "find the bit about eligibility") rather than navigating by section number.

Pattern is regex; if it doesn't compile, falls back to literal substring.
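A minimal sketch of that fallback behaviour — an illustration of the documented contract, not the server's actual implementation:

```python
import re

def compile_pattern(pattern: str, case_insensitive: bool = True) -> re.Pattern:
    """Compile pattern as a regex; if it is not valid regex,
    fall back to matching it as a literal substring."""
    flags = re.IGNORECASE if case_insensitive else 0
    try:
        return re.compile(pattern, flags)
    except re.error:
        # Not valid regex: escape it and match literally.
        return re.compile(re.escape(pattern), flags)

# 'eligible.*income' is valid regex and matches across words...
assert compile_pattern("eligible.*income").search("eligible on a low income")
# ...while an unbalanced '(' would not compile, so it matches literally.
assert compile_pattern("tax (").search("income tax (rates)")
```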

Parameters (JSON Schema)
- pattern (required): Regex or literal substring to search for within the page body, e.g. 'payment' or 'eligible.*income'
- max_hits (optional): Maximum number of matching sections to return (1–100)
- base_path (required): GOV.UK base_path, e.g. '/guidance/register-for-vat' or '/universal-credit'
- case_insensitive (optional): If true (default), match case-insensitively

Output Schema
- hits (required): Matching sections in document order
- pattern (required): The pattern that was searched for
- base_path (required): The content item that was searched
- truncated (required): True if hit count reached max_hits and more matches may exist
Behavior: 4/5

The description adds valuable behavioral context beyond annotations: it explains the return format ('list of {anchor, heading, snippet, match} hits'), how the LLM should use results (deciding which full sections to read via govuk_get_section), and fallback behavior ('if it doesn't compile, falls back to literal substring'). Annotations cover the read-only, open-world, and idempotent aspects, but the description enriches this with practical usage details.

Conciseness: 5/5

The description is efficiently structured with three focused paragraphs: purpose, return format/usage, and pattern behavior. Every sentence adds value without redundancy, and key information is front-loaded. It's appropriately sized for the tool's complexity.

Completeness: 5/5

Given the presence of annotations (readOnlyHint, openWorldHint, idempotentHint), 100% schema coverage, and an output schema, the description provides excellent contextual completeness. It explains the tool's role in a workflow, return format usage, and behavioral nuances, making it fully adequate for agent understanding.

Parameters: 3/5

With 100% schema description coverage, the input schema already documents all parameters thoroughly. The description notes that the pattern is regex and explains the fallback behavior, which slightly reinforces schema information but doesn't add significant new semantic meaning. The baseline of 3 is appropriate when the schema does most of the work.

Purpose: 5/5

The description clearly states the tool's purpose with a specific verb ('Find body sections') and resources ('content body', 'content item'). It explicitly distinguishes itself from sibling tools by contrasting content-based search with navigating by section number.

Usage Guidelines: 5/5

The description provides explicit guidance on when to use this tool ('Use this when answering content-based questions...') and when not to ('rather than navigating by section number'), giving clear context for selection.

govuk_list_organisations: List GOV.UK Organisations (A)
Read-only, Idempotent

List all UK government organisations registered on GOV.UK.

Returns a paginated list of organisations including their slug, acronym, type, and status. Use this to browse the full government structure or discover slugs for use with govuk_get_organisation or govuk_search filters.

Parameters (JSON Schema)
- page (optional): Page number (1-based)
- per_page (optional): Results per page (1–50)

Output Schema
- page (required): 1-based page number requested.
- total (optional): Total number of organisations across all pages, if reported by GOV.UK.
- has_more (required): True if more organisations exist beyond this page. Re-call with page=page+1 to fetch the next page.
- per_page (required): Max organisations requested per page.
- returned (required): Number of organisations returned in this response.
- total_pages (optional): Total number of pages available, if reported by GOV.UK.
- organisations (optional): Organisations on this page, in the order returned by GOV.UK.
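The has_more flag makes exhaustive listing a simple loop. A sketch, with `call_tool` standing in for your MCP client's call function:

```python
def list_all_organisations(call_tool, per_page: int = 50):
    """Page through govuk_list_organisations until has_more is False."""
    organisations, page = [], 1
    while True:
        result = call_tool("govuk_list_organisations",
                           {"page": page, "per_page": per_page})
        organisations.extend(result.get("organisations", []))
        if not result["has_more"]:
            return organisations
        page += 1  # has_more=True: fetch the next page
```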
Behavior: 4/5

Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable context beyond annotations by specifying that it returns a 'paginated list' and detailing the included fields (slug, acronym, type, status), which helps the agent understand the return format and behavior.

Conciseness: 5/5

The description is front-loaded with the core purpose in the first sentence, followed by usage guidance. Both sentences earn their place by providing essential information without redundancy, making it efficient and well-structured.

Completeness: 5/5

Given the tool's complexity (a simple list operation), rich annotations covering safety and behavior, 100% schema coverage, and the presence of an output schema, the description is complete. It effectively supplements structured data by clarifying the tool's role and usage context without needing to explain return values or parameters.

Parameters: 3/5

Schema description coverage is 100%, with parameters 'page' and 'per_page' fully documented in the schema (including defaults, ranges, and descriptions). The description does not add any parameter-specific information beyond what the schema provides, so it meets the baseline of 3 for high schema coverage.

Purpose: 5/5

The description clearly states the specific action ('List all UK government organisations registered on GOV.UK') and resource ('organisations'), distinguishing it from siblings like govuk_search or govuk_grep_content by focusing on browsing the full government structure rather than searching or filtering content.

Usage Guidelines: 5/5

Explicit guidance is provided: 'Use this to browse the full government structure or discover slugs for use with govuk_get_organisation or govuk_search filters.' This clearly indicates when to use this tool (for browsing/discovery) versus alternatives like govuk_search (for filtering) or govuk_get_organisation (for detailed info using slugs).

govuk_lookup_postcode: Look Up UK Postcode (A)
Read-only, Idempotent

Look up a UK postcode to retrieve its local authority, region, constituency, and other administrative geography.

Useful for determining which council area, parliamentary constituency, or NHS region a postcode falls within. Commonly used to direct users to the correct local service on GOV.UK (e.g. council tax, planning, waste).

Uses the postcodes.io public API (no key required).

Parameters (JSON Schema)
- postcode (required): UK postcode, e.g. 'SW1A 2AA' or 'NG1 1AA'. Spaces optional.
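Since the tool wraps postcodes.io, a lookup reduces to a GET against that API's postcode endpoint. A sketch of the URL construction; the exact normalisation shown is an assumption based on the 'spaces optional' note:

```python
from urllib.parse import quote

POSTCODES_IO = "https://api.postcodes.io/postcodes/"

def lookup_url(postcode: str) -> str:
    """Build the postcodes.io lookup URL; no API key is required."""
    normalised = postcode.strip().upper()  # 'sw1a 2aa' -> 'SW1A 2AA'
    return POSTCODES_IO + quote(normalised)
```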

Output Schema
- codes (optional): GSS codes for all administrative geographies covering this postcode.
- region (optional): ONS region, e.g. 'East Midlands'.
- country (optional): Country, e.g. 'England', 'Scotland', 'Wales', 'Northern Ireland'.
- latitude (optional): Latitude in decimal degrees (WGS84).
- postcode (optional): Canonicalised postcode as returned by postcodes.io.
- longitude (optional): Longitude in decimal degrees (WGS84).
- admin_county (optional): Administrative county, where applicable (null in unitary areas).
- local_authority (optional): Local authority / council covering the postcode.
- nhs_integrated_care_board (optional): NHS Integrated Care Board, where available.
- parliamentary_constituency (optional): Parliamentary constituency (pre-2025 boundary).
- parliamentary_constituency_2025 (optional): Parliamentary constituency under the 2025 boundaries.
Behavior: 4/5

The annotations already provide excellent behavioral coverage (read-only, open-world, idempotent, non-destructive). The description adds valuable context about the underlying API ('Uses the postcodes.io public API') and authentication requirements ('no key required'), which goes beyond what annotations provide. No contradictions with annotations exist.

Conciseness: 5/5

The description is well structured with three focused sentences: purpose statement, usage context, and implementation details. Every sentence adds value with zero redundancy. It's appropriately sized for the tool's complexity.

Completeness: 5/5

Given the tool's simple single-parameter design, comprehensive annotations, and the presence of an output schema, the description provides complete context. It covers purpose, usage scenarios, and implementation details without needing to explain return values or complex behaviors.

Parameters: 3/5

With 100% schema description coverage, the input schema already fully documents the single parameter. The description doesn't add any additional parameter semantics beyond what's in the schema. The baseline score of 3 is appropriate when the schema does all the parameter documentation work.

Purpose: 5/5

The description clearly states the specific action ('look up') and resource ('UK postcode'), and explicitly lists the information retrieved ('local authority, region, constituency, and other administrative geography'). It distinguishes from sibling tools by focusing on postcode lookup rather than content search or organisation listing.

Usage Guidelines: 4/5

The description provides clear context for when to use this tool ('determining which council area, parliamentary constituency, or NHS region a postcode falls within' and 'direct users to the correct local service'). However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools.
