GOV.UK
Server Details
MCP server for GOV.UK — search, content retrieval, organisation lookup, and postcode resolution.
- Status: Healthy
- Last Tested: —
- Transport: Streamable HTTP
- URL: —
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.4/5 across 7 of 7 tools scored.
Each tool targets a distinct resource or action: content metadata (get_content), section body (get_section), content search (grep_content), full search (search), organisation profile (get_organisation), organisation listing (list_organisations), and postcode lookup (lookup_postcode). The descriptions explicitly cross-reference each other, reducing ambiguity.
All tools follow a consistent 'govuk_verb_noun' pattern (e.g., govuk_get_content, govuk_list_organisations). Names are lowercase with underscores, clearly indicating action and domain, making them predictable and easy to remember.
With 7 tools, the scope is well-balanced: essential content access (retrieval, search), organisation data, and a geolocation feature. No tool feels redundant or missing, and the count is within the ideal 3-15 range.
The tool set covers the primary needs for a GOV.UK assistant: reading page content and sections, searching content and within pages, exploring organisations, and looking up postcodes. There are no obvious gaps for read-only access, and each tool enables the next step in a typical workflow.
Available Tools
7 tools

govuk_get_content: Get GOV.UK Page (read-only, idempotent)
Get metadata and navigable section index for a GOV.UK page.
Returns the page title, document type, publication dates, and a list of sections with their anchor IDs and headings. Use govuk_get_section to read the body of a specific section, or govuk_grep_content to search within the page body.
| Name | Required | Description | Default |
|---|---|---|---|
| base_path | Yes | GOV.UK base_path, e.g. '/universal-credit' or 'universal-credit' | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
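The schema notes that the leading slash on base_path is optional. A minimal sketch of how a client might normalise that input before calling the tool — the mapping onto GOV.UK's public Content API endpoint is an assumption about what the server wraps, not documented behaviour:

```python
def normalise_base_path(base_path: str) -> str:
    """Accept 'universal-credit' or '/universal-credit' alike."""
    path = base_path.strip()
    if not path.startswith("/"):
        path = "/" + path
    return path


def content_api_url(base_path: str) -> str:
    # GOV.UK exposes a public Content API at /api/content/{base_path};
    # the server presumably fetches something like this under the hood.
    return "https://www.gov.uk/api/content" + normalise_base_path(base_path)
```

Either spelling of the input yields the same URL, which matches the "leading slash optional" wording in the parameter description.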
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readonly and idempotent. Description adds return structure and implies limited scope (only index, not full body), providing useful context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences: first states purpose, second lists return fields and alternatives. No wasted words, well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With output schema present and strong annotations, this description fully clarifies the tool's role, return data, and relationship to siblings. No missing elements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear parameter description. Description does not add new info about base_path, but context about output is indirectly helpful. Baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Get metadata and navigable section index' and lists return fields (title, dates, sections). Distinguishes from siblings govuk_get_section and govuk_grep_content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly instructs when to use this tool versus alternatives: 'Use govuk_get_section to read the body... or govuk_grep_content to search...'
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
govuk_get_organisation: Get GOV.UK Organisation (read-only, idempotent)
Get the profile of a UK government organisation by its slug.
Returns name, acronym, type, status, web URL, and parent/child organisations. Use govuk_list_organisations to browse all organisations and discover slugs.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Organisation slug, e.g. 'hm-revenue-customs'. Find slugs via govuk_list_organisations. |
Output Schema
| Name | Required | Description |
|---|---|---|
| slug | No | Organisation slug, e.g. 'hm-revenue-customs'. Usable with govuk_search filters. |
| type | No | Organisation type, e.g. 'ministerial_department', 'executive_agency', 'non_ministerial_department', 'public_corporation'. |
| state | No | GOV.UK status, e.g. 'live', 'closed', 'transitioning'. |
| title | No | Full organisation title. |
| acronym | No | Organisation acronym, if set. |
| web_url | No | Absolute https://www.gov.uk URL for the organisation page. |
| contact_details | No | Contact details block from GOV.UK (phone, email, address) when available. |
| child_organisations | No | Titles of child organisations / agencies under this body. |
| parent_organisations | No | Titles of parent organisations this body reports into. |
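Slugs throughout this server follow a lowercase, hyphenated shape. A small hedged sketch of validating that shape before calling govuk_get_organisation — the pattern is inferred from the schema's examples ('hm-revenue-customs'), not a documented contract:

```python
import re

# Lowercase alphanumeric segments joined by single hyphens,
# e.g. 'hm-revenue-customs' (inferred from the examples).
SLUG_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")


def is_valid_slug(slug: str) -> bool:
    """Cheap client-side check before spending a tool call on a bad slug."""
    return bool(SLUG_RE.fullmatch(slug))
```

When the check fails, the natural next step per the description is govuk_list_organisations to discover the correct slug.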
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, destructiveHint=false, so the safety profile is covered. The description adds what the return includes (name, acronym, type, status, web URL, parent/child), which is useful context beyond annotations, though output schema likely covers it. No extra behavioral traits are disclosed, but no contradiction exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two short, precise sentences with no redundant information. It is front-loaded with the core action and resource, making it easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (single parameter, output schema present), the description provides all necessary information: what the tool does, how to identify the organisation (slug), and what to expect in the return. It is complete without being verbose.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description reiterates the slug parameter with an example and guidance to find slugs via the list tool, adding minor context but not significantly beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'profile of a UK government organisation' with the specific identifier 'slug'. It also mentions returning name, acronym, type, status, web URL, and parent/child organisations, and points to a sibling tool for discovering slugs, differentiating it effectively.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance: use this tool to get an organisation by slug, and use govuk_list_organisations to browse all and discover slugs. This clearly indicates when to use this tool versus the listing alternative.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
govuk_get_section: Get GOV.UK Page Section (read-only, idempotent)
Get the HTML content of one named section of a GOV.UK page.
Use govuk_get_content first to get the list of available section anchors, then call this with the anchor of the section you want to read.
| Name | Required | Description | Default |
|---|---|---|---|
| anchor | Yes | Section anchor ID from govuk_get_content sections list | |
| base_path | Yes | GOV.UK base_path, e.g. '/universal-credit' | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
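The description prescribes a two-step workflow: govuk_get_content for the section index, then this tool with a chosen anchor. An illustrative sketch of that flow — `call_tool` is a stand-in for whatever MCP client you use, and the canned responses are invented for the example:

```python
def call_tool(name: str, args: dict) -> dict:
    # Stand-in for a real MCP client call; returns invented sample data.
    canned = {
        "govuk_get_content": {
            "title": "Universal Credit",
            "sections": [
                {"anchor": "eligibility", "heading": "Eligibility"},
                {"anchor": "how-to-claim", "heading": "How to claim"},
            ],
        },
        "govuk_get_section": {"html": "<h2>Eligibility</h2><p>...</p>"},
    }
    return canned[name]


# Step 1: fetch the section index for the page.
index = call_tool("govuk_get_content", {"base_path": "/universal-credit"})

# Step 2: pick the relevant anchor, then fetch just that section's body.
anchor = next(s["anchor"] for s in index["sections"]
              if "eligib" in s["heading"].lower())
section = call_tool("govuk_get_section",
                    {"base_path": "/universal-credit", "anchor": anchor})
```

Fetching one section at a time keeps token usage down compared with pulling a whole page body.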
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare the tool as read-only, idempotent, and non-destructive. The description adds that it returns HTML content, which is sufficient. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words. The first sentence states the verb and resource, the second provides workflow guidance. Front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has an output schema, so return values are documented elsewhere. The description covers purpose, usage, and parameter semantics. Error handling is not mentioned, but the tool's simplicity makes this acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%. The description reinforces the relationship between the anchor parameter and the output of govuk_get_content, adding value beyond the base schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves the HTML content of a named section of a GOV.UK page. It differentiates from siblings by specifying the prerequisite step of using govuk_get_content to obtain section anchors.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit instructions are given to first use govuk_get_content to get section anchors, then call this tool with the anchor. This provides clear contextual guidance, though alternatives and exclusions are not mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
govuk_grep_content: Search within a GOV.UK content body (read-only, idempotent)
Find body sections in a GOV.UK content item matching a pattern.
Returns a list of {anchor, heading, snippet, match} hits — small per-section snippets centred on the match — so the LLM can decide which full sections to read via govuk_get_section.
Use this when answering content-based questions ("what does this guide say about X?", "find the bit about eligibility") rather than navigating by section number.
Pattern is regex; if it doesn't compile, falls back to literal substring.
| Name | Required | Description | Default |
|---|---|---|---|
| pattern | Yes | Regex or literal substring to search for within the page body, e.g. 'payment' or 'eligible.*income' | |
| max_hits | No | Maximum number of matching sections to return (1–100) | |
| base_path | Yes | GOV.UK base_path, e.g. '/guidance/register-for-vat' or '/universal-credit' | |
| case_insensitive | No | If true (default), match case-insensitively | |
Output Schema
| Name | Required | Description |
|---|---|---|
| hits | Yes | Matching sections in document order |
| pattern | Yes | The pattern that was searched for |
| base_path | Yes | The content item that was searched |
| truncated | Yes | True if hit count reached max_hits and more matches may exist |
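The regex-with-literal-fallback behaviour and snippet shape described above can be sketched as follows. This mirrors the documented behaviour (try the pattern as a regex, fall back to an escaped literal if it does not compile, case-insensitive by default, snippets centred on the match); the server's actual implementation may differ:

```python
import re


def grep_sections(sections, pattern, max_hits=20, case_insensitive=True,
                  context=40):
    """Return {anchor, heading, snippet, match} hits in document order."""
    flags = re.IGNORECASE if case_insensitive else 0
    try:
        rx = re.compile(pattern, flags)
    except re.error:
        # Documented fallback: treat a non-compiling pattern as a literal.
        rx = re.compile(re.escape(pattern), flags)
    hits = []
    for sec in sections:
        m = rx.search(sec["body"])
        if m:
            # Snippet centred on the match, clamped to the body bounds.
            start = max(m.start() - context, 0)
            end = min(m.end() + context, len(sec["body"]))
            hits.append({
                "anchor": sec["anchor"],
                "heading": sec["heading"],
                "snippet": sec["body"][start:end],
                "match": m.group(0),
            })
            if len(hits) >= max_hits:
                break
    return {"hits": hits, "truncated": len(hits) >= max_hits}
```

A pattern like `'eligible.*income'` is used as a regex; an unbalanced pattern like `'(net'` fails to compile and matches as the literal substring instead.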
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it explains the return format ('list of {anchor, heading, snippet, match} hits'), how the LLM should use results ('can decide which full sections to read via govuk://content/{base_path}/section/{anchor}'), and fallback behavior ('if it doesn't compile, falls back to literal substring'). Annotations cover read-only, open-world, and idempotent aspects, but the description enriches this with practical usage details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with three focused paragraphs: purpose, return format/usage, and pattern behavior. Every sentence adds value without redundancy, and key information is front-loaded. It's appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of annotations (readOnlyHint, openWorldHint, idempotentHint), 100% schema coverage, and an output schema (implied by 'Has output schema: true'), the description provides excellent contextual completeness. It explains the tool's role in a workflow, return format usage, and behavioral nuances, making it fully adequate for agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already documents all parameters thoroughly. The description mentions 'pattern is regex' and fallback behavior, which slightly reinforces schema information but doesn't add significant new semantic meaning. The baseline of 3 is appropriate when the schema does most of the work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Find body sections', 'search within a GOV.UK content item') and resources ('content body', 'content item'). It explicitly distinguishes from sibling tools by contrasting with 'navigating by section number (which uses the index resource)' and mentions 'govuk_search to discover base_paths' for context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Use this when answering content-based questions...') and when not to ('rather than navigating by section number'). It also mentions an alternative tool ('govuk_search to discover base_paths') for related functionality, giving clear context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
govuk_list_organisations: List GOV.UK Organisations (read-only, idempotent)
List all UK government organisations registered on GOV.UK.
Returns a paginated list of organisations including their slug, acronym, type, and status. Use this to browse the full government structure or discover slugs for use with govuk_get_organisation or govuk_search filters.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (1-based) | |
| per_page | No | Results per page (1–50) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| page | Yes | 1-based page number requested. |
| total | No | Total number of organisations across all pages, if reported by GOV.UK. |
| has_more | Yes | True if more organisations exist beyond this page. Re-call with page=page+1 to fetch the next page. |
| per_page | Yes | Max organisations requested per page. |
| returned | Yes | Number of organisations returned in this response. |
| total_pages | No | Total number of pages available, if reported by GOV.UK. |
| organisations | No | Organisations on this page, in the order returned by GOV.UK. |
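The output schema spells out the pagination contract: re-call with `page = page + 1` while `has_more` is true. A sketch of draining the listing that way — `fetch_page` stands in for a real govuk_list_organisations call and returns canned pages for illustration:

```python
def fetch_page(page, per_page=2):
    # Stand-in for govuk_list_organisations; three invented slugs.
    data = ["hm-treasury", "home-office", "cabinet-office"]
    start = (page - 1) * per_page  # page numbers are 1-based
    chunk = data[start:start + per_page]
    return {"page": page, "organisations": chunk,
            "has_more": start + per_page < len(data)}


def all_organisations(per_page=2):
    """Follow has_more across pages until the listing is exhausted."""
    page, out = 1, []
    while True:
        resp = fetch_page(page, per_page)
        out.extend(resp["organisations"])
        if not resp["has_more"]:
            break
        page += 1
    return out
```

Relying on `has_more` rather than computing pages from `total` is the safer loop, since `total` and `total_pages` are optional in the schema.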
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable context beyond annotations by specifying that it returns a 'paginated list' and detailing the included fields (slug, acronym, type, status), which helps the agent understand the return format and behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by usage guidance. Both sentences earn their place by providing essential information without redundancy, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (simple list operation), rich annotations (covering safety and behavior), 100% schema coverage, and the presence of an output schema (implied by context signals), the description is complete. It effectively supplements structured data by clarifying the tool's role and usage context without needing to explain return values or parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with parameters 'page' and 'per_page' fully documented in the schema (including defaults, ranges, and descriptions). The description does not add any parameter-specific information beyond what the schema provides, so it meets the baseline of 3 for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List all UK government organisations registered on GOV.UK') and resource ('organisations'), distinguishing it from siblings like govuk_search or govuk_grep_content by focusing on browsing the full government structure rather than searching/filtering content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance is provided: 'Use this to browse the full government structure or discover slugs for use with govuk_get_organisation or govuk_search filters.' This clearly indicates when to use this tool (for browsing/discovery) versus alternatives like govuk_search (for filtering) or govuk_get_organisation (for detailed info using slugs).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
govuk_lookup_postcode: Look Up UK Postcode (read-only, idempotent)
Look up a UK postcode to retrieve its local authority, region, constituency, and other administrative geography.
Useful for determining which council area, parliamentary constituency, or NHS region a postcode falls within. Commonly used to direct users to the correct local service on GOV.UK (e.g. council tax, planning, waste).
Uses the postcodes.io public API (no key required).
| Name | Required | Description | Default |
|---|---|---|---|
| postcode | Yes | UK postcode, e.g. 'SW1A 2AA' or 'NG1 1AA'. Spaces optional. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| codes | No | GSS codes for all administrative geographies covering this postcode. |
| region | No | ONS region, e.g. 'East Midlands'. |
| country | No | Country, e.g. 'England', 'Scotland', 'Wales', 'Northern Ireland'. |
| latitude | No | Latitude in decimal degrees (WGS84). |
| postcode | No | Canonicalised postcode as returned by postcodes.io. |
| longitude | No | Longitude in decimal degrees (WGS84). |
| admin_county | No | Administrative county, where applicable (null in unitary areas). |
| local_authority | No | Local authority / council covering the postcode. |
| nhs_integrated_care_board | No | NHS Integrated Care Board, where available. |
| parliamentary_constituency | No | Parliamentary constituency (pre-2025 boundary). |
| parliamentary_constituency_2025 | No | Parliamentary constituency under the 2025 boundaries. |
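Since the tool wraps the postcodes.io public API, a client-side sketch of building the equivalent lookup URL is straightforward. The light normalisation here is an assumption about what the server does; postcodes.io itself accepts input with or without spaces, matching the schema's "Spaces optional":

```python
from urllib.parse import quote


def lookup_url(postcode: str) -> str:
    # Tidy the input (case and stray whitespace), then URL-encode it for
    # the public postcodes.io lookup endpoint (no API key required).
    cleaned = postcode.strip().upper()
    return "https://api.postcodes.io/postcodes/" + quote(cleaned)
```

The `postcode` field in the output schema then carries the canonicalised form that postcodes.io returns, which may differ in spacing from the input.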
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already provide excellent behavioral coverage (read-only, open-world, idempotent, non-destructive). The description adds valuable context about the underlying API ('Uses the postcodes.io public API') and authentication requirements ('no key required'), which goes beyond what annotations provide. No contradictions with annotations exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured with three focused sentences: purpose statement, usage context, and implementation details. Every sentence adds value with zero redundancy. It's appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simple single-parameter design, comprehensive annotations, and the presence of an output schema, the description provides complete context. It covers purpose, usage scenarios, and implementation details without needing to explain return values or complex behaviors.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already fully documents the single parameter. The description doesn't add any additional parameter semantics beyond what's in the schema. The baseline score of 3 is appropriate when the schema does all the parameter documentation work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('look up') and resource ('UK postcode'), and explicitly lists the information retrieved ('local authority, region, constituency, and other administrative geography'). It distinguishes from sibling tools by focusing on postcode lookup rather than content search or organization listing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('determining which council area, parliamentary constituency, or NHS region a postcode falls within' and 'direct users to the correct local service'). However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
govuk_search: Search GOV.UK (read-only, idempotent)
Search GOV.UK's 700k+ content items using the official Search API.
Returns a list of matching content items with title, description, link, format, owning organisation(s), and last updated timestamp.
Use filter_format to narrow to specific content types (e.g. 'transaction' for citizen-facing services, 'guide' for guidance, 'publication' for official documents). Use filter_organisations to restrict to a department.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of results to return (1–50) | |
| order | No | Sort order. Use '-public_timestamp' for newest-first (default relevance). | |
| query | Yes | Free-text search query, e.g. 'universal credit eligibility' or 'MOT check' | |
| start | No | Offset for pagination, e.g. 10 for the second page of 10 results | |
| filter_format | No | Filter by document format. Common values: 'guide', 'answer', 'transaction', 'publication', 'news_article', 'detailed_guide', 'hmrc_manual_section', 'travel_advice', 'organisation'. Leave blank to search all types. | |
| filter_organisations | No | Filter by organisation slug, e.g. 'hm-revenue-customs', 'department-for-work-pensions', 'driver-and-vehicle-standards-agency'. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| count | Yes | Max results requested for this page. |
| query | Yes | The free-text query that was searched. |
| start | Yes | Offset used for this page (zero-based). |
| total | Yes | Total matching results across all pages on GOV.UK. |
| results | No | Matching pages. Use the `link` field of any result as the `base_path` input to govuk_get_content for the full item. |
| has_more | Yes | True if more results exist beyond this page. Re-call with start=start+returned to fetch the next page. |
| returned | Yes | Number of results actually returned in this response. |
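A sketch of how the documented parameters map onto a query string for GOV.UK's public Search API. Parameter names follow the tool schema; the exact endpoint shape is an assumption based on the public `search.json` interface rather than anything this listing documents:

```python
from urllib.parse import urlencode


def search_url(query, count=10, start=0, order=None,
               filter_format=None, filter_organisations=None):
    """Build a Search API query string from the tool's parameters."""
    params = {"q": query, "count": count, "start": start}
    if order:  # e.g. '-public_timestamp' for newest-first
        params["order"] = order
    if filter_format:  # e.g. 'transaction', 'guide', 'publication'
        params["filter_format"] = filter_format
    if filter_organisations:  # organisation slug, e.g. 'hm-revenue-customs'
        params["filter_organisations"] = filter_organisations
    return "https://www.gov.uk/api/search.json?" + urlencode(params)
```

For pagination, the output schema says to re-call with `start = start + returned` while `has_more` is true, and to feed each result's `link` into govuk_get_content as its `base_path`.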
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable context about the scale ('700k+ content items'), return format details, and practical filtering examples, enhancing understanding beyond annotations without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with three sentences: first states purpose and scale, second details return format, third provides usage guidance for filters. Every sentence adds value with no wasted words, and key information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations (readOnlyHint, openWorldHint, idempotentHint), comprehensive schema with 100% coverage, and presence of an output schema, the description is complete. It covers purpose, scale, return format, and filtering guidance without needing to explain parameters or return values already documented elsewhere.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with each parameter well-documented in the schema. The description adds minimal extra semantics by mentioning filter_format and filter_organisations with examples, but doesn't provide significant additional meaning beyond what the schema already covers, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches GOV.UK's 700k+ content items using the official Search API, specifying the verb 'search' and resource 'GOV.UK content items'. It distinguishes from siblings like govuk_list_organisations (list only) and govuk_lookup_postcode (specific lookup) by emphasizing comprehensive search capabilities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use filter_format and filter_organisations parameters to narrow results, giving examples like 'transaction' for citizen-facing services. However, it doesn't explicitly state when to use this tool versus alternatives like govuk_grep_content or list_resources, missing explicit sibling differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!