What Do They Know?
Server Details
MCP server for UK Freedom of Information research. Connects AI assistants to WhatDoTheyKnow — the UK's largest FOI request platform — to search requests, read responses, look up public authorities, and draft new requests.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
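Since the transport is Streamable HTTP, any MCP client can connect directly. Below is a minimal sketch using the Python MCP SDK; the endpoint URL is a placeholder (the real URL is not reproduced above), and module paths may differ between SDK versions.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.com/mcp"  # placeholder -- substitute the listing's real URL

async def main() -> None:
    # Open the Streamable HTTP transport, then an MCP session on top of it.
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```

The per-tool sketches below assume a `session` opened this way.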
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 8 of 8 tools scored. Lowest: 2.4/5.
Each tool has a clearly distinct purpose: building URLs, creating records, searching, getting feed items, updating state, and prompt management. No two tools overlap in intent.
All tool names follow a consistent verb_noun pattern in snake_case (e.g., build_request_url, search_authorities, update_request_state). No mixing of conventions.
8 tools cover the core actions for FOI request management: search, build, create, get feed, update state, plus prompt utilities. Slightly above average but well-scoped for the domain.
Missing direct retrieval of a single request or authority by slug (relying on external wdtk:// protocol), and no delete/list operations. Core lifecycle has notable gaps.
Available Tools
8 tools

build_request_url (C) · Read-only · Idempotent
Build a prefilled WhatDoTheyKnow request URL.
| Name | Required | Description | Default |
|---|---|---|---|
| body | No | | |
| tags | No | | |
| title | No | | |
| authority_slug | Yes | | |
| default_letter | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| url | Yes | |
| authority_slug | Yes | |
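As an illustration of the schema above, a call might look like the sketch below, assuming a `session` opened as in the connection example; all argument values are invented.

```python
from mcp import ClientSession

async def demo_build_url(session: ClientSession) -> None:
    # authority_slug would normally come from search_authorities;
    # the title and body values here are placeholders.
    result = await session.call_tool(
        "build_request_url",
        {
            "authority_slug": "liverpool_city_council",
            "title": "Road maintenance budget",
            "body": "Dear Liverpool City Council, ...",
        },
    )
    # Output schema: a prefilled WhatDoTheyKnow URL plus the slug echoed back.
    print(result.content)
```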
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a read-only, idempotent call, so the description does not need to restate those. However, it adds no behavioral context beyond 'build a URL'—it does not mention that no request is sent, that parameters prefill the URL, or what the output format is. The description fails to leverage the opportunity to clarify behavior beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, which is concise but under-specified for a tool with 5 parameters and no schema descriptions. While it front-loads the core action, it could benefit from a second sentence elaborating on the tool's behavior without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has 5 parameters, no parameter descriptions, and an output schema implied by context, yet the description is a single sentence. It does not explain what happens when parameters are provided, any constraints on authority_slug, or the nature of the return value. The output schema may compensate, but the description alone is insufficient for an agent to understand the tool's full usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, meaning no parameter descriptions exist in the schema. The description provides no explanation of how parameters like body, tags, title, authority_slug, and default_letter influence the built URL. This leaves the agent without essential semantic information for correct parameter usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool builds a prefilled WhatDoTheyKnow request URL, clearly specifying the action and resource. It is distinguishable from sibling tools like create_request_record and get_prompt, but could be more explicit about the tool's exact output (e.g., 'returns a URL string').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives or when it should not be used. The description lacks context such as prerequisites or scenarios where building a URL is appropriate, leaving the agent to infer usage from the tool name and schema.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_request_record (C) · Destructive
Create a request through the experimental write API.
Requires WDTK_API_KEY in the server environment.
| Name | Required | Description | Default |
|---|---|---|---|
| body | Yes | | |
| title | Yes | | |
| external_url | Yes | | |
| external_user_name | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
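Because this is a destructive write that needs WDTK_API_KEY on the server side, the full required-argument shape is worth spelling out. The sketch below uses invented placeholder values throughout.

```python
from mcp import ClientSession

async def demo_create_record(session: ClientSession) -> None:
    # All four parameters are required; every value here is a placeholder.
    result = await session.call_tool(
        "create_request_record",
        {
            "title": "Road maintenance budget",
            "body": "Dear Liverpool City Council, ...",
            "external_url": "https://example.org/foi/123",
            "external_user_name": "Jane Example",
        },
    )
    # Destructive + experimental API: check for errors rather than assuming success.
    if result.isError:
        raise RuntimeError("create_request_record failed")
```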
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds that the tool is experimental and requires an API key, which provides some behavioral context beyond the annotations. Annotations already indicate destructiveness, but the description does not elaborate on side effects, success/failure responses, or data integrity implications. This is adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise: two sentences, no redundancy. The first states the purpose, the second adds a critical requirement, and the primary action is front-loaded. It is efficient overall.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (4 required params, experimental API, destructive hint, no param descriptions), the description is too brief. It does not mention the output schema exists, nor does it provide examples or error handling. The description fails to fully prepare an agent for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% parameter coverage with 0% description coverage. The description does not explain the purpose or expected format of any parameter, leaving the agent to infer from names like 'external_user_name' and 'external_url'. This is insufficient for a 4-parameter tool with no schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool creates a request via an experimental write API. The verb 'Create' and resource 'request' are specific. It distinguishes from siblings like 'build_request_url' and 'search_authorities', which serve different purposes. However, it could be more precise about the type of request (e.g., FOI request) to enhance differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions a prerequisite (WDTK_API_KEY) but provides no guidance on when to use this tool versus alternatives. It does not specify when not to use it, nor does it reference sibling tools for comparison. The experimental nature is noted, but usage context beyond that is missing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_prompt (A)
Get a prompt by name with optional arguments.
Returns the rendered prompt as JSON with a messages array. Arguments should be provided as a dict mapping argument names to values.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | The name of the prompt to get | |
| arguments | No | Optional arguments for the prompt | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
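A call sketch follows; the prompt name and argument are hypothetical, since real names come from list_prompts.

```python
from mcp import ClientSession

async def demo_get_prompt(session: ClientSession) -> None:
    # "draft_request" is a hypothetical name; discover real ones via list_prompts.
    result = await session.call_tool(
        "get_prompt",
        {"name": "draft_request", "arguments": {"authority": "Liverpool City Council"}},
    )
    # Per the description, the result is the rendered prompt as JSON with a messages array.
    print(result.content)
```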
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description is the sole source of behavioral info. It mentions returning JSON with a messages array, but lacks details on error handling, auth requirements, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at three sentences and front-loaded with purpose. It is slightly verbose in explaining arguments, but efficient overall.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple retrieval tool with output schema, the description is fairly complete. It covers the return format and argument structure, though lacks error state documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline score is 3. The description adds value by noting that arguments should be a dict, enhancing understanding beyond the schema's generic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a prompt by name, with optional arguments. It distinguishes from sibling 'list_prompts' by specifying retrieval of a single prompt by name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for fetching a single prompt, but provides no explicit guidance on when to use this tool versus alternatives like list_prompts, nor does it mention when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_request_feed_items (A) · Read-only · Idempotent
Return parsed Atom feed entries for a specific FOI request as structured objects. Use this instead of reading the raw wdtk://requests/{slug}/feed resource when you want structured AtomEntry objects rather than raw XML. Each entry's link field contains the request URL; use the slug from that URL with request_json or authority_json for full detail.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| request_slug | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
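A sketch of fetching feed items, following the navigation hint in the description; the slug value and the exact result field layout are assumptions.

```python
from mcp import ClientSession

async def demo_feed_items(session: ClientSession) -> None:
    # Hypothetical slug; in practice it comes from a search result's link field.
    result = await session.call_tool(
        "get_request_feed_items",
        {"request_slug": "road_maintenance_budget", "limit": 10},
    )
    # Each AtomEntry's link field holds the request URL; its last path
    # segment is the slug for wdtk://requests/{slug} or the *_json tools.
    print(result.content)
```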
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnly and idempotent. The description adds context about the structured nature of return objects and the link field usage, which goes beyond annotation cues.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences, front-loaded with the primary purpose, and no extraneous information. Each sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose and usage well, and the output schema handles the return structure, but it fails to document the parameters, which is a gap given zero schema descriptions. Not fully complete for a 2-parameter tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema coverage, the description does not explain the parameters (request_slug, limit). It only mentions the tool uses a request slug but does not describe its format or meaning, leaving agents without necessary parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns parsed Atom feed entries for a specific FOI request as structured objects. It distinguishes itself from sibling tools like create_request_record or search_authorities by focusing on feed item retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly advises using this tool over reading raw feed resource when structured objects are desired, and provides guidance on using the returned link field for further detail with other tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_prompts (A)
List all available prompts.
Returns JSON with prompt metadata including name, description, and optional arguments.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
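With no input parameters, the call is a one-liner; a sketch:

```python
from mcp import ClientSession

async def demo_list_prompts(session: ClientSession) -> None:
    # Returns JSON metadata (name, description, optional arguments) for each prompt.
    result = await session.call_tool("list_prompts", {})
    print(result.content)
```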
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries full burden. It discloses that it returns JSON with metadata, but does not mention any potential side effects, rate limits, or pagination behavior. Since it's a simple read-only list, this is adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences clearly state the tool's purpose and return format. No unnecessary words; front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and the presence of an output schema, the description is mostly complete. It covers the main functionality and return type, though it could benefit from mentioning potential error cases or empty list scenarios.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description does not need to add parameter semantics. With 100% schema coverage and no parameters, a baseline score of 4 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists all available prompts, using a specific verb and resource. It distinguishes itself from sibling tool 'get_prompt' by implying it lists all rather than a single prompt.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While no explicit when-to-use or when-not-to-use statements are provided, the purpose is straightforward: it lists all prompts. Sibling 'get_prompt' suggests an alternative for single prompts, but no exclusions are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_authorities (A) · Read-only · Idempotent
Search WhatDoTheyKnow public authorities by name.
Returns up to limit authorities whose name or short_name contains query (case-insensitive). Use the slug field with authority_json or build_request_url as the next step.

Example: search_authorities("Liverpool") → slug "liverpool_city_council"
Then: authority_json with that slug, or build_request_url with it.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
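The description's own Liverpool example, written out as the two chained calls it implies. This is a sketch: the step of parsing the slug out of the search result is elided, since the result layout is not documented here.

```python
from mcp import ClientSession

async def demo_search_then_build(session: ClientSession) -> None:
    # Case-insensitive substring match on name or short_name, capped at `limit`.
    found = await session.call_tool(
        "search_authorities", {"query": "Liverpool", "limit": 5}
    )
    print(found.content)
    # Next step per the description: feed the returned slug into build_request_url.
    url = await session.call_tool(
        "build_request_url",
        {"authority_slug": "liverpool_city_council", "title": "Example title"},
    )
    print(url.content)
```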
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Consistent with annotations (readOnlyHint, idempotentHint). Adds details on search behavior (case-insensitive match on name or short_name) and the result cap via `limit`. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences: purpose, behavior with params, and usage example. No wasted words, front-loaded with key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given existence of output schema and sibling tools, the description covers everything needed: what it does, how to use params, and next steps. Complete for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by explaining both `query` (name/short_name contains, case-insensitive) and `limit` (maximum number returned). Includes an example.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the verb 'Search' and resource 'public authorities by name'. Includes scope (name or short_name contains query) and distinguishes from sibling tools like build_request_url and create_request_record.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Specifies that it returns up to `limit` authorities matching `query`, case-insensitive. Advises using the `slug` field with authority_json or build_request_url. Could explicitly mention when not to use, but overall clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_request_events (A) · Read-only · Idempotent
Search WhatDoTheyKnow's feed-based event index and return structured results. Call this to find FOI requests matching a query expression. Returns up to limit AtomEntry objects. Use the link field of each result as the next navigation step — extract the request slug and call the wdtk://requests/{slug} resource or get_request_feed_items for full detail.

Example expressions:
- status:successful
- body:"Liverpool City Council"
- (variety:sent OR variety:response) status:successful
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| search_expression | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
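A sketch using one of the documented example expressions, plus a helper for pulling the slug out of an entry's link field. The request-URL shape the helper assumes is an inference, not something stated in this listing.

```python
from urllib.parse import urlparse

from mcp import ClientSession

def slug_from_link(link: str) -> str:
    # Assumes links look like https://www.whatdotheyknow.com/request/<slug>.
    return urlparse(link).path.rstrip("/").rsplit("/", 1)[-1]

async def demo_search_events(session: ClientSession) -> None:
    result = await session.call_tool(
        "search_request_events",
        {
            "search_expression": 'status:successful body:"Liverpool City Council"',
            "limit": 5,
        },
    )
    # Each AtomEntry's link yields a slug for get_request_feed_items.
    print(result.content)
```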
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, indicating safe, non-destructive behavior. The description adds behavioral specifics: returning up to `limit` AtomEntry objects, the query syntax with examples, and using the `link` field for navigation. It does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence. It then provides usage guidance and examples in a compact format without unnecessary words. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (so return fields are documented there) and the input schema has only two parameters, the description covers query syntax, limit, and navigation steps. It provides enough context for an AI agent to use the tool effectively, including a complete set of example expressions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero description coverage for both parameters. The description compensates by providing example expressions for search_expression (e.g., 'status:successful') and explaining that limit controls the number of returned entries ('Returns up to `limit` AtomEntry objects'). This adds crucial semantic meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches 'WhatDoTheyKnow's feed-based event index' and returns structured results. It uses a specific verb (search) and resource (feed-based event index), and the context of finding FOI requests differentiates it from sibling tools like get_request_feed_items.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Call this to find FOI requests matching a query expression' and gives example expressions. It also provides an alternative: 'or get_request_feed_items for full detail' after extracting the slug, guiding when to use each tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_request_state (C) · Destructive
Update the user-assessed state of a request through the experimental write API.
Requires WDTK_API_KEY in the server environment.
| Name | Required | Description | Default |
|---|---|---|---|
| state | Yes | | |
| request_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
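A call sketch; both values are placeholders, and the set of valid state strings is not documented in this listing.

```python
from mcp import ClientSession

async def demo_update_state(session: ClientSession) -> None:
    # Placeholder id and state -- valid states are undocumented here;
    # "successful" is a guess based on the search examples above.
    result = await session.call_tool(
        "update_request_state",
        {"request_id": "12345", "state": "successful"},
    )
    # Destructive write via the experimental API: verify the outcome.
    if result.isError:
        raise RuntimeError("update_request_state failed")
```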
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare destructiveHint=true, so the description's 'Update' confirms mutation. The description adds valuable context: 'experimental write API' warns of potential instability, and 'Requires WDTK_API_KEY' specifies a prerequisite. However, it does not discuss failure behavior, rate limits, or side effects beyond what annotations imply.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise (two sentences) and front-loads the core purpose. However, it could benefit from a brief note on state values or a use example. Still, every sentence earns its place with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has two required parameters and an output schema (which may document return values), but the description omits critical context: what are valid state transitions? Is the update idempotent? Are there any side effects beyond state change? The mention of 'experimental' partially compensates, but overall completeness is low.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has two parameters (request_id, state) with zero description coverage. The description does not clarify the meaning of 'state' (e.g., allowed values, format) or 'request_id' (e.g., where to find it). Since coverage is 0%, the description must compensate but fails entirely, leaving agents guessing.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update') and the resource ('the user-assessed state of a request'). It distinguishes the tool from siblings like 'create_request_record' (which creates) and 'get_request_feed_items' (which reads). However, the term 'user-assessed state' remains somewhat vague, and the description does not specify what states are possible, slightly reducing clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance on when to use this tool. It mentions 'experimental write API' implying caution, but does not explicitly state when to prefer this tool over alternatives (e.g., create_request_record for new requests, search_* for reading). No when-not-to-use or context triggering usage is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.