Glama

Server Details

MCP server for UK Freedom of Information research. Connects AI assistants to WhatDoTheyKnow — the UK's largest FOI request platform — to search requests, read responses, look up public authorities, and draft new requests.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (B)

Average 3.7/5 across 8 of 8 tools scored. Lowest: 2.4/5.

Server Coherence (A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose: building URLs, creating records, searching, getting feed items, updating state, and prompt management. No two tools overlap in intent.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern in snake_case (e.g., build_request_url, search_authorities, update_request_state). No mixing of conventions.

Tool Count: 4/5

8 tools cover the core actions for FOI request management: search, build, create, get feed, update state, plus prompt utilities. Slightly above average but well-scoped for the domain.

Completeness: 3/5

Missing direct retrieval of a single request or authority by slug (relying on external wdtk:// protocol), and no delete/list operations. Core lifecycle has notable gaps.

Available Tools

8 tools
build_request_url (C)
Read-only, Idempotent

Build a prefilled WhatDoTheyKnow request URL.

Parameters (JSON Schema):
body (optional)
tags (optional)
title (optional)
authority_slug (required)
default_letter (optional)

Output Schema (JSON Schema):
url (required)
authority_slug (required)
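Since neither the description nor the schema explains how the parameters shape the result, here is a minimal sketch of what a tool like this plausibly does. The `/new/{slug}` path and these query-parameter names follow Alaveteli's prefill convention and are assumptions, not confirmed by this page; note that no request is sent, only a URL is built.

```python
from urllib.parse import quote, urlencode

def build_request_url(authority_slug, title=None, body=None,
                      tags=None, default_letter=None):
    # Assumed URL shape: https://www.whatdotheyknow.com/new/{slug}?title=...
    # Only parameters that were actually supplied become query parameters.
    params = {k: v for k, v in {
        "title": title, "body": body,
        "tags": tags, "default_letter": default_letter,
    }.items() if v is not None}
    url = f"https://www.whatdotheyknow.com/new/{quote(authority_slug)}"
    if params:
        url += "?" + urlencode(params)
    return url
```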
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is a read-only, idempotent call, so the description does not need to restate those. However, it adds no behavioral context beyond 'build a URL'—it does not mention that no request is sent, that parameters prefill the URL, or what the output format is. The description fails to leverage the opportunity to clarify behavior beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, which is concise but under-specified for a tool with 5 parameters and no schema descriptions. While it front-loads the core action, it could benefit from a second sentence elaborating on the tool's behavior without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 5 parameters, no parameter descriptions, and an output schema (implied by context), the description is incomplete. It does not explain what happens when parameters are provided, any constraints on authority_slug, or the nature of the return value. The output schema may compensate, but the description alone is insufficient for an agent to understand the tool's full usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, meaning no parameter descriptions exist in the schema. The description provides no explanation of how parameters like body, tags, title, authority_slug, and default_letter influence the built URL. This leaves the agent without essential semantic information for correct parameter usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool builds a prefilled WhatDoTheyKnow request URL, clearly specifying the action and resource. It is distinguishable from sibling tools like create_request_record and get_prompt, but could be more explicit about the tool's exact output (e.g., 'returns a URL string').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives or when it should not be used. The description lacks context such as prerequisites or scenarios where building a URL is appropriate, leaving the agent to infer usage from the tool name and schema.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_request_record (C)
Destructive

Create a request through the experimental write API.

Requires WDTK_API_KEY in the server environment.

Parameters (JSON Schema):
body (required)
title (required)
external_url (required)
external_user_name (required)

Output Schema (JSON Schema):
No output parameters
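The one behavioral fact the description does give is the WDTK_API_KEY prerequisite. A guard like the following (illustrative; how the key is actually transmitted to the write API is not documented on this page) fails fast with a clear error when the key is missing:

```python
import os

def require_wdtk_api_key():
    # The experimental write tools need WDTK_API_KEY in the server
    # environment; this only checks for its presence.
    key = os.environ.get("WDTK_API_KEY")
    if not key:
        raise RuntimeError(
            "WDTK_API_KEY is not set; experimental write tools are unavailable")
    return key
```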

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds that the tool is experimental and requires an API key, which provides some behavioral context beyond the annotations. Annotations already indicate destructiveness, but the description does not elaborate on side effects, success/failure responses, or data integrity implications. This is adequate but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise (two sentences) with no redundancy. The first sentence states the purpose, the second adds a critical requirement, and the primary action is front-loaded. Overall it is efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (4 required params, experimental API, destructive hint, no param descriptions), the description is too brief. It does not mention the output schema exists, nor does it provide examples or error handling. The description fails to fully prepare an agent for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% parameter coverage with 0% description coverage. The description does not explain the purpose or expected format of any parameter, leaving the agent to infer from names like 'external_user_name' and 'external_url'. This is insufficient for a 4-parameter tool with no schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool creates a request via an experimental write API. The verb 'Create' and resource 'request' are specific. It distinguishes from siblings like 'build_request_url' and 'search_authorities', which serve different purposes. However, it could be more precise about the type of request (e.g., FOI request) to enhance differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions a prerequisite (WDTK_API_KEY) but provides no guidance on when to use this tool versus alternatives. It does not specify when not to use it, nor does it reference sibling tools for comparison. The experimental nature is noted, but usage context beyond that is missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_prompt (A)

Get a prompt by name with optional arguments.

Returns the rendered prompt as JSON with a messages array. Arguments should be provided as a dict mapping argument names to values.

Parameters (JSON Schema):
name (required): The name of the prompt to get
arguments (optional): Optional arguments for the prompt

Output Schema (JSON Schema):
result (required)
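As a rough sketch of the shape the description promises (a rendered prompt returned as JSON with a messages array), the following is illustrative only; the storage dict, `str.format` templating, and the single user-role message are assumptions, not the server's actual implementation:

```python
def get_prompt(prompts, name, arguments=None):
    # `arguments` is a dict mapping argument names to values,
    # as the tool description specifies.
    template = prompts[name]
    content = template.format(**(arguments or {}))
    return {"messages": [{"role": "user", "content": content}]}
```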
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description is the sole source of behavioral info. It mentions returning JSON with a messages array, but lacks details on error handling, auth requirements, or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded with purpose. It is slightly verbose in explaining arguments, but overall efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple retrieval tool with output schema, the description is fairly complete. It covers the return format and argument structure, though lacks error state documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, baseline 3. The description adds value by noting arguments should be a dict, enhancing understanding beyond the schema's generic description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a prompt by name, with optional arguments. It distinguishes from sibling 'list_prompts' by specifying retrieval of a single prompt by name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for fetching a single prompt, but provides no explicit guidance on when to use this tool versus alternatives like list_prompts, nor does it mention when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_request_feed_items (A)
Read-only, Idempotent

Return parsed Atom feed entries for a specific FOI request as structured objects.

Use this instead of reading the raw wdtk://requests/{slug}/feed resource when you want structured AtomEntry objects rather than raw XML. Each entry's link field contains the request URL; use the slug from that URL with request_json or authority_json for full detail.

Parameters (JSON Schema):
limit (optional)
request_slug (required)

Output Schema (JSON Schema):
result (required)
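To illustrate what "parsed Atom feed entries as structured objects" might look like, here is a sketch using the standard library. The title/link/updated fields mirror what an AtomEntry plausibly holds, but the tool's exact field set is not documented on this page:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def parse_atom_entries(xml_text, limit=None):
    # Walk every <entry> in the feed and pull out a few common fields;
    # the link's href is what the description says to navigate with.
    entries = []
    for entry in ET.fromstring(xml_text).iter(ATOM + "entry"):
        link = entry.find(ATOM + "link")
        entries.append({
            "title": entry.findtext(ATOM + "title", ""),
            "link": link.get("href") if link is not None else None,
            "updated": entry.findtext(ATOM + "updated", ""),
        })
    return entries[:limit] if limit is not None else entries
```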
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnly and idempotent. The description adds context about the structured nature of return objects and the link field usage, which goes beyond annotation cues.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences, front-loaded with the primary purpose, and no extraneous information. Each sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers purpose and usage well, and the output schema handles the return structure, but it fails to document parameters, which is a gap given zero schema descriptions. Not fully complete for a 2-parameter tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema coverage, the description does not explain the parameters (request_slug, limit). It only mentions the tool uses a request slug but does not describe its format or meaning, leaving agents without necessary parameter semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns parsed Atom feed entries for a specific FOI request as structured objects. It distinguishes itself from sibling tools like create_request_record or search_authorities by focusing on feed item retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly advises using this tool over reading raw feed resource when structured objects are desired, and provides guidance on using the returned link field for further detail with other tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_prompts (A)

List all available prompts.

Returns JSON with prompt metadata including name, description, and optional arguments.

Parameters (JSON Schema):
No parameters

Output Schema (JSON Schema):
result (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description carries full burden. It discloses that it returns JSON with metadata, but does not mention any potential side effects, rate limits, or pagination behavior. Since it's a simple read-only list, this is adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences clearly state the tool's purpose and return format. No unnecessary words; front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters and the presence of an output schema, the description is mostly complete. It covers the main functionality and return type, though it could benefit from mentioning potential error cases or empty list scenarios.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, so the description does not need to add parameter semantics. With 100% schema coverage and no parameters, a baseline of 4 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists all available prompts, using a specific verb and resource. It distinguishes itself from sibling tool 'get_prompt' by implying it lists all rather than a single prompt.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While no explicit when-to-use or when-not-to-use statements are provided, the purpose is straightforward: it lists all prompts. Sibling 'get_prompt' suggests an alternative for single prompts, but no exclusions are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_authorities (A)
Read-only, Idempotent

Search WhatDoTheyKnow public authorities by name.

Returns up to limit authorities whose name or short_name contains query (case-insensitive). Use the slug field with authority_json or build_request_url as the next step.

Example: search_authorities("Liverpool") → slug "liverpool_city_council"
Then: authority_json with that slug, or build_request_url with it.

Parameters (JSON Schema):
limit (optional)
query (required)

Output Schema (JSON Schema):
result (required)
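The matching rule the description states (case-insensitive substring match on name or short_name, capped at `limit`) can be sketched directly; the list of authority dicts here is illustrative input, not the tool's actual API:

```python
def search_authorities(authorities, query, limit=10):
    # Case-insensitive containment check against both name fields,
    # returning at most `limit` matches.
    q = query.lower()
    matches = [a for a in authorities
               if q in a.get("name", "").lower()
               or q in a.get("short_name", "").lower()]
    return matches[:limit]
```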
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Consistent with annotations (readOnlyHint, idempotentHint). Adds details on search behavior (case-insensitive, name/short_name containing query) and pagination via `limit`. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences: purpose, behavior with params, and usage example. No wasted words, front-loaded with key information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given existence of output schema and sibling tools, the description covers everything needed: what it does, how to use params, and next steps. Complete for a search tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by explaining both `query` (name/short_name contains, case-insensitive) and `limit` (maximum number returned). Includes an example.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the verb 'Search' and resource 'public authorities by name'. Includes scope (name or short_name contains query) and distinguishes from sibling tools like build_request_url and create_request_record.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Specifies that it returns up to `limit` authorities matching `query`, case-insensitive. Advises using the `slug` field with authority_json or build_request_url. Could explicitly mention when not to use, but overall clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_request_events (A)
Read-only, Idempotent

Search WhatDoTheyKnow's feed-based event index and return structured results.

Call this to find FOI requests matching a query expression. Returns up to limit AtomEntry objects. Use the link field of each result as the next navigation step — extract the request slug and call the wdtk://requests/{slug} resource or get_request_feed_items for full detail.

Example expressions:
status:successful
body:"Liverpool City Council"
(variety:sent OR variety:response) status:successful

Parameters (JSON Schema):
limit (optional)
search_expression (required)

Output Schema (JSON Schema):
result (required)
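The navigation step the description suggests (extract the request slug from each result's link field) can be sketched like this; the `/request/{slug}` path shape is an assumption about WhatDoTheyKnow URLs:

```python
from urllib.parse import urlparse

def slug_from_entry_link(link):
    # Split the URL path and take the segment after "request",
    # e.g. .../request/example_slug -> "example_slug".
    parts = urlparse(link).path.strip("/").split("/")
    if len(parts) >= 2 and parts[0] == "request":
        return parts[1]
    return None
```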
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, indicating safe, non-destructive behavior. The description adds behavioral specifics: returning up to `limit` AtomEntry objects, the query syntax with examples, and using the `link` field for navigation. It does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence. It then provides usage guidance and examples in a compact format without unnecessary words. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema (so return fields are documented there) and the input schema has only two parameters, the description covers query syntax, limit, and navigation steps. It provides enough context for an AI agent to use the tool effectively, including a complete set of example expressions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero description coverage for both parameters. The description compensates by providing example expressions for search_expression (e.g., 'status:successful') and explaining that limit controls the number of returned entries ('Returns up to `limit` AtomEntry objects'). This adds crucial semantic meaning.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches 'WhatDoTheyKnow's feed-based event index' and returns structured results. It uses a specific verb (search) and resource (feed-based event index), and the context of finding FOI requests differentiates it from sibling tools like get_request_feed_items.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Call this to find FOI requests matching a query expression' and gives example expressions. It also provides an alternative: 'or get_request_feed_items for full detail' after extracting the slug, guiding when to use each tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

update_request_state (C)
Destructive

Update the user-assessed state of a request through the experimental write API.

Requires WDTK_API_KEY in the server environment.

Parameters (JSON Schema):
state (required)
request_id (required)

Output Schema (JSON Schema):
No output parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare destructiveHint=true, so the description's 'Update' confirms mutation. The description adds valuable context: 'experimental write API' warns of potential instability, and 'Requires WDTK_API_KEY' specifies a prerequisite. However, it does not discuss failure behavior, rate limits, or side effects beyond what annotations imply.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise (two sentences) and front-loads the core purpose. However, it could benefit from a brief note on state values or a use example. Still, every sentence earns its place with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has two required parameters and an output schema (which may document return values), but the description omits critical context: what are valid state transitions? Is the update idempotent? Are there any side effects beyond state change? The mention of 'experimental' partially compensates, but overall completeness is low.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has two parameters (request_id, state) with zero description coverage. The description does not clarify the meaning of 'state' (e.g., allowed values, format) or 'request_id' (e.g., where to find it). Since coverage is 0%, the description must compensate but fails entirely, leaving agents guessing.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Update') and the resource ('the user-assessed state of a request'). It distinguishes the tool from siblings like 'create_request_record' (which creates) and 'get_request_feed_items' (which reads). However, the term 'user-assessed state' remains somewhat vague, and the description does not specify what states are possible, slightly reducing clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal guidance on when to use this tool. It mentions 'experimental write API' implying caution, but does not explicitly state when to prefer this tool over alternatives (e.g., create_request_record for new requests, search_* for reading). No when-not-to-use or context triggering usage is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


Resources