Glama

UK Civic & Parliamentary Data MCP Server from MCPBundles

Ownership verified

Server Details

Query UK Parliament, elections, crime stats, ONS census data, and national archives

Status: Unhealthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: thinkchainai/mcpbundles
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4.2/5 across 33 of 33 tools scored.

Server Coherence (Grade: A)
Disambiguation: 4/5

Most tools are well-differentiated by their specific data sources and purposes, such as bank holidays, elections, ONS statistics, parliamentary data, National Archives, and police data. However, there is some overlap between 'ons-get-dataset-5b5' and 'ons-get-dataset-6f5', which appear to serve similar functions with minor differences in returned metadata, potentially causing confusion. Additionally, 'ons-list-editions-5b5' and 'ons-list-editions-6f5' are similarly redundant, though other tools generally have clear boundaries.

Naming Consistency: 3/5

The naming follows a general pattern of 'source-action-resource' with hyphens, but there are inconsistencies. For example, 'ec-get-election-b3f' uses 'get' while 'ec-search-ballots-b3f' uses 'search', and some tools like 'nomis-query-dataset-af0' use 'query' instead of 'get' or 'search'. The suffix codes (e.g., '-ae1', '-b3f') add unnecessary variation, and a few tools deviate slightly in structure, making the pattern mixed but still mostly readable.

Tool Count: 3/5

With 33 tools, the count is high but justifiable given the broad scope covering multiple UK data domains (e.g., holidays, elections, statistics, archives, police). However, it borders on being heavy, as some redundancy (e.g., duplicate ONS dataset tools) could have been consolidated. For a server focused on civic and parliamentary data, the number is reasonable but could be optimized to reduce overlap and improve usability.

Completeness: 4/5

The tool set provides comprehensive coverage across its intended domains, offering search, get, and list operations for bank holidays, elections, ONS data, parliamentary info, archives, and police data. Minor gaps exist, such as no explicit update or delete tools (which may not be needed for read-only data) and some domains like police data lacking a direct 'create' tool (appropriate for public data). Overall, the surface supports most agent workflows without significant dead ends.

Available Tools

33 tools
bank-holidays-list-ae1 (Grade: A)
Read-only · Idempotent

List UK bank holidays for a specific division (England and Wales, Scotland, or Northern Ireland). Optionally filter by year. Returns holiday title, date, notes, and bunting status.

Parameters (JSON Schema)
- year (optional): Filter to a specific year (e.g. 2025). Returns all years if omitted.
- division (required): UK division: 'england-and-wales', 'scotland', or 'northern-ireland'.
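Read as a tool call, the schema above implies an argument payload like the following sketch (illustrative only; the actual invocation mechanics depend on your MCP client):

```python
# Hypothetical arguments for bank-holidays-list-ae1.
# 'division' is required; 'year' is optional, and omitting it returns all years.
VALID_DIVISIONS = {"england-and-wales", "scotland", "northern-ireland"}

args = {"division": "scotland", "year": 2025}

# Client-side sanity check mirroring the documented constraint.
assert args["division"] in VALID_DIVISIONS
```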
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, which cover safety and idempotency. The description adds valuable behavioral context beyond annotations by specifying the return format ('Returns holiday title, date, notes, and bunting status') and clarifying the default behavior when year is omitted ('Returns all years if omitted').

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that each earn their place. The first sentence establishes the core purpose, and the second sentence provides important behavioral details about filtering and return values. No wasted words or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only, idempotent tool with complete schema documentation and no output schema, the description provides excellent context. It covers purpose, usage, return format, and default behavior. The only minor gap is not explicitly addressing the sibling tool relationship, but given the annotations and schema completeness, this is still highly effective.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters. The description mentions the optional year filtering and division specificity, but doesn't add meaningful semantic information beyond what's in the schema descriptions. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('UK bank holidays'), specifies the scope ('for a specific division'), and distinguishes from siblings by mentioning filtering capabilities. It goes beyond the title 'Listing bank holidays' to provide concrete details about what the tool does.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool (to get bank holidays for UK divisions with optional year filtering). However, it doesn't explicitly mention when NOT to use it or name specific alternatives among the sibling tools, though the sibling 'bank-holidays-next-ae1' appears to be a related alternative.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

bank-holidays-next-ae1 (Grade: A)
Read-only · Idempotent

Get the next upcoming bank holiday(s) for a UK division (England and Wales, Scotland, or Northern Ireland). Returns the next 1 holiday by default, up to 10.

Parameters (JSON Schema)
- count (optional): Number of upcoming bank holidays to return (default 1, max 10).
- division (required): UK division: 'england-and-wales', 'scotland', or 'northern-ireland'.
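Since 'count' is documented as defaulting to 1 and capping at 10, a caller might clamp the value before sending it; this is a hypothetical client-side helper, not part of the tool itself:

```python
# Hypothetical arguments for bank-holidays-next-ae1.
def clamp_count(requested: int) -> int:
    """Clamp the requested number of holidays to the documented 1-10 range."""
    return max(1, min(requested, 10))

args = {"division": "northern-ireland", "count": clamp_count(25)}
assert args["count"] == 10  # over-asking is capped at the schema's max
```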
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds valuable behavioral context beyond annotations by specifying the return behavior ('Returns the next 1 holiday by default, up to 10'), which helps the agent understand the tool's output characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise and front-loaded in a single sentence that communicates all essential information without any wasted words. Every element serves a clear purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only tool with good annotations and full schema coverage, the description provides adequate context. The lack of an output schema means the description's mention of return behavior is helpful, though more detail about the return format could improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters well-documented in the schema. The description doesn't add any parameter semantics beyond what the schema already provides, so it meets the baseline of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get'), resource ('next upcoming bank holiday(s)'), and scope ('for a UK division'). It distinguishes from sibling tools like 'bank-holidays-list-ae1' by focusing on upcoming holidays rather than listing all holidays.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (to get upcoming bank holidays for UK divisions). It doesn't explicitly mention when not to use it or name alternatives, but the context is sufficiently clear given the tool's specific purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ec-get-election-b3f (Grade: A)
Read-only · Idempotent

Get detailed information about a specific UK election by its election ID. Returns election type, date, organisation, child elections, voting system, and whether voter ID is required.

Parameters (JSON Schema)
- election_id (required): Election ID (e.g. 'local.2023-04-20', 'parl.2024-07-04', 'local.hackney.2022-05-05').
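The schema's examples suggest election IDs are dot-separated, with a type prefix and an ISO-style date suffix. A hedged sketch of a call payload with a light format check (the check itself is an assumption drawn from the examples, not a documented guarantee):

```python
# Hypothetical arguments for ec-get-election-b3f.
# Type codes are taken from the ec-search-elections-b3f schema.
ELECTION_TYPES = {"local", "parl", "mayor", "pcc", "senedd", "sp", "gla", "nia"}

election_id = "parl.2024-07-04"
parts = election_id.split(".")

assert parts[0] in ELECTION_TYPES   # type prefix
assert len(parts[-1]) == 10         # YYYY-MM-DD date suffix

args = {"election_id": election_id}
```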
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context by specifying the return data structure (election type, date, organisation, etc.), which isn't covered by annotations and helps the agent understand what information will be retrieved.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that front-loads the purpose and efficiently lists the return details. Every word earns its place with no redundancy or wasted space.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only tool with one parameter and no output schema, the description provides good context by detailing the return fields. However, it doesn't mention potential limitations like error handling or data freshness, leaving minor gaps in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'election_id' fully documented in the schema. The description doesn't add any additional parameter semantics beyond what's in the schema, so it meets the baseline of 3 where the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get detailed information'), resource ('a specific UK election'), and key identifier ('by its election ID'). It distinguishes from sibling tools like 'ec-search-elections-b3f' (searching multiple) and 'ec-get-parl-result-b3f' (getting results, not details).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when you have a specific election ID and need detailed information, distinguishing it from search tools. However, it doesn't explicitly state when NOT to use it or name specific alternatives, though the sibling list shows clear search vs. get distinctions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ec-get-parl-result-b3f (Grade: A)
Read-only · Idempotent

Get detailed results for a specific UK parliamentary election in a constituency. Returns full candidate list with names, parties, vote counts, plus electorate size, turnout, and majority.

Parameters (JSON Schema)
- result_id (required): Parliament election result ID (e.g. '382387').
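Per the sibling tool's description, the result ID would normally come from an ec-search-parl-results-b3f response rather than be constructed by hand. A minimal sketch of the payload, assuming the ID stays a numeric string as in the schema example:

```python
# Hypothetical arguments for ec-get-parl-result-b3f.
result_id = "382387"  # taken from a prior search result in practice
assert result_id.isdigit()  # the schema example is a numeric string

args = {"result_id": result_id}
```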
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, indicating a safe, repeatable read operation. The description adds value by specifying the return content (candidate list, electorate size, turnout, majority) and clarifying it's for a single result, which enhances behavioral understanding beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose, scope, and return values without any redundant information. It is front-loaded with the main action and provides essential details concisely.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 parameter, 100% schema coverage, no output schema), the description is complete enough for a read-only operation. It specifies what data is returned, which compensates for the lack of an output schema, though it could briefly mention error handling or data format for full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'result_id' parameter fully documented in the schema. The description does not add additional meaning or examples beyond what the schema provides, such as format details or usage context, so it meets the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get detailed results'), resource ('UK parliamentary election in a constituency'), and scope ('full candidate list with names, parties, vote counts, plus electorate size, turnout, and majority'). It distinguishes from sibling tools like 'ec-search-parl-results-b3f' by focusing on a single result rather than searching multiple results.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by specifying it's for a 'specific UK parliamentary election in a constituency' and requires a 'result_id', but it does not explicitly state when to use this tool versus alternatives like 'ec-search-parl-results-b3f' or provide exclusions. The context is clear but lacks explicit guidance on tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ec-search-ballots-b3f (Grade: A)
Read-only · Idempotent

Search UK election ballots from the Democracy Club candidates database. Ballots represent a specific vote in a ward or constituency. Returns ballot paper IDs, candidates, wards, winner count, voting system, and results where available.

Parameters (JSON Schema)
- limit (optional): Maximum number of results (default 20).
- election_id (optional): Filter ballots by election ID (e.g. 'local.2023-04-20').
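Both parameters are optional, so the empty payload is itself a valid call. A hedged sketch showing both forms and the documented default:

```python
# Hypothetical arguments for ec-search-ballots-b3f.
# Scoped search: ballots for one election, capped at 5 results.
args = {"election_id": "local.2023-04-20", "limit": 5}

# Unscoped search: an empty payload returns up to the default 20 ballots.
default_args = {}
effective_limit = default_args.get("limit", 20)
assert effective_limit == 20
```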
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context about what data is returned (ballot paper IDs, candidates, wards, etc.) and clarifies that results are 'where available', which helps set expectations about partial data. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first establishes purpose and source, the second details what's returned. Every word adds value with zero redundancy, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only search tool with good annotations and full schema coverage, the description provides adequate context about the data domain and return fields. The lack of an output schema is partially compensated by listing return data types. However, it could mention pagination behavior or result ordering to be fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters ('limit' and 'election_id') well-documented in the schema. The description doesn't add any parameter-specific information beyond what's already in the schema, so it meets the baseline of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Search UK election ballots'), resource ('from the Democracy Club candidates database'), and scope ('ballots represent a specific vote in a ward or constituency'). It distinguishes from siblings like 'ec-search-elections-b3f' by focusing specifically on ballots rather than elections themselves.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by mentioning what ballots represent and what data is returned, but doesn't explicitly state when to use this tool versus alternatives like 'ec-search-elections-b3f' or 'ec-search-parl-results-b3f'. No explicit guidance on when-not-to-use or prerequisites is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ec-search-elections-b3f (Grade: A)
Read-only · Idempotent

Search UK elections from the Democracy Club database. Filter by election type (local, parliamentary, mayoral, etc.) and whether the election is current/upcoming. Returns election IDs, titles, dates, and organisation details.

Parameters (JSON Schema)
- limit (optional): Maximum number of results (default 20).
- offset (optional): Number of results to skip for pagination (default 0).
- current (optional): If true, only return current/upcoming elections.
- election_type (optional): Filter by election type: 'local', 'parl' (parliamentary), 'mayor', 'pcc' (police & crime commissioner), 'senedd', 'sp' (Scottish Parliament), 'gla' (Greater London Assembly), 'nia' (Northern Ireland Assembly).
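The limit/offset pair supports offset-based pagination. A hypothetical helper (the function is illustrative, not part of the tool) that builds arguments for paging through current parliamentary elections:

```python
# Hypothetical pagination helper for ec-search-elections-b3f.
def page_args(page: int, page_size: int = 20) -> dict:
    """Build limit/offset arguments for a given zero-based page."""
    return {
        "election_type": "parl",  # parliamentary elections only
        "current": True,          # restrict to current/upcoming
        "limit": page_size,
        "offset": page * page_size,
    }

assert page_args(0)["offset"] == 0
assert page_args(2)["offset"] == 40
```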
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable behavioral context beyond annotations by specifying the return format ('Returns election IDs, titles, dates, and organisation details') and clarifying the filtering logic for 'current/upcoming' elections.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states purpose and filtering capabilities, the second specifies return values. Every element serves a clear purpose with zero wasted words, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search with filtering), rich annotations (read-only, idempotent), and 100% schema coverage, the description provides good contextual completeness. It explains what the tool returns, though without an output schema, some details about response structure remain unspecified. The main gap is lack of explicit sibling differentiation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with all parameters well-documented in the schema itself. The description mentions filtering by 'election type' and 'current/upcoming' status, which aligns with parameters in the schema but doesn't add significant semantic value beyond what's already provided in the structured fields.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search UK elections'), resource ('Democracy Club database'), and scope ('Filter by election type and current/upcoming status'). It distinguishes from sibling tools like 'ec-get-election-b3f' (single election retrieval) and 'ec-search-ballots-b3f' (different resource).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Search UK elections... Filter by election type and whether the election is current/upcoming'). However, it doesn't explicitly state when NOT to use it or name specific alternatives among siblings (e.g., 'ec-get-election-b3f' for single election retrieval).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ec-search-parl-results-b3f (Grade: A)
Read-only · Idempotent

Search UK parliamentary election results from the Parliament data API. Returns constituency results including electorate size, turnout, majority, and the overall result (e.g. 'Lab Hold', 'Con Gain from Lab'). Use the result ID with the get_parl_result tool for candidate vote counts.

Parameters (JSON Schema)
- page (optional): Page number (0-indexed, default 0).
- page_size (optional): Results per page (default 10, max 50).
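Note that this tool paginates with a 0-indexed page/page_size pair rather than the limit/offset pair the other ec-search-* tools use. A sketch of a payload with a hypothetical client-side clamp to the documented bound:

```python
# Hypothetical arguments for ec-search-parl-results-b3f.
def clamp_page_size(requested: int) -> int:
    """Keep page_size within the documented 1-50 bound (default 10)."""
    return max(1, min(requested, 50))

args = {"page": 0, "page_size": clamp_page_size(100)}
assert args["page_size"] == 50  # over-asking is capped at the schema's max
```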
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable behavioral context beyond annotations by specifying the return content (e.g., 'electorate size, turnout, majority, overall result') and hinting at pagination through the mention of result IDs, though it doesn't explicitly detail rate limits or authentication needs. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by usage guidance in the second. Both sentences are essential: the first defines the tool's function, and the second clarifies its relationship with another tool. There is no wasted text, making it highly efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search with pagination), rich annotations (read-only, idempotent, non-destructive), and no output schema, the description is largely complete. It covers what the tool does, return content, and usage context. However, it could improve by explicitly mentioning pagination behavior or result format details, but annotations help compensate for this gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters ('page' and 'page_size') fully documented in the input schema. The description does not add any parameter-specific information beyond what the schema provides, such as search filters or query terms, so it meets the baseline of 3 where the schema handles parameter documentation effectively.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search UK parliamentary election results'), resource ('from the Parliament data API'), and scope ('constituency results including electorate size, turnout, majority, and overall result'). It distinguishes from sibling tools like 'ec-get-parl-result-b3f' by specifying that tool is for candidate vote counts, while this one returns constituency-level results.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides usage guidance by stating 'Use the result ID with the get_parl_result tool for candidate vote counts,' which tells the agent when to use this tool versus an alternative. It also implies this tool is for constituency-level results, not candidate details, providing clear context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

nomis-query-dataset-af0 (Grade: A)
Read-only · Idempotent

Query a specific Nomis dataset with filters for geography, time period, variable, and measure type. Returns statistical observations with values and metadata. Use nomis_search_datasets first to find dataset IDs and understand available dimensions.

Parameters (JSON Schema)
- dataset_id (required): Dataset ID from search results (e.g. 'NM_17_5' for Annual Population Survey percentages, 'NM_1_1' for JSA claimants, 'NM_2010_1' for earnings).
- geography (optional): Geography code. Common values: '2092957699' (England), '2013265925' (Great Britain), '2092957697' (United Kingdom). Multiple values can be comma-separated.
- time (optional): Time period. Use 'latest' for most recent, or specific periods like '2024-01', '2023'. Multiple values can be comma-separated.
- variable (optional): Variable code for what to measure (dataset-specific). E.g. '45' = Employment rate aged 16-64 in APS. Use nomis_search_datasets to discover available variables.
- measures (optional): Measure type. Common values: '20100' (value/count), '20599' (percentage/variable), '20701' (confidence interval).
- age (optional): Age group code (dataset-specific).
- sex (optional): Sex filter: '5' (male), '6' (female), '7' (all).
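Pulling the schema's own example codes together, a query for the APS employment rate across two geographies might look like this sketch; note that multi-value fields (geography, time) are comma-separated strings, not lists:

```python
# Hypothetical arguments for nomis-query-dataset-af0, using codes
# from the schema examples above.
geographies = ["2092957699", "2013265925"]  # England, Great Britain

args = {
    "dataset_id": "NM_17_5",            # APS percentages
    "geography": ",".join(geographies), # comma-separated, per the schema
    "time": "latest",
    "variable": "45",                   # employment rate aged 16-64
    "measures": "20599",                # percentage
}
assert args["geography"] == "2092957699,2013265925"
```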
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows this is a safe, repeatable read operation. The description adds useful context about what the tool returns ('statistical observations with values and metadata'), but doesn't provide additional behavioral details like rate limits, authentication requirements, or pagination behavior that would be valuable beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured with two sentences: the first explains what the tool does and what it returns, the second provides crucial usage guidance. Every sentence earns its place with zero wasted words, making it highly efficient and front-loaded with essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a query tool with comprehensive annotations (read-only, idempotent, non-destructive) and 100% schema coverage, the description provides good contextual completeness. It explains the purpose, return format, and prerequisite workflow. The main gap is the lack of output schema, but the description does mention what gets returned ('statistical observations with values and metadata'), which partially compensates.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, all parameters are already documented in the input schema. The description mentions the filter categories (geography, time period, variable, measure type) but doesn't add specific syntax, format details, or examples beyond what the schema provides. The baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Query a specific Nomis dataset'), the resource ('dataset'), and the scope ('with filters for geography, time period, variable, and measure type'). It explicitly distinguishes from its sibling 'nomis_search_datasets' by stating that tool should be used first to find dataset IDs, establishing a clear workflow relationship.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('Use nomis_search_datasets first to find dataset IDs and understand available dimensions'), creating a clear prerequisite relationship. It also implies when not to use it by suggesting the search tool should be used first for discovery purposes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

nomis-search-datasets-af0 (A)
Read-only · Idempotent

Search available ONS Nomis datasets by keyword. Returns dataset IDs, names, descriptions, and metadata. Use the dataset ID (e.g. 'NM_17_5') with nomis_query_dataset to retrieve actual data. Covers employment, unemployment, population, earnings, benefits, and economic activity.

Parameters

- search (required): Keyword to search datasets (e.g. 'employment', 'population', 'earnings', 'unemployment', 'benefits').
Behavior 4/5

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context beyond this: it specifies the return format ('dataset IDs, names, descriptions, and metadata') and clarifies the tool's scope ('Covers employment, unemployment...'), which helps the agent understand what to expect. No contradictions with annotations exist.

Conciseness 5/5

The description is efficiently structured in three sentences: the first states the purpose and return format, the second provides critical usage guidance, and the third adds scope context. Each sentence earns its place with no wasted words, and key information is front-loaded (searching and metadata return).

Completeness 4/5

Given the tool's low complexity (1 parameter, no output schema) and rich annotations (readOnly, idempotent, non-destructive), the description is largely complete. It covers purpose, usage vs. alternatives, return format, and scope. The only minor gap is lack of explicit mention of pagination or result limits, but annotations and simplicity mitigate this.

Parameters 3/5

Schema description coverage is 100%, with the 'search' parameter fully documented in the schema. The description does not add any parameter-specific details beyond what the schema provides (e.g., no additional syntax or format guidance). According to the rules, when schema coverage is high (>80%), the baseline score is 3, as the schema carries the primary burden.

Purpose 5/5

The description clearly states the specific action ('Search available ONS Nomis datasets by keyword') and resource ('datasets'), distinguishing it from siblings like 'nomis-query-dataset-af0' (which retrieves actual data) and 'ons-search-datasets-5b5' (which searches ONS datasets, not Nomis). It explicitly mentions coverage areas (employment, unemployment, etc.), making the purpose highly specific and differentiated.

Usage Guidelines 5/5

The description provides explicit guidance on when to use this tool vs. alternatives: it states 'Use the dataset ID... with nomis_query_dataset to retrieve actual data,' clearly indicating this tool is for searching metadata only, not for data retrieval. This directly addresses the key alternative (nomis-query-dataset-af0) and sets clear boundaries for usage.

ons-get-dataset-5b5 (A)
Read-only · Idempotent

Get detailed metadata for a specific ONS dataset by its ID. Returns title, description, contacts, methodology links, release frequency, and links to editions and latest version.

Parameters

- dataset_id (required): Dataset identifier (e.g. 'wellbeing-quarterly', 'cpih01', 'mid-year-pop-est'). Use ons_search_datasets to find IDs.
Behavior 4/5

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, indicating a safe, repeatable read operation. The description adds value by specifying the return content (e.g., methodology links, release frequency) and clarifying that it fetches metadata for a specific dataset, which provides useful context beyond the annotations.

Conciseness 5/5

The description is a single, efficient sentence that front-loads the core action and resource, followed by a clear list of returned metadata. Every part adds value without redundancy, making it appropriately sized and well-structured.

Completeness 4/5

Given the tool's low complexity (1 parameter, no nested objects), rich annotations (covering safety and idempotency), and high schema coverage, the description is largely complete. It specifies the return content, which compensates for the lack of an output schema. However, it could briefly mention error handling or data format for full completeness.

Parameters 3/5

Schema description coverage is 100%, with the parameter dataset_id well-documented in the schema (including examples and a reference to ons_search_datasets). The description does not add any additional parameter details beyond what the schema provides, so it meets the baseline of 3 for high schema coverage.

Purpose 5/5

The description clearly states the specific action ('Get detailed metadata'), resource ('ONS dataset'), and scope ('by its ID'), distinguishing it from siblings like ons_list_datasets (list) and ons_search_datasets (search). It explicitly mentions what is returned (title, description, contacts, etc.), providing a precise purpose.

Usage Guidelines 4/5

The description implies usage context by specifying 'by its ID' and referencing ons_search_datasets in the schema to find IDs, but it does not explicitly state when to use this tool versus alternatives like ons_list_datasets or ons_search_datasets. It provides clear context but lacks explicit exclusions or named alternatives.

ons-get-dataset-6f5 (A)
Read-only · Idempotent

Get detailed metadata for a specific ONS dataset by its ID. Returns the dataset title, description, contacts, methodologies, related datasets, release frequency, and links to editions and the latest version.

Parameters

- dataset_id (required): Dataset identifier (e.g. 'cpih01', 'mid-year-pop-est', 'wellbeing-quarterly')
Behavior 3/5

Annotations provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, indicating a safe, repeatable read operation. The description adds value by specifying the return content (e.g., contacts, methodologies, links), which isn't covered by annotations, but doesn't detail rate limits, auth needs, or pagination. No contradiction with annotations.

Conciseness 5/5

The description is a single, well-structured sentence that front-loads the purpose and efficiently lists return details without unnecessary words. Every part adds value, making it appropriately concise.

Completeness 4/5

Given the tool's low complexity (1 parameter, no nested objects), rich annotations (read-only, idempotent), and no output schema, the description is mostly complete. It specifies return content, but could benefit from mentioning error handling or format details, though not critical for this simple tool.

Parameters 3/5

Schema description coverage is 100%, with the 'dataset_id' parameter fully documented in the schema. The description doesn't add extra parameter details beyond implying it's for a specific dataset, so it meets the baseline of 3 where the schema does the heavy lifting.

Purpose 5/5

The description clearly states the verb ('Get') and resource ('detailed metadata for a specific ONS dataset by its ID'), specifying it returns title, description, contacts, methodologies, related datasets, release frequency, and links. It distinguishes from siblings like 'ons-list-datasets-6f5' (list datasets) and 'ons-get-observations-6f5' (get data observations).

Usage Guidelines 4/5

The description implies usage when detailed metadata for a specific dataset is needed, based on its ID. It doesn't explicitly state when not to use it or name alternatives, but the context suggests it's for metadata retrieval rather than listing or searching datasets, which is clear from the sibling tools.

ons-get-observations-6f5 (A)
Read-only · Idempotent

Get data observations from a specific ONS dataset version. Requires dimension filters as a query string — all dimensions must be specified. Use ons_list_versions to see which dimensions a dataset has, and use the dimension options links to discover valid filter codes. Example dimension_filters: 'time=Jan-26&geography=K02000001&aggregate=cpih1dim1A0'.

Parameters

- limit (optional): Maximum number of observations to return
- offset (optional): Number of observations to skip
- edition (required): Edition name (e.g. 'time-series')
- version (required): Version number or 'latest' (e.g. '67', 'latest')
- dataset_id (required): Dataset identifier (e.g. 'cpih01')
- dimension_filters (required): URL query string of dimension filters. All dimensions of the dataset must be included. Use the list_versions endpoint to see available dimensions, then the dimension options to find valid codes. Example: 'time=Jan-26&geography=K02000001&aggregate=cpih1dim1A0'
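Since the tool rejects calls that omit any dimension, it can help to validate the dimension_filters string against the dimension list from ons_list_versions before calling. A minimal sketch; `check_dimension_filters` is a hypothetical helper, not part of the tool:

```python
from urllib.parse import parse_qs

def check_dimension_filters(dimension_filters, dimensions):
    """Verify a dimension_filters query string covers every dimension
    the dataset version declares (as returned by ons_list_versions).

    `dimensions` is the list of dimension names for the version; the
    helper name and return shape are illustrative assumptions."""
    supplied = parse_qs(dimension_filters)
    missing = [d for d in dimensions if d not in supplied]
    if missing:
        raise ValueError(f"dimension_filters is missing: {missing}")
    # Flatten single-valued entries for readability.
    return {k: v[0] for k, v in supplied.items()}

filters = "time=Jan-26&geography=K02000001&aggregate=cpih1dim1A0"
parsed = check_dimension_filters(filters, ["time", "geography", "aggregate"])
print(parsed)
```

Standard query-string parsing (parse_qs) handles the format shown in the tool's own example.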
Behavior 4/5

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering basic safety. The description adds valuable context about the requirement to specify all dimensions and provides an example format for dimension_filters, which goes beyond what annotations provide. No contradictions exist between description and annotations.

Conciseness 5/5

The description is front-loaded with the core purpose, followed by specific requirements and usage guidance, then concludes with a concrete example. Every sentence serves a distinct purpose: no wasted words, and the structure flows logically from general to specific.

Completeness 4/5

Given the tool's complexity (6 parameters, no output schema) and rich annotations, the description is largely complete. It covers purpose, prerequisites, and usage context effectively. However, it doesn't detail the response format or potential limitations like pagination behavior, leaving a minor gap despite the annotations providing safety assurances.

Parameters 3/5

Schema description coverage is 100%, so the schema already fully documents all parameters. The description adds minimal extra semantics by emphasizing that dimension_filters must include all dimensions and providing an example, but doesn't significantly enhance understanding beyond the schema's thorough descriptions. This meets the baseline for high schema coverage.

Purpose 5/5

The description clearly states the action ('Get data observations') and target resource ('from a specific ONS dataset version'), distinguishing it from sibling tools like ons-list-datasets or ons-search. It specifies the exact scope (observations with dimension filters) rather than just listing datasets or metadata.

Usage Guidelines 5/5

The description explicitly states when to use this tool ('Requires dimension filters as a query string — all dimensions must be specified') and provides clear alternatives ('Use ons_list_versions to see which dimensions a dataset has, and use the dimension options links to discover valid filter codes'). This gives comprehensive guidance on prerequisites and related tools.

ons-list-datasets-6f5 (A)
Read-only · Idempotent

List available datasets from the UK Office for National Statistics. Returns dataset IDs, titles, descriptions, and links to editions. Use this to discover what statistical datasets are available (e.g. CPIH inflation, GDP, population estimates).

Parameters

- limit (optional): Maximum number of datasets to return (max 100)
- offset (optional): Number of items to skip (for pagination)
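The limit/offset pair implies the usual pagination loop: request pages until one comes back short. A sketch under the assumption that a short or empty page signals the end of the listing (the stubbed `stub_page` backend stands in for a real tool call):

```python
def fetch_all_datasets(list_page, page_size=100):
    """Drain a limit/offset paginated listing such as ons-list-datasets.

    `list_page(limit, offset)` stands in for a call to the tool and
    returns one page of items; a short (or empty) page ends the loop.
    page_size respects the documented max of 100.
    """
    items, offset = [], 0
    while True:
        page = list_page(limit=page_size, offset=offset)
        items.extend(page)
        if len(page) < page_size:
            return items
        offset += page_size

# Stub backend with 250 fake dataset IDs to exercise the loop.
catalogue = [f"dataset-{i}" for i in range(250)]

def stub_page(limit, offset):
    return catalogue[offset:offset + limit]

all_datasets = fetch_all_datasets(stub_page)
print(len(all_datasets))
```

The same loop applies to the other paginated tools here (ons-list-editions-6f5, ons-list-versions-6f5, ons-get-observations-6f5).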
Behavior 4/5

Annotations already indicate readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds value by specifying the return content ('dataset IDs, titles, descriptions, and links to editions') and implying it's a discovery tool, which provides useful context beyond the annotations. However, it doesn't mention rate limits or authentication needs, leaving some behavioral aspects uncovered.

Conciseness 5/5

The description is two sentences: the first states the purpose and return values, and the second provides usage guidance with examples. Every sentence adds value without redundancy, making it efficiently structured and appropriately sized for the tool's complexity.

Completeness 4/5

Given the tool's low complexity (2 parameters, no output schema), good annotations, and 100% schema coverage, the description is mostly complete. It covers purpose, usage, and return content, but lacks details on error handling or response format, which could be helpful since there's no output schema. Still, it provides sufficient context for effective use.

Parameters 3/5

Schema description coverage is 100%, with both parameters ('limit' and 'offset') well-documented in the schema. The description doesn't add any parameter-specific details beyond what the schema provides, so it meets the baseline of 3 where the schema does the heavy lifting for parameter documentation.

Purpose 5/5

The description clearly states the action ('List available datasets'), resource ('from the UK Office for National Statistics'), and scope ('available datasets'), with specific examples like 'CPIH inflation, GDP, population estimates' that help distinguish it from sibling tools like 'ons-search-datasets-5b5' or 'ons-get-dataset-6f5' which likely retrieve specific datasets rather than listing all available ones.

Usage Guidelines 5/5

It explicitly states 'Use this to discover what statistical datasets are available', providing clear context for when to use this tool versus alternatives like 'ons-search-6f5' (for searching) or 'ons-get-dataset-6f5' (for retrieving a specific dataset). This guidance helps the agent choose this tool for initial exploration rather than targeted queries.

ons-list-editions-5b5 (A)
Read-only · Idempotent

List available editions for a specific ONS dataset. Each edition represents a version track (e.g. 'time-series'). Returns edition names with links to their versions.

Parameters

- dataset_id (required): Dataset identifier (e.g. 'wellbeing-quarterly', 'cpih01'). Use ons_search_datasets or ons_get_dataset to find IDs.
Behavior 3/5

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds useful context about what the tool returns ('edition names with links to their versions'), which isn't in annotations. However, it doesn't mention rate limits, authentication needs, or pagination behavior.

Conciseness 5/5

The description is two concise sentences that are front-loaded with the core purpose and follow with return details. Every sentence adds value without redundancy, making it efficient and well-structured.

Completeness 4/5

Given the tool's low complexity (1 parameter, no output schema) and rich annotations, the description is mostly complete. It explains the purpose and return format adequately. However, it could benefit from clarifying the relationship with the similar sibling ons-list-editions-6f5 for full contextual coverage.

Parameters 3/5

Schema description coverage is 100%, with dataset_id fully documented in the schema. The description doesn't add any parameter-specific details beyond what the schema provides, so it meets the baseline of 3 for high schema coverage without extra value.

Purpose 5/5

The description clearly states the specific action ('List available editions'), target resource ('for a specific ONS dataset'), and scope ('Each edition represents a version track'). It distinguishes from siblings like ons-list-datasets-6f5 (lists datasets) and ons-list-versions-6f5 (lists versions within editions).

Usage Guidelines 4/5

The description implies usage when needing edition-level metadata for a known dataset, and the input schema's description for dataset_id references ons_search_datasets or ons_get_dataset as alternatives for finding IDs. However, it doesn't explicitly state when not to use this tool or compare it directly to siblings like ons-list-editions-6f5 (which appears identical).

ons-list-editions-6f5 (A)
Read-only · Idempotent

List available editions of an ONS dataset. Most datasets have a single 'time-series' edition. Returns edition names and links to their versions.

Parameters

- limit (optional): Maximum number of editions to return
- offset (optional): Number of items to skip
- dataset_id (required): Dataset identifier (e.g. 'cpih01')
Behavior 4/5

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context beyond this by noting that 'Most datasets have a single 'time-series' edition' and specifying the return format ('Returns edition names and links to their versions'), which helps set expectations without contradicting annotations.

Conciseness 5/5

The description is two sentences with zero waste: the first states the purpose and context, the second specifies the return format. It is front-loaded with the core function and efficiently conveys necessary information without redundancy or fluff.

Completeness 4/5

Given the tool's moderate complexity (list operation with pagination), rich annotations (read-only, idempotent, non-destructive), and 100% schema coverage, the description is largely complete. It adds useful context about typical dataset editions and return format. A minor gap is the lack of output schema, but the description partially compensates by specifying return content.

Parameters 3/5

Schema description coverage is 100%, with clear documentation for dataset_id, limit, and offset parameters. The description does not add any parameter-specific details beyond what the schema provides, such as examples for dataset_id beyond the schema's 'e.g. 'cpih01''. Baseline 3 is appropriate when the schema fully documents parameters.

Purpose 5/5

The description clearly states the specific action ('List available editions'), resource ('of an ONS dataset'), and scope ('Most datasets have a single 'time-series' edition'), distinguishing it from siblings like ons-list-datasets-6f5 or ons-list-versions-6f5. It precisely communicates what the tool does without being vague or tautological.

Usage Guidelines 3/5

The description implies usage context by mentioning 'Most datasets have a single 'time-series' edition', which suggests this tool is for exploring dataset editions. However, it lacks explicit guidance on when to use this versus alternatives like ons-list-versions-6f5 or ons-get-dataset-6f5, and does not specify prerequisites or exclusions.

ons-list-versions-6f5 (A)
Read-only · Idempotent

List versions of a specific ONS dataset edition. Each version is a numbered data release. The highest version number is the most recent. Returns version numbers, dimensions, download links, and release dates.

Parameters

- limit (optional): Maximum number of versions to return
- offset (optional): Number of items to skip
- edition (required): Edition name (typically 'time-series')
- dataset_id (required): Dataset identifier (e.g. 'cpih01')
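Because 'the highest version number is the most recent' and version identifiers come back as strings (which is also why the sibling ons-get-observations-6f5 accepts '67' or 'latest'), picking the latest release from a listing needs a numeric, not lexical, comparison. A small sketch (`latest_version` is a hypothetical helper):

```python
def latest_version(version_ids):
    """Pick the most recent release from an ons-list-versions response.

    Version identifiers arrive as strings (e.g. '67'); compare them
    numerically, since a lexical max would wrongly rank '9' above '67'.
    """
    return max(version_ids, key=int)

print(latest_version(["9", "10", "67"]))
```

In practice, passing 'latest' to the version parameter avoids this step when only the newest data is needed.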
Behavior 3/5

Annotations already provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds useful context about what 'versions' are ('numbered data release') and return content ('version numbers, dimensions, download links, and release dates'), but doesn't mention pagination behavior (implied by limit/offset parameters) or rate limits.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: first states the purpose, second explains what versions are, third specifies return content. Every sentence adds value with zero waste, and it's front-loaded with the core action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only list tool with good annotations (readOnlyHint, idempotentHint) and full schema coverage, the description is mostly complete. It explains the resource context and return values, though it doesn't explicitly mention pagination (handled by limit/offset) or error cases. No output schema exists, so describing return content is helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 4 parameters. The description doesn't add any parameter-specific details beyond what's in the schema, but it implicitly reinforces the purpose of dataset_id and edition as identifiers for the target resource. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('List versions') and target resource ('a specific ONS dataset edition'). It distinguishes itself from sibling tools like 'ons-list-datasets-6f5' (lists datasets) and 'ons-list-editions-6f5' (lists editions) by focusing on versions within an edition.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('List versions of a specific ONS dataset edition'), but doesn't explicitly state when not to use it or name alternatives. It implies usage by specifying the required parameters (dataset_id and edition), though it doesn't compare with other version-related tools if they exist.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ons-search-6f5 (A)
Read-only

Search the Office for National Statistics for datasets, bulletins, articles, and time series. Returns matching content with titles, descriptions, and links. Use the content_type filter to narrow results to specific types (e.g. 'dataset' for only datasets).

Parameters (JSON Schema)
Name | Required | Description
limit | No | Maximum number of results to return
query | Yes | Search query (e.g. 'inflation', 'GDP', 'population')
offset | No | Number of results to skip
content_type | No | Filter by content type. Options include: 'dataset', 'dataset_landing_page', 'bulletin', 'article', 'timeseries', 'release', 'static_adhoc'
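Because content_type accepts only a fixed set of values, an agent harness can validate the filter before issuing the call. A sketch under that assumption; the allowed values are taken from the parameter table, and the helper name is hypothetical.

```python
# Allowed content_type values per the ons-search-6f5 schema.
ALLOWED_CONTENT_TYPES = {
    "dataset", "dataset_landing_page", "bulletin",
    "article", "timeseries", "release", "static_adhoc",
}

def build_search_args(query, content_type=None, limit=None, offset=None):
    """Assemble validated arguments for ons-search-6f5."""
    if content_type is not None and content_type not in ALLOWED_CONTENT_TYPES:
        raise ValueError(f"unsupported content_type: {content_type!r}")
    args = {"query": query}
    for key, value in (("content_type", content_type),
                       ("limit", limit), ("offset", offset)):
        if value is not None:
            args[key] = value
    return args

args = build_search_args("inflation", content_type="dataset", limit=10)
```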
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, idempotentHint=false, and destructiveHint=false, which cover safety and idempotency. The description adds useful context about what the tool returns ('Returns matching content with titles, descriptions, and links') and the content_type filter's purpose, but does not disclose additional behavioral traits like rate limits, authentication needs, or pagination behavior beyond the schema's offset/limit parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with two sentences: the first states the purpose and return format, and the second provides a usage tip for filtering. Every sentence adds value without redundancy, and it is front-loaded with the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search with filtering), rich annotations, and full schema coverage, the description is mostly complete. It explains what is searched and returned, but lacks output schema details (e.g., result structure) and does not fully address behavioral aspects like error handling or performance, though annotations cover key safety traits.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters. The description adds minimal value by mentioning the content_type filter with examples ('e.g. 'dataset' for only datasets'), but does not provide additional semantic context beyond what the schema already specifies for query, limit, or offset.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Search the Office for National Statistics') and resources ('datasets, bulletins, articles, and time series'), and distinguishes it from sibling tools like 'ons-list-datasets-6f5' or 'ons-search-datasets-5b5' by mentioning it searches across multiple content types rather than just datasets.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use this tool ('Search the Office for National Statistics for datasets, bulletins, articles, and time series') and includes a usage tip ('Use the content_type filter to narrow results to specific types'), but it does not explicitly state when not to use it or name specific alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ons-search-datasets-5b5 (A)
Read-only · Idempotent

Search the Office for National Statistics for datasets and content by keyword. Returns matching datasets, articles, and bulletins with titles, descriptions, and links.

Parameters (JSON Schema)
Name | Required | Description
limit | No | Maximum number of results (default 10, max 50).
query | Yes | Search query (e.g. 'population', 'housing', 'inflation', 'employment').
offset | No | Number of results to skip for pagination (default 0).
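The schema states explicit defaults and bounds (limit defaults to 10 with a max of 50, offset defaults to 0), which a caller can normalise before the request. A sketch only; the server may reject rather than clamp out-of-range values, and the function name is an assumption.

```python
def normalise_paging(limit=None, offset=None):
    """Apply the documented defaults and bounds for
    ons-search-datasets-5b5 pagination parameters."""
    limit = 10 if limit is None else max(1, min(limit, 50))
    offset = 0 if offset is None else max(0, offset)
    return {"limit": limit, "offset": offset}
```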
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context by specifying the types of content returned ('datasets, articles, and bulletins') and the fields included ('titles, descriptions, and links'), which goes beyond the annotations to clarify output behavior without contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose, method, and return types without unnecessary words. It is front-loaded with the core action and resource, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search with pagination), rich annotations (read-only, idempotent), and full schema coverage, the description is largely complete. However, the lack of an output schema means the description could benefit from more detail on result structure or error handling, though it adequately covers the essentials for a search operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for 'query', 'limit', and 'offset' parameters. The description does not add any additional parameter semantics beyond what the schema provides, such as search syntax or result ordering details, so it meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search'), target resource ('Office for National Statistics for datasets and content'), and scope ('by keyword'), distinguishing it from sibling tools like 'ons-list-datasets-6f5' or 'ons-get-dataset-5b5' which perform listing or retrieval operations rather than keyword-based search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying it searches 'by keyword' and returns 'matching datasets, articles, and bulletins', but does not explicitly state when to use this tool versus alternatives like 'ons-search-6f5' or 'ons-list-datasets-6f5'. The guidance is functional but lacks explicit comparison or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

parl-get-member-360 (A)
Read-only · Idempotent

Get full details for one UK Parliament member by ID: names, party, house membership, and thumbnail URL.

Parameters (JSON Schema)
Name | Required | Description
member_id | Yes | Parliament member ID (from search or external references).
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, indicating safe, repeatable reads. The description adds context by specifying the scope ('full details' vs. synopsis) and output fields, which helps the agent understand what data to expect. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys purpose, resource, and output fields without unnecessary words. It is front-loaded with the core action and provides all essential information concisely.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 parameter, no output schema), annotations cover safety and idempotency, and the description specifies output fields and scope. However, without an output schema, the description could benefit from more detail on return format (e.g., structure of 'full details'), but it is largely complete for a read-only fetch tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'member_id' fully documented in the schema. The description does not add any additional semantic details beyond what the schema provides (e.g., format examples or constraints), so it meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get full details'), the resource ('one UK Parliament member'), and the scope ('by ID') with explicit output fields ('names, party, house membership, and thumbnail URL'). It distinguishes from sibling tools like 'parl-get-member-synopsis-360' by specifying 'full details' versus likely a summary.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage ('by ID') and implies when to use it (for detailed member information). However, it does not explicitly state when not to use it or name alternatives (e.g., 'parl-search-members-360' for searching instead of fetching by ID), which prevents a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

parl-get-member-synopsis-360 (A)
Read-only · Idempotent

Get the official plain-language synopsis for a member (HTML string with links). Use after resolving an ID via member search.

Parameters (JSON Schema)
Name | Required | Description
member_id | Yes | Parliament member ID.
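Since the synopsis arrives as an HTML string with links, a consumer that wants plain text has to strip the markup. A minimal standard-library sketch, assuming the result is well-formed HTML; the sample input string is invented for illustration.

```python
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Collects text nodes, discarding tags such as <a> links."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

def synopsis_to_text(html):
    """Flatten a synopsis HTML string to plain text."""
    extractor = _TextExtractor()
    extractor.feed(html)
    return "".join(extractor.parts)

text = synopsis_to_text('<p>Served as <a href="/mp/1">MP</a> since 2015.</p>')
```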
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds context about the output format ('HTML string with links'), which is useful beyond annotations, but does not detail rate limits, authentication needs, or error behaviors. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences that are front-loaded with the core purpose and followed by usage guidance. Every word earns its place with no redundancy or fluff, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only tool with one parameter and good annotations, the description is mostly complete. It clarifies the output format (HTML with links) and usage context, though lacks details on response structure or error handling, which would be beneficial given no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'member_id' fully documented in the schema as 'Parliament member ID'. The description does not add further meaning or syntax details beyond this, so it meets the baseline for high schema coverage without compensating value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and the resource 'official plain-language synopsis for a member', specifying it returns an HTML string with links. It distinguishes from sibling 'parl-get-member-360' by focusing on synopsis rather than general member data, though not explicitly contrasting other parliament tools like search functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly states 'Use after resolving an ID via member search', providing clear when-to-use guidance by naming the prerequisite step ('member search') and indicating this tool is for post-resolution retrieval. This helps differentiate from sibling tools like 'parl-search-members-360'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

parl-search-constituencies-360 (A)
Read-only · Idempotent

Search UK Westminster constituencies by name. Results may include current MP representation where available.

Parameters (JSON Schema)
Name | Required | Description
skip | No | Number of results to skip for pagination (default 0).
take | No | Page size (default 5, max 20).
search_text | Yes | Constituency name or fragment to search.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety aspects. The description adds valuable behavioral context beyond annotations by specifying that results 'may include current MP representation where available', which helps set expectations about partial data availability. No contradictions with annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two concise sentences that are front-loaded with the core purpose and efficiently add supplementary context about MP representation. Every word serves a purpose with zero wasted text or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with comprehensive annotations and full schema coverage, the description provides adequate context about what's being searched and what data might be included. However, without an output schema, it could benefit from more detail about the structure of returned results beyond the mention of MP representation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, all three parameters are well-documented in the schema itself. The description doesn't add any parameter-specific information beyond what's already in the schema descriptions, so it meets the baseline expectation without providing extra semantic value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search'), resource ('UK Westminster constituencies'), and scope ('by name'), distinguishing it from sibling tools like 'parl-search-members-360' which searches for members rather than constituencies. It also adds valuable context about including MP representation where available.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying 'UK Westminster constituencies' and 'by name', but doesn't explicitly state when to use this tool versus alternatives like 'ec-search-parl-results-b3f' or provide exclusion criteria. The guidance is clear but lacks explicit comparison with sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

parl-search-members-360 (A)
Read-only · Idempotent

Search UK Parliament members (Commons and Lords) by name. Returns a paginated list with party, house, and membership status.

Parameters (JSON Schema)
Name | Required | Description
name | Yes | Name text to search (e.g. surname or full name).
skip | No | Number of results to skip for pagination (default 0).
take | No | Page size (default 5, max 20).
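The skip/take pair supports offset pagination, so an agent exhausting a search can loop until it receives a short or empty page. A sketch under two assumptions: `call_tool` stands in for whatever invocation method the MCP client provides, and a page shorter than `take` signals the end of results (the server's actual end-of-results behaviour is not documented here). The demo runs against an invented in-memory result set.

```python
def iter_member_pages(call_tool, name, take=20):
    """Page through parl-search-members-360 results via skip/take."""
    take = min(take, 20)  # schema caps page size at 20
    skip = 0
    while True:
        page = call_tool("parl-search-members-360",
                         {"name": name, "skip": skip, "take": take})
        if not page:
            return
        yield page
        if len(page) < take:  # short page: assume no more results
            return
        skip += take

# Demo against a fake 45-item result set (purely illustrative).
_fake_members = [f"Member {i}" for i in range(45)]

def _fake_call(tool, args):
    return _fake_members[args["skip"]:args["skip"] + args["take"]]

pages = list(iter_member_pages(_fake_call, "smith"))
```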
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, indicating safe, repeatable operations. The description adds value by specifying the return content ('party, house, and membership status') and pagination behavior, which annotations do not cover. No contradictions exist between description and annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action ('Search UK Parliament members') and includes essential details (scope, return content, pagination). Every word contributes meaning without redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with 3 parameters, 100% schema coverage, and annotations covering safety, the description provides adequate context by specifying scope and return fields. However, without an output schema, it could benefit from more detail on result structure or error handling, slightly limiting completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear documentation for 'name', 'skip', and 'take' parameters. The description mentions 'search by name' and 'paginated list', aligning with the schema but not adding significant meaning beyond it. Baseline score of 3 is appropriate as the schema carries the primary documentation burden.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Search'), resource ('UK Parliament members'), and scope ('Commons and Lords'), making the purpose specific. It distinguishes from sibling tools like 'parl-get-member-360' (which retrieves a specific member) by emphasizing search functionality and paginated results.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for searching members by name, but does not explicitly state when to use this tool versus alternatives like 'parl-get-member-360' (for detailed info on a known member) or 'parl-search-constituencies-360' (for constituency searches). No exclusions or prerequisites are mentioned, leaving usage context partially inferred.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tna-get-archive-7d6 (A)
Read-only · Idempotent

Retrieve detailed information for a specific archive, library, or repository from the ARCHON directory. Returns contact details (address, phone, email, website), opening hours, access requirements, disabled access information, and a list of record collections held. Use the 'id' field from archive search results.

Parameters (JSON Schema)
Name | Required | Description
archive_id | Yes | Archive/repository ID (e.g. 'A13532670'). Use the 'id' field from archive search results.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows this is a safe, read-only operation. The description adds useful context beyond annotations by specifying the return data (contact details, opening hours, etc.), but does not disclose behavioral traits like rate limits, authentication needs, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with two sentences: the first states the purpose and return data, the second provides usage guidance. It is front-loaded with key information, though the list of return details could be slightly condensed for better flow.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema), the description is mostly complete. It explains what the tool does, what it returns, and how to use the parameter. However, it lacks details on output format (e.g., JSON structure) or error cases, which would be helpful since there's no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'archive_id' fully documented in the schema. The description adds minimal value by repeating 'Use the 'id' field from archive search results', which is already in the schema description. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'retrieve' and the resource 'detailed information for a specific archive, library, or repository from the ARCHON directory'. It distinguishes from sibling tools like 'tna-search-archives-7d6' (search) and 'tna-get-record-7d6' (get records vs. archives) by specifying it's for archive details, not searching or handling records.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use this tool: 'Use the 'id' field from archive search results', implying it should be used after a search to get details for a specific archive. However, it does not explicitly state when not to use it or name alternatives (e.g., 'tna-search-archives-7d6' for searching instead of retrieving details).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tna-get-record-7d6 (A)
Read-only · Idempotent

Retrieve detailed metadata for a specific record from The National Archives catalogue. Returns extended information including scope and content, legal status, creator names, citable reference, related material, and access conditions. Use the record 'id' field (e.g. 'C7394009') from search results.

Parameters (JSON Schema)
Name | Required | Description
record_id | Yes | Record ID from the catalogue (e.g. 'C7394009'). Use the 'id' field from search results.
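The description implies a two-step workflow: resolve a record 'id' via search, then fetch full metadata with it. A sketch of that chain, where `call_tool` and the search result shape (a list of dicts with an 'id' key) are assumptions rather than part of this server's documented contract; the demo uses an invented stub response.

```python
def fetch_first_record(call_tool, query):
    """Search tna-search-records-7d6, then fetch the top hit's
    detailed metadata via tna-get-record-7d6."""
    hits = call_tool("tna-search-records-7d6", {"query": query})
    if not hits:
        return None
    record_id = hits[0]["id"]  # e.g. 'C7394009'
    return call_tool("tna-get-record-7d6", {"record_id": record_id})

# Demo with a stubbed client (fake data, purely illustrative).
def _fake_call(tool, args):
    if tool == "tna-search-records-7d6":
        return [{"id": "C7394009", "title": "Example record"}]
    return {"record_id": args["record_id"], "held_by": "The National Archives"}

record = fetch_first_record(_fake_call, "domesday")
```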
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, and non-destructive behavior, which the description does not repeat. The description adds valuable context beyond annotations by specifying the type of metadata returned (e.g., scope and content, legal status) and clarifying the source of the record_id ('from search results'), enhancing the agent's understanding of the tool's behavior and output.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by specific details and usage guidance. Every sentence adds value: the first defines the action and resource, the second lists metadata fields, and the third provides parameter context. No wasted words, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema), rich annotations (read-only, idempotent), and high schema coverage, the description is mostly complete. It covers purpose, usage, and output details. However, without an output schema, it could benefit from more specifics on return format (e.g., JSON structure), slightly limiting completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, clearly documenting the single required parameter 'record_id' with examples. The description adds minimal semantics by reinforcing the parameter's purpose ('from search results') and providing an example, but this is largely redundant with the schema. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Retrieve detailed metadata'), target resource ('a specific record from The National Archives catalogue'), and distinguishes it from siblings by specifying it returns 'extended information' for a single record, unlike search tools like 'tna-search-records-7d6' that return multiple results. It provides concrete examples of metadata fields (e.g., scope and content, legal status).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'Use the record 'id' field (e.g., 'C7394009') from search results,' implying it should be used after a search to get detailed metadata for a specific record. However, it does not explicitly mention when not to use it or name alternatives (e.g., 'tna-search-records-7d6' for searching), which prevents a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tna-search-archives-7d6 (grade A)
Read-only, Idempotent

Search the ARCHON directory of 2,500+ archives, libraries, museums, and repositories across the UK. Returns repository names, addresses, and identifiers. Use the repository 'id' from results with the get_archive tool for full contact details, opening hours, and access information.
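A small helper, sketched against the schema above, that normalises arguments before calling the tool. Clamping page_size to the documented 1-100 range is a defensive client-side choice, not documented server behaviour.

```python
def archive_search_args(query, page=1, page_size=20):
    # page_size is documented as 1-100; clamp out-of-range values
    page_size = max(1, min(100, page_size))
    # pages are 1-indexed per the schema
    page = max(1, page)
    return {"query": query, "page": page, "page_size": page_size}
```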

Parameters (JSON Schema)
- query (required): Search query for archives and repositories (e.g. 'london', 'university', 'military').
- page (optional): Page number (1-indexed).
- page_size (optional): Number of results per page (1-100).

Behavior: 4/5

Annotations already indicate readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context beyond this: it specifies the data source ('ARCHON directory'), the scale ('2,500+ archives, libraries, museums, and repositories across the UK'), and the output format ('repository names, addresses, and identifiers'). However, it doesn't mention potential limitations like rate limits or authentication needs, which could be relevant for a search tool.

Conciseness: 5/5

The description is two sentences, front-loaded with the core purpose and output, followed by a crucial usage guideline linking to a sibling tool. Every sentence adds essential information with zero waste, making it highly efficient and well-structured.

Completeness: 4/5

Given the tool's moderate complexity (search with pagination), rich annotations (read-only, idempotent, non-destructive), and no output schema, the description is mostly complete. It covers purpose, scope, output format, and sibling integration. However, it lacks details on response structure (e.g., pagination metadata) or error handling, which could be helpful for an agent.

Parameters: 3/5

Schema description coverage is 100%, with clear descriptions for all three parameters (query, page, page_size). The description doesn't add any parameter-specific details beyond what the schema provides, such as search syntax examples or pagination behavior. This meets the baseline of 3 when the schema does the heavy lifting.

Purpose: 5/5

The description clearly states the specific action ('Search the ARCHON directory'), resource ('2,500+ archives, libraries, museums, and repositories across the UK'), and output ('Returns repository names, addresses, and identifiers'). It explicitly distinguishes from its sibling tool 'tna-get-archive-7d6' by mentioning how to use the 'id' from results for more details.

Usage Guidelines: 5/5

The description provides explicit guidance on when to use this tool ('Search the ARCHON directory') and when to use an alternative ('Use the repository 'id' from results with the get_archive tool for full contact details, opening hours, and access information'). This clearly defines the tool's scope relative to its sibling.

tna-search-records-7d6 (grade A)
Read-only, Idempotent

Search The National Archives Discovery catalogue for archival records. Covers 32+ million record descriptions spanning 1,000+ years of UK government and public records. Supports filtering by date range and department. Returns records with titles, references, dates, descriptions, and holding institutions.
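Since the date filters use a strict YYYY-MM-DD format, a client can validate arguments before invoking the tool. This is a sketch of one such helper; the validation logic is mine, not part of the server.

```python
from datetime import date

def record_search_args(query, date_from=None, date_to=None, department=None):
    # Build an arguments dict for the search tool, validating the
    # documented YYYY-MM-DD date format before sending anything.
    args = {"query": query}
    for key, value in (("date_from", date_from), ("date_to", date_to)):
        if value is not None:
            date.fromisoformat(value)  # raises ValueError on a bad format
            args[key] = value
    if date_from and date_to and date_from > date_to:
        # ISO dates compare correctly as strings
        raise ValueError("date_from must not be later than date_to")
    if department is not None:
        args["department"] = department  # e.g. 'WO', 'FO', 'HO'
    return args
```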

Parameters (JSON Schema)
- query (required): Search query for archive records (e.g. 'world war', 'magna carta', 'census 1841').
- page (optional): Page number (1-indexed).
- page_size (optional): Number of results per page (1-100).
- date_from (optional): Start date filter in YYYY-MM-DD format (e.g. '1914-01-01').
- date_to (optional): End date filter in YYYY-MM-DD format (e.g. '1918-12-31').
- department (optional): Filter by archive department code (e.g. 'WO' for War Office, 'FO' for Foreign Office, 'HO' for Home Office).

Behavior: 4/5

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering basic safety and idempotency. The description adds valuable context beyond annotations by specifying the coverage ('32+ million records spanning 1,000+ years'), supported filters ('date range and department'), and return format ('titles, references, dates, descriptions, and holding institutions'). It doesn't mention rate limits or authentication requirements, but provides useful behavioral details.

Conciseness: 5/5

The description is perfectly structured in three sentences: first establishes purpose and scope, second specifies filtering capabilities, third details return format. Every sentence earns its place with zero wasted words, making it front-loaded and efficiently informative.

Completeness: 4/5

For a search tool with comprehensive annotations (read-only, idempotent, non-destructive) and full parameter documentation, the description provides excellent context about coverage, filtering, and return format. The main gap is the absence of an output schema, but the description compensates by listing what fields are returned. It could benefit from mentioning pagination behavior or result limits.

Parameters: 3/5

With 100% schema description coverage, all 6 parameters are well-documented in the input schema. The description mentions filtering by 'date range and department' which aligns with parameters 'date_from', 'date_to', and 'department', but adds no additional semantic context beyond what the schema provides. The baseline score of 3 is appropriate when the schema does the heavy lifting.

Purpose: 5/5

The description clearly states the specific action ('Search'), resource ('The National Archives Discovery catalogue for archival records'), and scope ('32+ million record descriptions spanning 1,000+ years of UK government and public records'). It distinguishes itself from sibling tools like 'tna-get-archive-7d6' and 'tna-get-record-7d6' by emphasizing search functionality rather than retrieval of specific items.

Usage Guidelines: 3/5

The description implies usage for searching archival records with filtering capabilities, but provides no explicit guidance on when to use this tool versus alternatives like 'tna-search-archives-7d6' or 'tna-get-record-7d6'. There's no mention of prerequisites, limitations, or comparative scenarios that would help an agent choose between similar tools.

uk-police-crime-categories-5fa (grade A)
Read-only, Idempotent

List all valid crime categories used in UK Police crime data.

Returns the canonical set of crime category identifiers and their human-readable names (e.g. 'anti-social-behaviour', 'burglary', 'violent-crime'). Use these identifiers to filter street-level crime queries.
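The UK Police tools on this server appear to wrap the public data.police.uk API (no API key required). Assuming that, the underlying request for this tool can be sketched as a URL builder against the documented /crime-categories endpoint:

```python
from urllib.parse import urlencode

POLICE_API = "https://data.police.uk/api"  # public API, no key required

def crime_categories_url(date=None):
    # Omitting date yields the latest available month's categories.
    query = "?" + urlencode({"date": date}) if date else ""
    return POLICE_API + "/crime-categories" + query
```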

Parameters (JSON Schema)
- date (optional): Month in YYYY-MM format to get categories for. Omit for the latest available.

Behavior: 4/5

Annotations already indicate read-only, idempotent, and non-destructive behavior. The description adds value by specifying the return format ('canonical set of crime category identifiers and their human-readable names') and the optional date parameter usage, which provides useful context beyond annotations. No contradiction with annotations is present.

Conciseness: 5/5

The description is two sentences, front-loaded with the core purpose and followed by usage guidance. Every sentence adds value without redundancy, making it efficient and well-structured for quick understanding.

Completeness: 4/5

Given the tool's low complexity (0 required parameters, no output schema), the description is mostly complete. It covers purpose, return format, and usage context. However, it could slightly improve by mentioning the optional date parameter's effect more explicitly, but annotations and schema provide sufficient support for agent invocation.

Parameters: 3/5

The input schema has 100% description coverage, with the 'date' parameter documented as optional for month-specific categories. The description mentions using identifiers to filter queries but does not add detailed semantics beyond the schema, such as format examples or default behavior when omitted, so it meets the baseline for high schema coverage.

Purpose: 5/5

The description clearly states the verb ('List') and resource ('all valid crime categories used in UK Police crime data'), specifying it returns identifiers and names. It distinguishes from siblings like 'uk-police-street-crimes-5fa' by focusing on metadata rather than actual crime data, making the purpose specific and differentiated.

Usage Guidelines: 4/5

The description provides clear context for when to use this tool: to get canonical crime categories for filtering street-level crime queries. It implicitly distinguishes from siblings by its metadata focus, but does not explicitly state when not to use it or name alternatives, such as not using it for actual crime data retrieval.

uk-police-crimes-at-location-5fa (grade A)
Read-only, Idempotent

Retrieve all crimes at a specific named location by its location ID.

Use a location_id obtained from street-level crime results (the street.id field) to get all crimes snapped to that same anonymous map point.
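Assuming this tool wraps the public data.police.uk /crimes-at-location endpoint, the chained call can be sketched as a URL builder; the location_id value used in practice would come from a prior street-level result.

```python
from urllib.parse import urlencode

def crimes_at_location_url(location_id, date=None):
    # location_id is the street.id value taken from a street-level result
    params = {"location_id": location_id}
    if date:
        params["date"] = date  # YYYY-MM
    return "https://data.police.uk/api/crimes-at-location?" + urlencode(params)
```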

Parameters (JSON Schema)
- location_id (required): Location ID from street-level crime data (the street.id field)
- date (optional): Month to query in YYYY-MM format (e.g. 2024-01)

Behavior: 4/5

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context about the data source ('street-level crime results') and the mapping behavior ('snapped to that same anonymous map point'), which helps the agent understand how location data is processed beyond what annotations provide.

Conciseness: 5/5

The description is two concise sentences with zero waste. The first sentence states the purpose, and the second provides usage guidance, both front-loaded and efficiently structured without unnecessary details.

Completeness: 4/5

Given the tool's moderate complexity (2 parameters, 1 required), rich annotations (readOnly, idempotent, non-destructive), and 100% schema coverage, the description is largely complete. It explains the tool's purpose, usage, and data context well. A minor gap is the lack of output schema, but the description compensates by clarifying the data source and mapping behavior.

Parameters: 3/5

Schema description coverage is 100%, with both parameters clearly documented in the schema. The description reinforces the purpose of 'location_id' ('from street-level crime data') but doesn't add significant semantic details beyond what the schema already states. This meets the baseline of 3 when schema coverage is high.

Purpose: 5/5

The description clearly states the verb ('Retrieve all crimes') and resource ('at a specific named location by its location ID'), making the purpose specific. It distinguishes from sibling tools like 'uk-police-street-crimes-5fa' by focusing on crimes at a pre-identified location ID rather than street-level queries.

Usage Guidelines: 5/5

The description explicitly states when to use this tool ('Use a location_id obtained from street-level crime results') and provides context on the data source ('the street.id field'). It differentiates from alternatives by specifying this tool is for crimes 'snapped to that same anonymous map point' rather than other crime-related tools.

uk-police-get-force-5fa (grade A)
Read-only, Idempotent

Get detailed information about a specific police force.

Returns the force's name, description, website URL, telephone number, and engagement methods (social media, RSS feeds, etc.). Use a force_id from the list forces tool.
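Assuming the tool wraps data.police.uk, force details live at a simple path-parameter endpoint; a minimal sketch:

```python
def force_url(force_id):
    # force_id is a slug such as 'leicestershire' or 'city-of-london'
    return "https://data.police.uk/api/forces/" + force_id
```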

Parameters (JSON Schema)
- force_id (required): Force identifier (e.g. 'leicestershire', 'metropolitan', 'city-of-london')

Behavior: 4/5

Annotations already provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering the basic safety profile. The description adds valuable context by specifying the return data structure (force's name, description, website URL, telephone number, engagement methods) and the prerequisite relationship with the list forces tool. It doesn't contradict annotations and provides useful behavioral information beyond what annotations offer.

Conciseness: 5/5

The description is perfectly concise with two sentences that each serve distinct purposes: the first states the tool's purpose and return values, the second provides crucial usage guidance. There is zero wasted language, and the information is front-loaded with the core functionality.

Completeness: 4/5

For a single-parameter read-only tool with comprehensive annotations and no output schema, the description provides excellent context: it explains what data is returned, specifies the prerequisite relationship with another tool, and clearly states the tool's purpose. The only minor gap is not explicitly mentioning that this is a read-only operation, though annotations cover this. Overall, it's highly complete for its complexity level.

Parameters: 3/5

The input schema has 100% description coverage, with the force_id parameter fully documented in the schema. The description adds minimal value beyond the schema by mentioning 'force_id' but doesn't provide additional semantic context. With complete schema coverage, the baseline score of 3 is appropriate as the schema carries the parameter documentation burden.

Purpose: 5/5

The description clearly states the tool's purpose with a specific verb ('Get detailed information') and resource ('about a specific police force'). It distinguishes from its sibling 'uk-police-list-forces-5fa' by specifying this is for detailed information about a single force rather than listing all forces.

Usage Guidelines: 5/5

The description provides explicit guidance on when to use this tool: 'Use a force_id from the list forces tool.' This directly references the sibling tool 'uk-police-list-forces-5fa' as the prerequisite source for the required parameter, creating clear usage context and distinguishing from alternatives.

uk-police-list-forces-5fa (grade A)
Read-only, Idempotent

List all territorial police forces in England, Wales, and Northern Ireland.

Returns each force's ID and name. Use the force ID with the force details tool to get contact information, engagement methods, and other metadata.
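The list-then-detail pattern described here can be sketched as follows, assuming the tools map onto the public data.police.uk /forces endpoints. Each element of the list response carries the 'id' needed for the detail lookup.

```python
FORCES_URL = "https://data.police.uk/api/forces"

def detail_url(force):
    # 'force' is one entry from the list response,
    # e.g. {"id": "leicestershire", "name": "Leicestershire Police"}
    return FORCES_URL + "/" + force["id"]
```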

Parameters (JSON Schema): none.

Behavior: 4/5

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context about the return format ('Returns each force's ID and name') and the relationship to another tool, which enhances understanding beyond annotations.

Conciseness: 5/5

Two sentences with zero waste: the first states purpose and scope, the second explains output and next steps. It's front-loaded with essential information and efficiently structured for quick comprehension.

Completeness: 5/5

Given the tool's low complexity (0 parameters, no output schema), rich annotations, and clear sibling differentiation, the description is complete. It covers purpose, usage, output format, and integration with other tools, leaving no gaps for this simple list operation.

Parameters: 4/5

With 0 parameters and 100% schema description coverage, the baseline is 4. The description appropriately notes no parameters are needed ('List all...') and doesn't add unnecessary details, aligning with the empty input schema.

Purpose: 5/5

The description clearly states the specific action ('List all territorial police forces') and resource ('in England, Wales, and Northern Ireland'), distinguishing it from sibling tools like 'uk-police-get-force-5fa' which retrieves details for a specific force. It precisely defines scope and output format.

Usage Guidelines: 5/5

Explicitly provides when to use this tool ('List all...') and when to use an alternative ('Use the force ID with the force details tool...'), naming the specific sibling tool 'uk-police-get-force-5fa' for detailed metadata. This gives clear guidance on tool selection.

uk-police-outcomes-at-location-5fa (grade A)
Read-only, Idempotent

Retrieve crime outcomes near a geographic location for a given month.

Returns outcome/resolution data for crimes including the outcome category (e.g. 'Investigation complete; no suspect identified', 'Offender given a caution'), date, and the associated crime details.
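Assuming the tool wraps the public data.police.uk /outcomes-at-location endpoint, a request URL can be sketched from the lat/lng/date parameters:

```python
from urllib.parse import urlencode

def outcomes_url(lat, lng, date=None):
    # Coordinates are plain decimal degrees; date is YYYY-MM
    params = {"lat": lat, "lng": lng}
    if date:
        params["date"] = date
    return "https://data.police.uk/api/outcomes-at-location?" + urlencode(params)
```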

Parameters (JSON Schema)
- lat (required): Latitude of the location to search around
- lng (required): Longitude of the location to search around
- date (optional): Month to query in YYYY-MM format (e.g. 2024-01)

Behavior: 4/5

Annotations already provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds value by specifying the return content ('outcome/resolution data for crimes including the outcome category, date, and associated crime details'), which is not covered by annotations, enhancing behavioral understanding.

Conciseness: 5/5

The description is efficiently structured in two sentences: the first states the purpose and parameters, and the second details the return data. Every sentence adds essential information without redundancy, making it front-loaded and concise.

Completeness: 4/5

Given the tool's moderate complexity, rich annotations, and 100% schema coverage, the description is largely complete. It explains the purpose and return data, though it lacks output schema details. However, with no output schema provided, the description compensates adequately by specifying return content, leaving minor gaps in usage guidelines.

Parameters: 3/5

Schema description coverage is 100%, with clear descriptions for lat, lng, and date parameters. The description adds minimal semantic context by mentioning 'geographic location' and 'month,' but does not provide additional details beyond what the schema already covers, aligning with the baseline for high schema coverage.

Purpose: 5/5

The description clearly states the specific action ('Retrieve crime outcomes'), resource ('near a geographic location'), and scope ('for a given month'). It distinguishes from sibling tools like 'uk-police-crimes-at-location-5fa' by focusing on outcomes rather than crimes themselves, providing clear differentiation.

Usage Guidelines: 3/5

The description implies usage context by specifying 'near a geographic location for a given month,' but does not explicitly state when to use this tool versus alternatives like 'uk-police-crimes-at-location-5fa' or other sibling tools. No exclusions or prerequisites are mentioned, leaving usage guidance incomplete.

uk-police-stop-and-search-5fa (grade A)
Read-only, Idempotent

Search stop and search records near a geographic location for a given month.

Returns data on police stop-and-search encounters including the person's age range, gender, ethnicity, the object of search, outcome, and whether clothing removal was required. Not all forces provide stop-and-search data.
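A client-side argument builder for this tool's schema, sketched with a format check for the YYYY-MM month string; the regex validation is my addition, not server behaviour.

```python
import re

def stops_args(lat, lng, date=None):
    # Arguments for the stop-and-search tool per its documented schema
    args = {"lat": lat, "lng": lng}
    if date is not None:
        if not re.fullmatch(r"\d{4}-(0[1-9]|1[0-2])", date):
            raise ValueError("date must be in YYYY-MM format, e.g. 2024-01")
        args["date"] = date
    return args
```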

Parameters (JSON Schema)
- lat (required): Latitude of the location to search around
- lng (required): Longitude of the location to search around
- date (optional): Month to query in YYYY-MM format (e.g. 2024-01)

Behavior: 4/5

Annotations already provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context beyond annotations: it specifies the return data fields (age range, gender, ethnicity, etc.), mentions that not all forces provide data (a limitation), and clarifies the geographic and temporal scope. No contradiction with annotations exists.

Conciseness: 5/5

The description is efficiently structured in two sentences: the first states the purpose and parameters, the second details the return data and a key limitation. Every sentence adds essential information with zero wasted words, making it front-loaded and highly concise.

Completeness: 4/5

Given the tool's moderate complexity (3 parameters, no output schema), the description is largely complete: it covers purpose, parameters, return data, and a limitation. However, it lacks details on response format, pagination, or error handling, which would be helpful for full completeness. Annotations provide safety context, compensating somewhat.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for lat, lng, and date parameters. The description adds context by explaining that these parameters define 'a geographic location' and 'a given month', but does not provide additional syntax, format details, or constraints beyond what the schema already documents. Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search stop and search records'), target resource ('near a geographic location for a given month'), and distinguishes from siblings by focusing on stop-and-search data rather than crimes, forces, or other police data. It explicitly mentions what data is returned, making the purpose highly specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying geographic and temporal parameters, but does not explicitly state when to use this tool versus alternatives like 'uk-police-street-crimes-5fa' or other police tools. The note about 'Not all forces provide stop-and-search data' offers some guidance but lacks explicit comparisons or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

uk-police-street-crimes-5fa (A)
Read-only · Idempotent

Search street-level crime reports near a geographic location for a given month.

Returns anonymised crime data snapped to nearby street points, including crime category, location details, outcome status, and month. Coordinates are for England, Wales, and Northern Ireland.

High-crime areas may return thousands of records. Use a specific date and category to narrow results.

Parameters (JSON Schema)
Name | Required | Description | Default
lat | Yes | Latitude of the location to search around | —
lng | Yes | Longitude of the location to search around | —
date | No | Month to query in YYYY-MM format (e.g. 2024-01). Data lags ~2 months. | —
category | No | Crime category filter (e.g. 'burglary', 'violent-crime'). Omit for all crimes. | —
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, and non-destructive behavior, but the description adds valuable context beyond this: it specifies that data is 'anonymised' and 'snapped to nearby street points,' notes geographic limitations ('Coordinates are for England, Wales, and Northern Ireland'), and warns about data lag ('Data lags ~2 months'). This enhances understanding of the tool's operational behavior without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by supporting details in a logical flow (returns, geographic scope, usage tips). Each sentence adds value without redundancy, such as the warning about high-crime areas and narrowing strategies, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, no output schema) and rich annotations, the description is largely complete: it covers purpose, behavioral nuances, and usage tips. However, it lacks details on output structure (e.g. what 'anonymised crime data' includes beyond the listed fields) and error handling; since there is no output schema, these omissions leave minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema fully documents all four parameters (lat, lng, date, category). The description adds minimal semantic context, such as implying that 'date' and 'category' are optional filters to narrow results, but does not provide additional details like format examples beyond the schema's 'YYYY-MM' or explain parameter interactions. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search street-level crime reports'), resource ('near a geographic location'), and scope ('for a given month'), distinguishing it from sibling tools like 'uk-police-crime-categories-5fa' or 'uk-police-crimes-at-location-5fa' by focusing on street-level data with geographic and temporal filtering.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use this tool (e.g., 'High-crime areas may return thousands of records. Use a specific date and category to narrow results'), offering practical guidance for narrowing results. However, it does not explicitly mention when not to use it or name specific alternatives among siblings, such as 'uk-police-crimes-at-location-5fa' for different crime data types.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
