
UK Environmental Intelligence MCP Server from MCPBundles

Ownership verified

Server Details

Access UK flood warnings, river levels, water quality, Met Office forecasts, and carbon data

Status: Unhealthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: thinkchainai/mcpbundles
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.1/5 across 44 of 44 tools scored. Lowest: 3.2/5.

Server Coherence: A
Disambiguation: 4/5

The tools are well-organized into distinct domains (carbon, EA bathing water, EA ecology, EA flood, EA groundwater, EA hydrology, EA rainfall, EA water quality, Met Office), with clear boundaries between domains. Within domains, tools are differentiated by resource and action (e.g., list_sites vs. get_site), though some overlap exists (e.g., ea-flood-areas-621 and ea-flood-get-areas-930 appear similar, and multiple flood warning tools could cause minor confusion).

Naming Consistency: 3/5

Naming follows a general pattern of prefix-action-resource (e.g., ea-bw-list-sites), but there is inconsistency in verb usage (e.g., 'get' vs. 'list' vs. 'search' for similar actions) and some deviations like 'carbon-generation-mix-7e3' (no verb) and 'mo-find-nearest-station-e8b' (verb-noun order differs). The mix of conventions reduces predictability, though prefixes help group related tools.

Tool Count: 2/5

With 44 tools, the count is excessive for a single server, making it cumbersome for agents to navigate. While the server covers broad environmental domains, the high number of tools could be split into multiple focused servers (e.g., separate servers for carbon, EA data, Met Office) to improve usability and reduce cognitive load.

Completeness: 5/5

The tool set provides comprehensive coverage across multiple environmental domains, including data retrieval for carbon intensity, bathing water quality, ecology, flood monitoring, groundwater, hydrology, rainfall, water quality, and weather forecasts/observations. Each domain offers CRUD-like operations (primarily read/list/get) with filtering options, and tools link together effectively (e.g., list sites then get details), leaving no obvious gaps for the intended purposes.

Available Tools

44 tools
carbon-generation-mix-7e3 (A)
Read-only · Idempotent

Get the current electricity generation fuel mix for Great Britain — percentage breakdown by fuel type: biomass, coal, imports, gas, nuclear, other, hydro, solar, wind. Optionally filter by regional DNO area (1-14).

Parameters (JSON Schema)
- region_id (optional): Optional region ID (1-14) for regional generation mix. Omit for national mix.
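Since the transport is Streamable HTTP, a client invokes this tool by posting a JSON-RPC `tools/call` request. A minimal sketch of the request body (the helper name `build_tool_call` is illustrative; only the tool name and the `region_id` argument come from the listing):

```python
import json

def build_tool_call(name, arguments, request_id=1):
    """Build an MCP tools/call request body (JSON-RPC 2.0)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# National mix: omit region_id entirely rather than sending null.
national = build_tool_call("carbon-generation-mix-7e3", {})
# Regional mix for DNO area 13 (London).
regional = build_tool_call("carbon-generation-mix-7e3", {"region_id": 13})
```

The body would be POSTed to the server's endpoint URL (not shown on this page) with the usual MCP session headers.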
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, indicating safe, repeatable read operations. The description adds useful context about the optional regional filtering and the specific fuel types returned, which helps the agent understand the tool's behavior beyond the annotations. No contradictions with annotations are present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that front-loads the core purpose and efficiently lists fuel types and the optional filter. Every word contributes to clarity without redundancy, making it highly concise and effective.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 optional parameter, no output schema), the description is nearly complete. It covers the purpose, output format, and optional filtering. However, it lacks details on data freshness (e.g., 'current' could mean real-time or recent) and any potential limitations (e.g., update frequency), which would enhance completeness for a data retrieval tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'region_id' fully documented in the schema (optional integer 1-14 for regional mix). The description adds minimal value by mentioning the optional filter and regional DNO area, but does not provide additional semantics beyond what the schema already covers. With 0 required parameters, the baseline is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get'), resource ('electricity generation fuel mix for Great Britain'), and output format ('percentage breakdown by fuel type: biomass, coal, imports, gas, nuclear, other, hydro, solar, wind'). It distinguishes itself from sibling tools by focusing on generation mix rather than carbon intensity, flood data, or other environmental metrics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage by specifying the optional regional filter ('Optionally filter by regional DNO area (1-14)') and implying national vs. regional scope. However, it does not explicitly mention when to use this tool versus alternatives like carbon-intensity tools or other environmental data tools, though the distinction is reasonably inferable from the purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

carbon-intensity-current-7e3 (A)
Read-only · Idempotent

Get the current carbon intensity (gCO2/kWh) of Great Britain's electricity grid for the current half-hour settlement period. Returns forecast and actual values plus an index (very low / low / moderate / high / very high). Optionally filter by one of 14 regional DNO areas.

Parameters (JSON Schema)
- region_id (optional): Optional region ID (1-14) to get regional intensity. 1=North Scotland, 2=South Scotland, 3=North West England, 4=North East England, 5=Yorkshire, 6=North Wales & Merseyside, 7=South Wales, 8=West Midlands, 9=East Midlands, 10=East England, 11=South West England, 12=South England, 13=London, 14=South East England. Omit for national average.
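The region_id mapping given in the schema can be mirrored client-side to validate input before making a call. A sketch using only the 14 DNO areas documented above (the helper names are illustrative):

```python
# DNO region IDs exactly as documented in the region_id parameter description.
DNO_REGIONS = {
    1: "North Scotland", 2: "South Scotland", 3: "North West England",
    4: "North East England", 5: "Yorkshire", 6: "North Wales & Merseyside",
    7: "South Wales", 8: "West Midlands", 9: "East Midlands",
    10: "East England", 11: "South West England", 12: "South England",
    13: "London", 14: "South East England",
}

def region_args(region_id=None):
    """Return the arguments dict; omit region_id for the national average."""
    if region_id is None:
        return {}
    if region_id not in DNO_REGIONS:
        raise ValueError(f"region_id must be 1-14, got {region_id}")
    return {"region_id": region_id}
```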
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, establishing this as a safe, repeatable read operation. The description adds valuable behavioral context beyond annotations by specifying the temporal scope ('current half-hour settlement period'), the return structure ('forecast and actual values plus an index'), and the optional regional filtering capability. No contradictions with annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly front-loaded with the core purpose in the first sentence, followed by return details and optional parameter information. Every sentence earns its place with zero wasted words, making it highly efficient for agent comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single optional parameter, no output schema), the description provides excellent context about what the tool returns (forecast, actual, index) and when to use it. With annotations covering safety and idempotency, and schema fully documenting the parameter, the description fills the remaining gaps well. A perfect 5 would require explaining the index scale in more detail or providing output examples.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the region_id parameter fully documented in the schema including the numeric range and detailed region mapping. The description adds minimal value beyond the schema by mentioning 'Optionally filter by one of 14 regional DNO areas' but doesn't provide additional semantic context. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get the current carbon intensity'), resource ('Great Britain's electricity grid'), and scope ('current half-hour settlement period'). It distinguishes from siblings by focusing on current intensity rather than generation mix or statistical data, with explicit mention of forecast/actual values and regional filtering.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (to get current carbon intensity values). It distinguishes from carbon-generation-mix-7e3 and carbon-intensity-stats-7e3 by focusing on current data rather than generation breakdowns or historical statistics. However, it doesn't explicitly state when NOT to use it or provide detailed alternatives beyond the sibling names.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

carbon-intensity-stats-7e3 (A)
Read-only · Idempotent

Get carbon intensity statistics (max, average, min) for a date range. Dates must be in YYYY-MM-DD format. The range should not exceed 30 days for reasonable response times.

Parameters (JSON Schema)
- date_from (required): Start date in YYYY-MM-DD format (e.g. '2026-03-01').
- date_to (required): End date in YYYY-MM-DD format (e.g. '2026-03-31').
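The YYYY-MM-DD format and the 30-day range guidance are stated only as prose, so a client can usefully check them before calling. A sketch (the 30-day cap is advisory per the description, not a documented hard limit):

```python
from datetime import datetime

def stats_args(date_from, date_to, max_days=30):
    """Validate YYYY-MM-DD dates and the documented 30-day range guidance."""
    start = datetime.strptime(date_from, "%Y-%m-%d").date()
    end = datetime.strptime(date_to, "%Y-%m-%d").date()
    if end < start:
        raise ValueError("date_to must not precede date_from")
    if (end - start).days > max_days:
        raise ValueError(f"range exceeds {max_days} days; expect slow responses")
    return {"date_from": date_from, "date_to": date_to}
```

`strptime` raises ValueError on malformed dates, so one exception path covers both format and range problems.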
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable behavioral context beyond annotations by specifying the 30-day range limit for reasonable response times, which is a performance constraint not captured in structured fields.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by essential constraints. Both sentences earn their place by providing critical information without redundancy, making it efficiently structured and appropriately sized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (2 simple parameters), rich annotations (read-only, idempotent, non-destructive), and no output schema, the description is mostly complete. It covers purpose, constraints, and format, but could benefit from clarifying the output format (e.g., what statistics are returned) since there's no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters fully documented in the input schema (date_from and date_to with format examples). The description adds minimal value beyond the schema by reiterating the YYYY-MM-DD format but does not provide additional semantic context, meeting the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get'), resource ('carbon intensity statistics'), and scope ('max, average, min for a date range'). It distinguishes from siblings like 'carbon-intensity-current-7e3' (current data) and 'carbon-generation-mix-7e3' (generation mix), making the statistical focus explicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage with date format requirements and a 30-day range limit for performance, but does not explicitly state when to use this tool versus alternatives like 'carbon-intensity-current-7e3' for real-time data. It offers practical constraints but lacks sibling differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-bw-get-profile-052 (A)
Read-only · Idempotent

Get the bathing water profile for a specific site. Profiles contain descriptive information about the beach including textual descriptions, applicable year, and version history. Multiple profile versions may exist across years.

Parameters (JSON Schema)
- eubwid (required): EU bathing water ID notation (e.g. 'ukc2102-03600'). Get this from ea_bw_list_sites results.
- page (optional): Page number (0-based)
- page_size (optional): Number of profile versions to return (default 10)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable behavioral context beyond annotations by explaining that profiles contain descriptive information, have version history across years, and that the tool can return multiple versions (implied pagination behavior). No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: first states the core purpose, second details what profiles contain, third explains versioning behavior. Every sentence adds essential information with zero wasted words, making it easy to parse and front-loaded with key details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (retrieving paginated profile data), rich annotations covering safety, and 100% schema coverage, the description is largely complete. It explains the resource nature, versioning, and data source for parameters. The main gap is lack of output schema, but the description compensates somewhat by detailing what profiles contain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing complete parameter documentation. The description adds minimal semantic context beyond the schema by mentioning that eubwid comes from 'ea_bw_list_sites results' and implying pagination for multiple profile versions. This meets the baseline for high schema coverage without significant additional parameter insight.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get') and resource ('bathing water profile for a specific site'), and distinguishes it from siblings by focusing on descriptive profile information rather than compliance, samples, or site listings. It explicitly mentions what profiles contain (textual descriptions, year, version history) and that multiple versions may exist.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (to get profile information for a specific site) and implies an alternative by referencing 'ea_bw_list_sites results' for obtaining the required eubwid parameter. However, it doesn't explicitly state when not to use it or name specific sibling alternatives for different data needs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-bw-get-site-052 (A)
Read-only · Idempotent

Get detailed information about a specific bathing water site by its EU bathing water ID. Returns site name, location coordinates, district, water company, latest compliance classification, latest risk prediction, sediment type, year designated, and whether quality is impacted by heavy rain.

Parameters (JSON Schema)
- eubwid (required): EU bathing water ID notation (e.g. 'ukc2102-03600'). Get this from ea_bw_list_sites results.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, indicating safe, repeatable read operations. The description adds valuable context by specifying the exact data fields returned (e.g., site name, location coordinates, compliance classification), which goes beyond the annotations and helps the agent understand the output structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the purpose and lists all returned data fields without unnecessary words, making it easy for an agent to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, 100% schema coverage), rich annotations (read-only, idempotent, non-destructive), and no output schema, the description is mostly complete. It specifies the exact data returned, which compensates for the lack of output schema, though it could briefly mention error handling or data freshness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'eubwid' fully documented in the schema. The description adds minimal value beyond the schema by mentioning the ID notation example and referencing 'ea_bw_list_sites results', but does not provide additional syntax or format details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get detailed information'), resource ('about a specific bathing water site'), and scope ('by its EU bathing water ID'), and distinguishes it from sibling tools like 'ea-bw-list-sites-052' by focusing on individual site details rather than listing sites.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('by its EU bathing water ID') and implicitly suggests using 'ea_bw_list_sites results' for obtaining IDs, but does not explicitly state when not to use it or name alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-bw-list-compliance-052 (A)
Read-only · Idempotent

List bathing water quality compliance assessments under the revised Bathing Water Directive (rBWD). Each assessment classifies water quality as Excellent, Good, Sufficient, or Poor based on 4-year rolling E. coli and intestinal enterococci statistics. Optionally filter by sampling point ID for a specific site's history.

Parameters (JSON Schema)
- page (optional): Page number (0-based)
- page_size (optional): Number of results per page (default 20, max 500)
- sampling_point_id (optional): Sampling point ID to filter results for a specific site (e.g. '03600'). This is the numeric suffix of the eubwid. Without this, returns compliance assessments across all sites.
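The note that sampling_point_id is "the numeric suffix of the eubwid" implies a simple derivation from IDs returned by ea-bw-list-sites-052. A sketch of that derivation (the function name is illustrative):

```python
def sampling_point_from_eubwid(eubwid):
    """Extract the numeric suffix, e.g. 'ukc2102-03600' -> '03600'."""
    prefix, sep, suffix = eubwid.rpartition("-")
    if not sep or not suffix.isdigit():
        raise ValueError(f"unexpected eubwid format: {eubwid!r}")
    return suffix
```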
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows this is a safe, repeatable read operation. The description adds valuable behavioral context beyond annotations: it explains the 4-year rolling statistics basis for classifications and clarifies that without the sampling_point_id filter, it returns data across all sites. This enhances understanding of the tool's behavior without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the core purpose and classification system, the second explains the optional filtering. Every sentence adds essential information without redundancy, making it front-loaded and zero-waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only list tool with good annotations (readOnlyHint, idempotentHint) and full schema coverage, the description is largely complete. It covers purpose, usage context, and key behavioral details. The main gap is the lack of an output schema, but the description compensates by explaining what data is returned (compliance assessments with quality classifications). A 5 would require more detail on output structure or pagination behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with all parameters (page, page_size, sampling_point_id) well-documented in the schema. The description adds some semantic context by explaining that sampling_point_id filters for a specific site's history and is the 'numeric suffix of the eubwid', but this is marginal value. The baseline score of 3 is appropriate since the schema does most of the parameter documentation work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('List bathing water quality compliance assessments'), identifies the resource (assessments under rBWD), and distinguishes it from siblings by specifying the type of data (compliance assessments vs. samples, sites, or profiles in sibling tools). It goes beyond the title 'Listing compliance assessments' by detailing the classification system and filtering capability.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: to retrieve compliance assessments with optional filtering by sampling point ID. It distinguishes from sibling tools like 'ea-bw-list-samples-052' or 'ea-bw-list-sites-052' by focusing on compliance data. However, it does not explicitly state when NOT to use it or name specific alternatives, keeping it at a 4 rather than a 5.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-bw-list-samples-052 (A)
Read-only · Idempotent

List in-season bathing water quality sample results from the Environment Agency. Each sample contains E. coli and intestinal enterococci bacterial counts, sample date/time, bathing water site name and ID, and sampling point details. Samples are collected during the bathing season (May–September).

Parameters (JSON Schema)
- page (optional): Page number (0-based)
- page_size (optional): Number of results per page (default 20, max 500)
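The pagination contract (0-based page, default 20, capped at 500 per page) recurs across the EA bathing-water list tools, so a client can normalise these arguments in one place. A sketch (helper name illustrative; the cap comes from the schema above):

```python
def paging_args(page=0, page_size=20):
    """Normalise pagination: 0-based page, page_size clamped to 1..500."""
    if page < 0:
        raise ValueError("page is 0-based and must be >= 0")
    return {"page": page, "page_size": min(max(int(page_size), 1), 500)}
```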
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, indicating safe, repeatable operations. The description adds valuable context beyond this by specifying that samples are collected during a specific season (May–September), which is not covered by annotations and helps the agent understand data availability constraints. It does not mention rate limits or authentication needs, but with annotations covering safety, this is sufficient for a high score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by supporting details about sample content and temporal scope. Every sentence adds value without redundancy, and it is efficiently structured in two concise sentences, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a read-only list operation with pagination), the description provides good context on what data is returned (sample details) and when it's available (seasonal). However, there is no output schema, and the description does not specify the return format (e.g., JSON structure or pagination metadata), leaving a minor gap. With annotations covering safety and idempotency, it is mostly complete but could be enhanced with output details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear documentation for both parameters (page and page_size) in the input schema. The description does not add any parameter-specific information beyond what the schema provides, such as default behaviors or usage tips. This meets the baseline of 3, as the schema handles the parameter semantics adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('List'), resource ('in-season bathing water quality sample results'), and scope ('from the Environment Agency'), with detailed content about what each sample contains. It distinguishes from siblings like 'ea-bw-list-sites-052' (which lists sites) and 'ea-bw-list-compliance-052' (which lists compliance data), making the purpose highly specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool by specifying 'in-season' and 'during the bathing season (May–September)', which helps the agent understand the temporal scope. However, it does not explicitly mention when not to use it or name alternatives (e.g., for out-of-season data or other water quality tools), so it lacks full exclusion guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-bw-list-sites-052 (A)
Read-only · Idempotent

List designated bathing water sites monitored by the Environment Agency in England. Returns site names, EU bathing water IDs, sampling point coordinates, latest compliance classification (Excellent/Good/Sufficient/Poor), latest risk prediction, district, and water company details. Use eubwidNotation from results for other tools.

Parameters (JSON Schema)
  page (optional): Page number (0-based)
  page_size (optional): Number of results per page (default 20, max 500)
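To make the pagination contract concrete, here is a minimal sketch of a client-side helper that builds the arguments dict for this tool. The helper name is hypothetical; only the parameter names, the 0-based page, the default of 20, and the cap of 500 come from the documented schema.

```python
def bathing_sites_args(page=0, page_size=20):
    """Build an arguments dict for a hypothetical ea-bw-list-sites call.

    page is 0-based per the schema; page_size defaults to 20 and is
    capped at the documented maximum of 500.
    """
    if page < 0:
        raise ValueError("page is 0-based and cannot be negative")
    # Clamp page_size to the documented maximum of 500.
    return {"page": page, "page_size": min(page_size, 500)}
```

A caller paging through all sites would increment `page` until a response comes back empty.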
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety. The description adds valuable behavioral context: it specifies the exact data fields returned (site names, IDs, coordinates, classifications, etc.), mentions pagination parameters (implied by page/page_size), and notes the use of eubwidNotation for other tools. This goes beyond annotations by detailing output structure and integration guidance.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states purpose and detailed return fields, the second provides usage guidance. Every element adds value without redundancy, and it's front-loaded with essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity as a read-only listing operation with good annotations (readOnlyHint, idempotentHint) and no output schema, the description is complete. It covers purpose, return data fields, pagination context, and integration notes, providing sufficient context for an AI agent to use it effectively without needing output schema details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for 'page' and 'page_size'. The description adds no parameter semantics beyond the schema, such as pagination behavior or default usage, though it implies pagination through context; this meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'List' and resource 'designated bathing water sites monitored by the Environment Agency in England', making the purpose specific. It distinguishes from siblings like 'ea-bw-get-site-052' (get single site) and 'ea-bw-list-compliance-052' (list compliance data) by focusing on comprehensive site listings with specific fields.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: to retrieve bathing water site information with specific data fields. It explicitly mentions using 'eubwidNotation from results for other tools', guiding integration. However, it doesn't explicitly state when NOT to use it or name specific alternatives among siblings, though the distinction is implied.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-ecology-list-sites-01e (A)
Read-only · Idempotent

List Environment Agency ecology monitoring sites across England. Returns freshwater and marine survey sites with coordinates (lat/long), easting/northing, site labels, and site type. Filter by local_id to find a specific site. Use site_id URIs from results to query observations.

Parameters (JSON Schema)
  limit (optional): Maximum number of results to return
  offset (optional): Offset for pagination
  local_id (optional): Filter by site local ID (numeric string, e.g. '1', '10', '500')
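A short sketch of how an agent-side wrapper might assemble arguments for this tool, enforcing the documented constraint that local_id is a numeric string. The helper name and the illustrative limit/offset defaults are assumptions; only the three parameter names and the numeric-string format come from the schema.

```python
def ecology_sites_args(limit=50, offset=0, local_id=None):
    """Build arguments for a hypothetical ea-ecology-list-sites call.

    limit/offset defaults here are illustrative, not documented.
    local_id, when given, must be a numeric string such as '500'.
    """
    args = {"limit": limit, "offset": offset}
    if local_id is not None:
        if not local_id.isdigit():
            raise ValueError("local_id must be a numeric string, e.g. '500'")
        args["local_id"] = local_id
    return args
```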
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, indicating a safe, repeatable read operation. The description adds useful context beyond annotations: it specifies the geographic scope ('England'), types of sites included ('freshwater and marine'), and that results include coordinates and labels. However, it does not mention behavioral details like rate limits, authentication needs, or pagination behavior, which would enhance transparency further.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by details on returns, filtering, and usage for queries. Every sentence adds value: the first defines the tool, the second specifies output fields, the third explains filtering, and the fourth guides next steps. There is no wasted text, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a list operation with filtering and pagination), annotations cover safety (read-only, idempotent), and schema covers parameters fully, the description is mostly complete. It explains what the tool does, what it returns, and how to use results. However, without an output schema, it could benefit from more detail on response structure (e.g., format of coordinates) or error handling, slightly reducing completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the parameters (limit, offset, local_id). The description adds minimal value beyond the schema: it mentions filtering by local_id to find a specific site, which aligns with the schema's description. Since the schema does the heavy lifting, the baseline score of 3 is appropriate, as the description does not significantly enhance parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('List'), resource ('Environment Agency ecology monitoring sites across England'), and scope ('freshwater and marine survey sites'). It distinguishes from sibling tools like 'ea-ecology-list-taxa-01e' (which lists taxa) and 'ea-ecology-observations-01e' (which queries observations), making the purpose unambiguous and well-differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: to list sites with coordinates and site details, filter by local_id, and obtain site_id URIs for querying observations via other tools. However, it does not explicitly state when not to use it or name specific alternatives, such as using other 'list-sites' tools for different data types (e.g., 'ea-bw-list-sites-052' for water quality sites).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-ecology-list-taxa-01e (A)
Read-only · Idempotent

List taxa from the Environment Agency's ecology taxonomy database. Returns species and genus records with scientific names, NBN taxon version keys, taxon groups (e.g. annelid, fish, insect), ranks, protected status, and non-native species flags. Results are ordered alphabetically. Use offset for pagination through the full taxonomy.

Parameters (JSON Schema)
  limit (optional): Maximum number of results to return
  offset (optional): Offset for pagination
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, and non-destructive behavior, but the description adds valuable context: it specifies the ordering (alphabetical), pagination method (offset), and the types of data returned (e.g., scientific names, NBN keys, taxon groups). This enhances understanding beyond the annotations without contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by details on returned data and usage notes. Every sentence adds value—specifying data fields, ordering, and pagination—with no wasted words, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (2 optional parameters, no output schema), the description is mostly complete: it covers purpose, data returned, ordering, and pagination. However, it lacks details on error handling or response format, which could be useful despite the absence of an output schema, leaving minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear documentation for 'limit' and 'offset' parameters. The description mentions offset for pagination, aligning with the schema but not adding significant extra meaning. Since the schema is comprehensive, the baseline score of 3 is appropriate as the description doesn't substantially enhance parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List taxa') and resource ('Environment Agency's ecology taxonomy database'), specifying it returns species and genus records with detailed attributes. It distinguishes from sibling tools like 'ea-ecology-list-sites-01e' by focusing on taxonomy rather than sites, and from 'ea-ecology-observations-01e' by listing taxa instead of observations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool—for retrieving taxonomy data with pagination. However, it does not explicitly state when not to use it or name alternatives among siblings, such as 'ea-ecology-observations-01e' for observation data, leaving some guidance implicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-ecology-observations-01e (A)
Read-only · Idempotent

Get ecology observations from the Environment Agency. At least one filter is required: site_id or observation_type. Returns biological survey results including taxon name, date, measured property, result value, and links to the survey and site. Use site_id URIs from list_sites and type URIs from observation_types.

Parameters (JSON Schema)
  limit (optional): Maximum number of results to return
  offset (optional): Offset for pagination
  site_id (optional): Filter by site URI (e.g. 'http://environment.data.gov.uk/ecology/site/bio/1'). Get site URIs from the list_sites tool.
  observation_type (optional): Filter by observation type URI (e.g. 'http://environment.data.gov.uk/ecology/def/bio/RiverInvTaxaObservation'). Get type URIs from the observation_types tool.
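The "at least one filter is required" rule in the description is exactly the kind of constraint an agent wrapper can check before calling the tool. A minimal sketch, assuming a hypothetical helper; the example site URI is taken from the schema documentation:

```python
def observations_args(site_id=None, observation_type=None, limit=50, offset=0):
    """Build arguments for a hypothetical ea-ecology-observations call.

    Per the tool description, at least one of site_id or
    observation_type must be supplied (URIs obtained from the
    list_sites and observation_types sibling tools).
    """
    if site_id is None and observation_type is None:
        raise ValueError(
            "provide site_id or observation_type "
            "(URIs from list_sites / observation_types)"
        )
    args = {"limit": limit, "offset": offset}
    if site_id is not None:
        args["site_id"] = site_id
    if observation_type is not None:
        args["observation_type"] = observation_type
    return args
```

Checking the rule client-side fails fast instead of burning a round trip on a call the server would reject.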
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context beyond annotations by specifying the return format ('biological survey results including taxon name, date, measured property, result value, and links to the survey and site') and the filter requirement, enhancing behavioral understanding.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states purpose and requirements, the second details return values and URI sources. Every sentence adds essential information with zero waste, making it front-loaded and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, no output schema), the description is complete. It covers purpose, usage rules, return format, and dependencies on sibling tools, compensating well for the lack of output schema. Annotations provide safety context, making this sufficient for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters. The description adds minimal value beyond the schema by reiterating the filter requirement and URI sources, but does not provide additional syntax or format details. A baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('ecology observations from the Environment Agency'), specifying the data source and content. It distinguishes from siblings by focusing on biological survey results, unlike other tools for carbon, flood, or water quality data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly states when to use this tool ('At least one filter is required: site_id or observation_type') and provides alternatives by directing users to sibling tools for obtaining required URIs ('Use site_id URIs from list_sites and type URIs from observation_types').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-ecology-observation-types-01e (A)
Read-only · Idempotent

List all available ecology observation types from the Environment Agency. Returns 12 types covering river diatoms, macroinvertebrates, macrophytes, marine benthos, marine phytoplankton, and freshwater/TRaC fish surveys. Use the obs_type URIs as the observation_type filter when querying observations.

Parameters (JSON Schema)

No parameters

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context beyond this: it specifies the exact number of types returned (12) and lists the categories covered, which helps set expectations about the output format and scope.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the core purpose and followed by usage guidance. Every sentence adds value: the first defines what the tool does and what it returns, the second explains how to use the output. There is no wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 0 parameters, rich annotations (read-only, idempotent, non-destructive), and no output schema, the description is complete. It clearly explains the purpose, output details (12 types with examples), and how to use the results with sibling tools, covering all necessary context without redundancy.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately focuses on the tool's purpose and output rather than inputs, which matches the empty schema. A baseline of 4 is given because the description compensates for the absence of parameters by detailing the output.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('all available ecology observation types from the Environment Agency'), with specific examples of what's included (e.g., river diatoms, macroinvertebrates). It distinguishes from sibling tools like 'ea-ecology-observations-01e' by focusing on types rather than actual observations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly states when to use this tool: 'Use the obs_type URIs as the observation_type filter when querying observations.' This provides a direct alternative (the 'ea-ecology-observations-01e' sibling tool) and clarifies the relationship between listing types and filtering observations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-flood-areas-621 (A)
Read-only · Idempotent

List flood warning and alert areas in England. Returns geographic areas with their labels, coordinates, counties, and river/sea names. Filter by county or search by area description.

Parameters (JSON Schema)
  limit (optional): Maximum number of results (default 50, max 500)
  county (optional): Filter by county name (e.g. 'Somerset', 'Devon')
  offset (optional): Number of results to skip for pagination
  search (optional): Search term to filter flood areas by label or description
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, and non-destructive behavior, but the description adds useful context by specifying the geographic scope ('in England') and the types of data returned (labels, coordinates, etc.), which enhances understanding beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by additional details in a second sentence, with no wasted words. It efficiently conveys essential information in a compact form.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (list operation with filtering), rich annotations, and full schema coverage, the description is mostly complete. However, the lack of an output schema means it could benefit from more detail on return format or pagination behavior, though it's adequate for the context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema fully documents all parameters. The description mentions filtering by county and search by area description, which aligns with the schema but does not add significant extra meaning beyond it, meeting the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('List flood warning and alert areas in England') and resource ('geographic areas with their labels, coordinates, counties, and river/sea names'), distinguishing it from sibling tools like 'ea-flood-get-areas-930' or 'ea-flood-warnings-621' by focusing on area listing rather than retrieval or warnings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Filter by county or search by area description'), but it does not explicitly state when not to use it or name alternatives among siblings, such as 'ea-flood-get-areas-930' for detailed area retrieval.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-flood-get-areas-930 (A)
Read-only · Idempotent

List Environment Agency flood warning and alert areas in England. Flood areas define the geographic regions to which warnings or alerts apply. Includes Flood Alert Areas (possible flooding) and Flood Warning Areas (expected flooding). Filter by geographic location or text search.

Parameters (JSON Schema)
  lat (optional): Latitude for geo-filter (WGS84). Must be used with 'long' and 'dist'.
  dist (optional): Radius in km for geo-filter. Must be used with 'lat' and 'long'.
  long (optional): Longitude for geo-filter (WGS84). Must be used with 'lat' and 'dist'.
  limit (optional): Maximum number of results to return
  offset (optional): Offset for pagination
  search (optional): Text search in flood area labels
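The lat/long/dist triple is an all-or-nothing parameter interaction, which is worth validating before the call. A minimal sketch under that assumption; the helper name and the illustrative limit/offset defaults are hypothetical, while the geo-filter rule comes straight from the parameter docs:

```python
def flood_areas_args(lat=None, long=None, dist=None,
                     search=None, limit=50, offset=0):
    """Build arguments for a hypothetical ea-flood-get-areas call.

    lat, long and dist form an all-or-nothing geo-filter triple per
    the parameter docs; supplying only part of it is rejected here.
    """
    geo = (lat, long, dist)
    supplied = [v for v in geo if v is not None]
    if supplied and len(supplied) != 3:
        raise ValueError("geo-filter requires lat, long and dist together")
    args = {"limit": limit, "offset": offset}
    if len(supplied) == 3:
        args.update({"lat": lat, "long": long, "dist": dist})
    if search is not None:
        args["search"] = search
    return args
```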
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows this is a safe, repeatable read operation. The description adds useful context about what types of flood areas are included (Alert vs. Warning areas), but doesn't provide additional behavioral details like rate limits, authentication needs, or pagination behavior beyond what's in the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: first states the core purpose, second explains what flood areas are, third describes filtering options. Every sentence earns its place with no wasted words, making it front-loaded and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only list tool with good annotations and full schema coverage, the description provides adequate context. It explains what flood areas are and the filtering options, though without an output schema, it doesn't describe the return format or structure. Given the tool's relative simplicity and comprehensive annotations, it's mostly complete but could benefit from mentioning response format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, all parameters are well-documented in the schema itself. The description mentions filtering capabilities ('Filter by geographic location or text search') which aligns with the lat/long/dist and search parameters, but doesn't add significant semantic value beyond what the schema already provides. The baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('List'), resource ('Environment Agency flood warning and alert areas in England'), and scope ('Flood Alert Areas' and 'Flood Warning Areas'). It distinguishes from siblings like 'ea-flood-get-warnings-930' by focusing on geographic areas rather than warnings themselves, providing clear differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Filter by geographic location or text search'), but does not explicitly mention when not to use it or name specific alternatives among the many sibling tools. It implies usage for area-based queries rather than warning details, though not with explicit exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-flood-get-station-930 (A)
Read-only · Idempotent

Get detailed information about a specific EA flood monitoring station by its ID. Returns station metadata including location, river name, catchment, available measures (water level, flow, etc.), scale information, and latest readings.

Parameters (JSON Schema)
  station_id (required): Station reference ID (e.g. '1029TH', '1491TH')
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context about what information is returned (metadata, measures, scale, latest readings), which isn't covered by annotations. No contradictions exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: the first states the purpose and parameter, the second details the return content. It's front-loaded with the core function and appropriately sized for a simple lookup tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only, idempotent tool with one well-documented parameter and no output schema, the description is mostly complete. It covers purpose, parameter context, and return content. A minor gap is lack of explicit guidance on when to use vs. sibling tools, but annotations handle safety aspects well.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'station_id' fully documented in the schema. The description mentions 'by its ID' but doesn't add syntax or format details beyond what the schema provides. A baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get detailed information') and resource ('a specific EA flood monitoring station by its ID'), and distinguishes it from siblings like 'ea-flood-list-stations-930' (which lists stations) and 'ea-flood-latest-readings-930' (which focuses on readings). It specifies the exact scope of information returned.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying 'by its ID' and listing the returned metadata, which helps differentiate it from list-based tools. However, it doesn't explicitly state when NOT to use it or name alternatives like 'ea-flood-station-readings-930' for readings-focused queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-flood-get-warnings-930 (Grade: A)
Annotations: Read-only, Idempotent

Get current flood warnings and alerts from the Environment Agency for England. Returns active warnings with severity levels, affected areas, and situation messages. Filter by severity (1=Severe, 2=Warning, 3=Alert), county, or geographic location.

Parameters (JSON Schema)
- lat (optional): Latitude for geo-filter (WGS84). Must be used with 'long' and 'dist'.
- dist (optional): Radius in km for geo-filter. Must be used with 'lat' and 'long'.
- long (optional): Longitude for geo-filter (WGS84). Must be used with 'lat' and 'dist'.
- limit (optional): Maximum number of results to return.
- county (optional): Filter by county name (e.g. 'Somerset'). Comma-separated for multiple counties.
- min_severity (optional): Minimum severity level (1=Severe Flood Warning, 2=Flood Warning, 3=Flood Alert, 4=Warning no longer in force).
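The three geo-filter arguments are mutually dependent, which is easy to get wrong on a first call. A minimal Python sketch of a well-formed request payload, with a hypothetical client-side guard (the tool and argument names come from the parameter list above; the helper, coordinates, and payload shape are illustrative assumptions, not part of the server):

```python
def check_geo_filter(args: dict) -> dict:
    """Hypothetical guard: 'lat', 'long', and 'dist' must appear together or not at all."""
    geo = {"lat", "long", "dist"}
    present = geo & args.keys()
    if present and present != geo:
        raise ValueError(f"geo-filter needs all of {sorted(geo)}, got only {sorted(present)}")
    return args

# Active warnings within 25 km of central Bristol, with a severity floor applied.
payload = {
    "tool": "ea-flood-get-warnings-930",
    "arguments": check_geo_filter({
        "lat": 51.4545,     # WGS84 latitude
        "long": -2.5879,    # WGS84 longitude
        "dist": 25,         # radius in km
        "min_severity": 2,  # 1=Severe, 2=Warning, 3=Alert
    }),
}
```

Validating the triple before the call surfaces the constraint as a clear error rather than an opaque empty or failed response.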
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a read-only, idempotent, non-destructive operation. The description adds valuable context beyond annotations by specifying it returns 'current' and 'active' warnings, mentions the data source (Environment Agency for England), and describes the content of returned data (severity levels, affected areas, situation messages). No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the core purpose and return data, the second explains filtering options. Every word serves a purpose with no redundancy or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only query tool with comprehensive annotations and full parameter documentation, the description provides good contextual completeness. It explains what data is returned and filtering options. The main gap is the lack of output schema, but the description partially compensates by describing return content. It could benefit from mentioning pagination or result limits.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents all 6 parameters. The description adds some semantic context by mentioning filtering by 'severity, county, or geographic location' and providing severity level mapping (1=Severe, 2=Warning, 3=Alert), but doesn't add significant information beyond what's already in the parameter descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get current flood warnings and alerts'), identifies the resource ('from the Environment Agency for England'), and distinguishes it from siblings by focusing on warnings/alerts rather than areas, stations, or readings. It provides concrete details about what's returned (active warnings with severity levels, affected areas, situation messages).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (to get flood warnings/alerts with filtering capabilities). It doesn't explicitly mention when not to use it or name specific alternatives among the many sibling tools, though the focus on warnings/alerts implies it's not for areas, stations, or readings data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-flood-latest-readings-930 (Grade: A)
Annotations: Read-only, Idempotent

Get the latest readings across all EA flood monitoring stations in England. Returns the most recent water level, flow, or rainfall values from all active measurement stations. Filter by parameter type or specific station. Efficient alternative to polling individual stations.

Parameters (JSON Schema)
- limit (optional): Maximum number of readings to return.
- offset (optional): Offset for pagination.
- parameter (optional): Filter by measurement parameter (e.g. 'level', 'flow', 'rainfall').
- qualifier (optional): Filter by qualifier (e.g. 'Stage', 'Downstream Stage', 'Tidal Level').
- station_reference (optional): Filter by station reference ID.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows this is a safe, repeatable read operation. The description adds useful context about efficiency ('efficient alternative to polling individual stations') and scope ('across all EA flood monitoring stations in England'), but doesn't disclose rate limits, authentication needs, or detailed behavioral traits beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: first states the core purpose, second elaborates on return values and filtering, third provides usage context. Every sentence adds value with zero waste, and it's appropriately front-loaded with the main functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (5 parameters, no output schema), the description provides good context about what the tool does and when to use it. With annotations covering safety aspects and schema covering parameters, the description focuses appropriately on purpose and efficiency. However, without an output schema, some additional detail about return format could be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so all parameters are well-documented in the schema itself. The description mentions filtering by parameter type or specific station, which aligns with the 'parameter' and 'station_reference' parameters, but doesn't add significant semantic value beyond what the schema already provides. The baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and resource 'latest readings across all EA flood monitoring stations in England', specifying the scope (all stations, England) and return values (water level, flow, rainfall). It distinguishes from siblings like 'ea-flood-station-readings-930' by focusing on latest readings across all stations rather than historical data for specific stations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: to get the most recent values from all active measurement stations efficiently. It mentions filtering capabilities and positions it as an 'efficient alternative to polling individual stations', which implicitly suggests when to use it versus querying stations one by one. However, it doesn't explicitly name alternative tools or state when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-flood-list-stations-930 (Grade: A)
Annotations: Read-only, Idempotent

List Environment Agency flood monitoring stations across England. Filter by measurement type (level, flow, rainfall), location, river, catchment, town, or geographic radius. Returns station metadata including coordinates, river names, and available measurement types.

Parameters (JSON Schema)
- lat (optional): Latitude for geo-filter (WGS84). Must be used with 'long' and 'dist'.
- dist (optional): Radius in km for geo-filter. Must be used with 'lat' and 'long'.
- long (optional): Longitude for geo-filter (WGS84). Must be used with 'lat' and 'dist'.
- town (optional): Filter by town name.
- limit (optional): Maximum number of results to return.
- offset (optional): Offset for pagination.
- search (optional): Text search in station labels.
- status (optional): Filter by station status.
- parameter (optional): Filter by measurement parameter (e.g. 'level', 'flow', 'rainfall', 'temperature', 'wind').
- qualifier (optional): Filter by qualifier (e.g. 'Stage', 'Downstream Stage', 'Tidal Level', 'Groundwater').
- river_name (optional): Filter by river name (e.g. 'River Thames').
- station_type (optional): Filter by station type: SingleLevel, MultiTraceLevel, Coastal, Groundwater, or Meteorological.
- catchment_name (optional): Filter by catchment name (e.g. 'Cotswolds').
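Because the station list is pageable via limit/offset, an agent walking all level stations on a river needs to derive consistent page arguments. A sketch under stated assumptions (the helper and default page size are hypothetical; only the argument names 'parameter', 'river_name', 'limit', and 'offset' come from the list above):

```python
def page_args(filters: dict, page: int, page_size: int = 50) -> dict:
    """Hypothetical helper: merge filters with limit/offset for 0-based page N."""
    if page < 0:
        raise ValueError("page must be >= 0")
    return {**filters, "limit": page_size, "offset": page * page_size}

filters = {"parameter": "level", "river_name": "River Thames"}
first = page_args(filters, 0)  # offset 0: the first 50 matching stations
third = page_args(filters, 2)  # offset 100: picks up where page 1 left off
```

Deriving offset from the page index keeps successive calls non-overlapping without hand-maintained cursors.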
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, establishing this as a safe, repeatable read operation. The description adds valuable behavioral context beyond annotations by specifying the geographic scope (England only), describing the return format (station metadata including coordinates, river names, measurement types), and indicating filtering capabilities. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences efficiently convey the tool's purpose, filtering capabilities, and return format. The first sentence establishes scope and filtering options, while the second describes the metadata returned. Every word serves a purpose with zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only listing tool with excellent annotations (readOnlyHint, idempotentHint) and comprehensive parameter documentation (100% schema coverage), the description provides sufficient context. It explains what's returned (metadata including coordinates, river names, measurement types) despite no output schema, and establishes geographic scope. The main gap is lack of explicit sibling differentiation, but overall completeness is strong.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, all 13 parameters are well-documented in the input schema. The description adds marginal value by grouping parameters conceptually (e.g., 'measurement type' maps to 'parameter', 'geographic radius' maps to lat/long/dist group) and mentioning filtering categories, but doesn't provide additional syntax, format, or constraint details beyond what the schema already specifies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'List' and resource 'Environment Agency flood monitoring stations across England', specifying geographic scope. It distinguishes from siblings like 'ea-flood-get-station-930' (detail retrieval) and 'ea-flood-station-readings-930' (measurement data) by focusing on metadata listing with filtering capabilities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage by enumerating filterable attributes (measurement type, location, river, etc.), but doesn't explicitly state when to use this tool versus alternatives like 'ea-flood-get-station-930' for single station details or 'ea-flood-station-readings-930' for actual measurements. It implies usage through the filtering capabilities described.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-flood-station-readings-930 (Grade: A)
Annotations: Read-only, Idempotent

Get water level and flow readings from a specific EA monitoring station. Retrieve the latest values, today's readings, readings for a specific date, or a date range. Data is typically recorded every 15 minutes.

Parameters (JSON Schema)
- date (optional): Return readings for a specific date (YYYY-MM-DD format).
- limit (optional): Maximum number of readings to return.
- since (optional): Return readings since this datetime (ISO 8601 format, e.g. '2024-01-01T00:00:00Z').
- today (optional): If true, return only today's readings.
- latest (optional): If true, return only the latest reading for each measure at this station.
- sorted (optional): If true, sort readings by date-time (most recent first).
- end_date (optional): End date for a date range (YYYY-MM-DD). Must be used with start_date.
- parameter (optional): Filter by measurement parameter (e.g. 'level', 'flow').
- start_date (optional): Start date for a date range (YYYY-MM-DD). Must be used with end_date.
- station_id (required): Station reference ID (e.g. '1029TH', '1491TH').
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows this is a safe, repeatable read operation. The description adds valuable behavioral context beyond annotations by specifying the data recording frequency (every 15 minutes), which helps the agent understand data granularity and freshness. It doesn't describe rate limits or authentication needs, but with good annotation coverage, this is sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that front-load the core purpose and then add useful context about data frequency. Every word earns its place—no redundancy or fluff. It efficiently communicates the tool's capabilities without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (10 parameters, read-only operation) and rich schema/annotations (100% coverage, readOnlyHint, idempotentHint), the description is mostly complete. It adds context on data frequency and time range options. However, without an output schema, it doesn't describe the return format (e.g., structure of readings), which is a minor gap for a data retrieval tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so all 10 parameters are well-documented in the schema. The description adds some context by mentioning 'latest values, today's readings, readings for a specific date, or a date range,' which maps to parameters like 'latest,' 'today,' 'date,' 'start_date,' and 'end_date.' However, it doesn't provide additional syntax or format details beyond what the schema already specifies, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get water level and flow readings'), resource ('from a specific EA monitoring station'), and scope ('latest values, today's readings, readings for a specific date, or a date range'). It distinguishes this tool from sibling tools like 'ea-flood-latest-readings-930' by specifying it can retrieve various time ranges, not just latest readings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool by specifying the types of readings available (latest, today's, specific date, date range) and the data recording frequency (every 15 minutes). However, it doesn't explicitly state when not to use it or name specific alternatives among the sibling tools, such as 'ea-flood-latest-readings-930' for only latest readings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-flood-warning-detail-621 (Grade: A)
Annotations: Read-only, Idempotent

Get detailed information about a specific flood warning or flood area by its area ID. Returns the full warning message, severity, area description, county, river or sea name, and geographic coordinates.

Parameters (JSON Schema)
- flood_area_id (required): Flood area notation/ID (e.g. '111FAGWDGW', '053FWFPUWI06'). Obtain this from the ea_flood_warnings or ea_flood_areas tools.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds value by specifying the return content ('full warning message, severity, area description, county, river or sea name, and geographic coordinates'), which is useful behavioral context not covered by annotations. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys purpose, usage, and return details without redundancy. It is front-loaded with the core action and resource, making it highly concise and effective.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 required parameter), rich annotations, and 100% schema coverage, the description is nearly complete. It lacks an output schema, but compensates by listing return fields. The main gap is no explicit error handling or rate limit info, but annotations cover key behavioral traits, making it sufficient for most use cases.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'flood_area_id' fully documented in the input schema (including examples and source tools). The description does not add any additional parameter semantics beyond what the schema provides, so it meets the baseline score of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get detailed information'), resource ('about a specific flood warning or flood area'), and key differentiator ('by its area ID'). It distinguishes from sibling tools like 'ea-flood-warnings-621' (which likely lists warnings) and 'ea-flood-areas-621' (which likely lists areas) by focusing on detail retrieval for a single identified entity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly provides usage context by specifying that the flood_area_id should be 'Obtained from the ea_flood_warnings or ea_flood_areas tools' (as noted in the input schema), which helps guide when to use this tool after those siblings. However, it does not explicitly state when not to use it or name direct alternatives, keeping it from a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-flood-warnings-621 (Grade: A)
Annotations: Read-only, Idempotent

Get active flood warnings and alerts for England from the Environment Agency. Returns current warnings with severity levels, affected areas, and warning messages. Severity: 1=Severe (danger to life), 2=Warning (flooding expected), 3=Alert (flooding possible), 4=No longer in force.

Parameters (JSON Schema)
- limit (optional): Maximum number of results (default 50, max 500).
- county (optional): Filter by county name (e.g. 'Somerset', 'Devon').
- severity (optional): Filter by severity level: 1=Severe Flood Warning, 2=Flood Warning, 3=Flood Alert, 4=No longer in force.
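The severity codes are worth encoding explicitly on the client side, since level-4 entries are not active warnings at all. A small sketch using the code-to-label mapping from the description (the 'severityLevel' field name on returned items is an assumption; the sample data is fabricated for illustration only):

```python
# Code-to-label mapping as documented by the tool description.
SEVERITY = {
    1: "Severe Flood Warning",        # danger to life
    2: "Flood Warning",               # flooding expected
    3: "Flood Alert",                 # flooding possible
    4: "Warning no longer in force",  # historical, not an active warning
}

def active_only(items: list[dict]) -> list[dict]:
    """Drop level-4 entries; 'severityLevel' is an assumed field name on results."""
    return [w for w in items if w.get("severityLevel") in (1, 2, 3)]

sample = [{"severityLevel": 2}, {"severityLevel": 4}]
active = active_only(sample)  # keeps only the level-2 entry
```

Filtering out level 4 after the call is a cheap safeguard if an agent forgets to pass the severity argument.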
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context by explaining severity levels (1-4 with meanings) and specifying that it returns 'current warnings,' which clarifies the data's timeliness. No contradictions with annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by essential details about returns and severity levels in a structured manner. Every sentence adds value without redundancy, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, no output schema), the description is mostly complete. It covers purpose, returns, and severity context, but lacks details on response format (e.g., pagination, error handling) and explicit guidance on tool selection among siblings. Annotations provide safety context, compensating somewhat.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with each parameter well-documented in the input schema (limit, county, severity). The description adds minimal semantic value beyond the schema, only implicitly referencing severity levels without detailing parameter interactions. A baseline of 3 is appropriate given the high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verb ('Get'), resource ('active flood warnings and alerts for England from the Environment Agency'), and scope ('current warnings with severity levels, affected areas, and warning messages'). It distinguishes itself from sibling tools like 'ea-flood-warning-detail-621' by focusing on active warnings rather than detailed information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying 'active flood warnings and alerts for England' and listing severity levels, but it does not explicitly state when to use this tool versus alternatives like 'ea-flood-warning-detail-621' or other flood-related tools. No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-get-water-body-8a7 (Grade: A)
Annotations: Read-only, Idempotent

Get full details of an EA monitoring station by ID. Returns location, river name, catchment area, grid reference, and all available measurement time series (flow, level, groundwater) with their units and periods.

Parameters (JSON Schema)
- station_id (required): Station UUID from search results (e.g. '052d0819-2a32-47df-9b99-c243c9c8235b').
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context beyond annotations: it specifies what data is returned (location, river name, catchment area, grid reference, measurement time series with units and periods), which helps the agent understand the output structure since there's no output schema. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose and return data. It is front-loaded with the main action and resource, followed by specific return details, with zero wasted words or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (fetching detailed station data with time series), annotations cover safety aspects, but there is no output schema. The description compensates by listing the return fields (location, river name, etc.) and data types (time series with units and periods), making it fairly complete. However, it could mention pagination or error handling for full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'station_id' fully documented in the schema (UUID format and example). The description does not add any additional parameter semantics beyond what the schema provides, such as format constraints or usage notes. The baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get full details' and the resource 'EA monitoring station by ID', specifying it returns location, river name, catchment area, grid reference, and measurement time series. It distinguishes from sibling tools like 'ea-search-water-bodies-8a7' (search) and 'ea-hydrology-station-detail-334' (different data type) by focusing on comprehensive water body details.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when station ID is known (from search results), but does not explicitly state when to use this tool versus alternatives like 'ea-hydrology-station-detail-334' or 'ea-gw-station-detail-993'. It provides some context (ID from search) but lacks explicit when/when-not guidance or named alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-gw-readings-993 (Grade: A)
Annotations: Read-only, Idempotent

Get groundwater level readings from a specific EA monitoring measure. Returns time-series data with date, value (metres above ordnance datum), and quality flags. Use measure_id from station detail results. Filter by date or 'since' to limit the time range.

Parameters (JSON Schema)
date (optional): Return readings for a specific date (YYYY-MM-DD format)
limit (optional): Maximum number of readings to return
since (optional): Return readings since this datetime (ISO 8601 format, e.g. '2026-01-01T00:00:00Z')
offset (optional): Offset for pagination
measure_id (required): Measure ID from station detail results (e.g. 'c7e13884-...-gw-logged-i-subdaily-mAOD-qualified')
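As a rough illustration of the filter interplay above, here is a minimal Python sketch of assembling arguments for this tool. The helper name, the example measure ID, and the choice to reject combining 'date' with 'since' are assumptions made for illustration, not documented server behavior.

```python
# Hypothetical client-side helper for building ea-gw-readings arguments.
# Optional filter names are taken from the parameter table above.
OPTIONAL = {"date", "since", "limit", "offset"}

def build_gw_readings_args(measure_id, **filters):
    """Assemble an argument dict, rejecting unknown or conflicting filters."""
    unknown = set(filters) - OPTIONAL
    if unknown:
        raise ValueError(f"unsupported parameters: {sorted(unknown)}")
    if "date" in filters and "since" in filters:
        # The description offers 'date' or 'since' as time filters; this
        # sketch treats sending both as a caller error.
        raise ValueError("use either 'date' or 'since', not both")
    return {"measure_id": measure_id, **filters}

# Example with a made-up measure ID in the documented style.
args = build_gw_readings_args(
    "c7e13884-example-gw-logged-i-subdaily-mAOD-qualified",
    since="2026-01-01T00:00:00Z",
    limit=100,
)
```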
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety aspects. The description adds valuable behavioral context beyond annotations: it specifies the return format (time-series with specific fields), mentions quality flags, and indicates filtering capabilities. It doesn't contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: first states purpose and output format, second provides usage guidance and filtering options. Every sentence adds value with zero wasted words, and key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (5 parameters, no output schema), the annotations provide safety context and the schema has full description coverage. The description adds sufficient context about return format and usage guidance. It could improve slightly by mentioning pagination behavior given the 'limit' and 'offset' parameters, but overall it is quite complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 5 parameters. The description adds minimal parameter semantics beyond the schema, noting that measure_id comes from station detail results and that results can be filtered by date or 'since', both of which are already covered in the schema descriptions. A baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get groundwater level readings'), resource ('from a specific EA monitoring measure'), and output format ('time-series data with date, value, and quality flags'). It distinguishes itself from sibling tools like 'ea-gw-station-detail-993' by focusing on readings rather than station metadata.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Use measure_id from station detail results') and mentions filtering options. However, it doesn't explicitly state when NOT to use it or name specific alternative tools for different groundwater-related tasks, though the sibling list suggests related tools exist.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-gw-station-detail-993 (A)
Read-only, Idempotent

Get detailed information about a specific EA groundwater monitoring station. Returns station metadata including location, aquifer, borehole depth, grid reference, available measures (logged and dipped), and date opened.

Parameters (JSON Schema)
station_id (required): Station notation ID (e.g. 'c7e13884-4a02-4df3-b184-09aea28cf8e8_3_020'). Get from station list results.
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, indicating a safe, repeatable read operation. The description adds valuable context beyond annotations by detailing the return content (metadata including location, aquifer, depth, measures, etc.) and specifying that it returns 'detailed information,' which helps the agent understand the data richness. No contradictions with annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that front-loads the core purpose ('Get detailed information...') and efficiently lists the return metadata. Every element serves a purpose, with no redundant or wasted words, making it highly concise and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema), with annotations covering safety and idempotency and the description detailing the return content, it is mostly complete. However, without an output schema, the description could benefit from mentioning the structure or format of the returned metadata (e.g. a JSON object), though the listed fields provide good guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'station_id' fully documented in the schema (type, description, example). The description adds no further clarification about the parameter beyond what the schema provides, such as format constraints or sourcing details. A baseline of 3 is appropriate given the high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get detailed information'), resource ('EA groundwater monitoring station'), and scope ('specific'). It distinguishes itself from siblings like 'ea-gw-stations-993' (list stations) and 'ea-gw-readings-993' (get readings) by focusing on metadata details rather than listings or time-series data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying 'specific EA groundwater monitoring station' and referencing 'station_id' from 'station list results' (likely 'ea-gw-stations-993'). However, it does not explicitly state when to use this tool versus alternatives like 'ea-hydrology-station-detail-334' or 'ea-rainfall-station-detail-92b' for other station types, nor does it mention exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-gw-stations-993 (A)
Read-only, Idempotent

List Environment Agency groundwater monitoring stations (boreholes) in England. Returns station locations, aquifer names, borehole depths, and available measures. Filter by aquifer name, geographic location, or status.

Parameters (JSON Schema)
lat (optional): Latitude for geo-filter (WGS84). Must be used with 'long' and 'dist'.
dist (optional): Radius in km for geo-filter. Must be used with 'lat' and 'long'.
long (optional): Longitude for geo-filter (WGS84). Must be used with 'lat' and 'dist'.
limit (optional): Maximum number of results to return
offset (optional): Offset for pagination
status (optional): Filter by station status
aquifer (optional): Filter by aquifer name (e.g. 'Chalk', 'Lincolnshire Limestone', 'Greensand')
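The schema notes that the geo-filter parameters must travel together. A minimal Python sketch of enforcing that client-side might look like the following; the helper name and example values are illustrative, not part of the server.

```python
# Hypothetical argument builder for ea-gw-stations: lat, long and dist
# must be supplied together, per the schema notes above.
def build_gw_stations_args(aquifer=None, lat=None, long=None, dist=None,
                           status=None, limit=None, offset=None):
    geo = {k: v for k, v in {"lat": lat, "long": long, "dist": dist}.items()
           if v is not None}
    if geo and len(geo) != 3:
        raise ValueError("geo-filter needs all of lat, long and dist")
    other = {k: v for k, v in {"aquifer": aquifer, "status": status,
                               "limit": limit, "offset": offset}.items()
             if v is not None}
    return {**other, **geo}

# Chalk boreholes within 25 km of central London (illustrative values).
args = build_gw_stations_args(aquifer="Chalk", lat=51.5074, long=-0.1278, dist=25)
```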
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows this is a safe, repeatable read operation. The description adds useful context about what data is returned (locations, aquifer names, borehole depths, available measures) and filtering capabilities, but doesn't mention pagination behavior or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences: the first states purpose and return data, the second explains filtering options. Every word earns its place, and the information is front-loaded with the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only listing tool with excellent annotations (readOnlyHint, idempotentHint) and 100% schema coverage, the description provides adequate context about what data is returned and filtering options. The main gap is the lack of output schema, so the description doesn't detail the structure of returned results, but this is partially compensated by mentioning what fields are included.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so all parameters are well-documented in the schema. The description mentions filtering by 'aquifer name, geographic location, or status' which maps to the aquifer, lat/long/dist, and status parameters, but doesn't add significant semantic value beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('List'), resource ('Environment Agency groundwater monitoring stations'), and scope ('in England'), with details about what information is returned. It distinguishes itself from sibling tools like 'ea-gw-station-detail-993' (which gets detailed info for a specific station) and 'ea-gw-readings-993' (which gets readings data).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Filter by aquifer name, geographic location, or status'), but doesn't explicitly state when NOT to use it or mention specific alternatives among the sibling tools. The filtering guidance helps differentiate it from other listing tools in the server.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-hydrology-readings-334 (A)
Read-only, Idempotent

Fetch time series readings for a specific hydrology measure. Returns date, value, quality, and completeness for each reading. Get the measure_id from the station detail tool.

Parameters (JSON Schema)
limit (optional): Maximum number of readings to return (default 50, max 500).
measure_id (required): Measure notation string identifying the time series (e.g. '052d0819-...-flow-m-86400-m3s-qualified'). Obtain from station detail.
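A caller could apply the documented bounds on 'limit' before invoking the tool; here is a small Python sketch of that pre-flight step. The helper name is made up, and clamping (rather than rejecting) out-of-range values is a design choice of the sketch, not documented behavior.

```python
# Hypothetical pre-flight check applying the documented bounds on 'limit'
# (default 50, max 500) before calling ea-hydrology-readings.
def hydrology_readings_args(measure_id, limit=None):
    if limit is None:
        limit = 50                        # documented default
    limit = max(1, min(int(limit), 500))  # documented maximum is 500
    return {"measure_id": measure_id, "limit": limit}
```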
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows this is a safe, repeatable read operation. The description adds value by specifying the return fields (date, value, quality, completeness) and the need for measure_id from another tool, but doesn't cover rate limits, error handling, or pagination behavior beyond the schema's limit parameter.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by output details and a usage note. Every sentence adds value without redundancy, making it efficient and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (2 parameters, no nested objects), high schema coverage (100%), and annotations covering safety, the description is mostly complete. It specifies output fields and a prerequisite, but lacks details on error cases or response format, which could be helpful since there's no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear documentation for both parameters (measure_id and limit). The description adds minimal semantic context by noting that measure_id should be obtained from 'station detail tool,' but doesn't provide additional syntax or format details beyond what the schema already specifies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Fetch time series readings'), resource ('for a specific hydrology measure'), and output details ('Returns date, value, quality, and completeness for each reading'). It distinguishes itself from siblings like 'ea-hydrology-station-detail-334' by focusing on readings rather than station metadata.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context by specifying 'Get the measure_id from the station detail tool,' indicating a prerequisite. However, it doesn't explicitly mention when not to use this tool or name alternatives among siblings, such as 'ea-flood-station-readings-930' for flood data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-hydrology-station-detail-334 (A)
Read-only, Idempotent

Get detailed information about a specific EA hydrology monitoring station including its location, river, catchment, and all available measurement time series (measures). Use the measure notation to fetch readings.

Parameters (JSON Schema)
station_id (required): Station UUID identifier (e.g. '052d0819-2a32-47df-9b99-c243c9c8235b').
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, indicating safe, repeatable read operations. The description adds valuable context beyond this: it specifies the type of information returned (location, river, catchment, measurement time series) and hints at behavioral aspects like the need to use 'measure notation' for fetching readings, which isn't covered by annotations. No contradictions with annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose, resources, and a usage hint. It's front-loaded with the main action and avoids unnecessary words. Every part of the sentence adds value, making it appropriately concise for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema), the description is largely complete. It covers the purpose, resources, and a behavioral hint. However, it lacks explicit guidance on when to use versus siblings and doesn't detail output structure, which is acceptable since there's no output schema. With good annotations and schema coverage, the description provides sufficient context for agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the single parameter 'station_id' fully documented in the schema as a UUID identifier. The description doesn't add any parameter-specific details beyond what the schema provides, such as format examples or constraints. Since schema coverage is high, the baseline score of 3 is appropriate, as the description doesn't compensate but doesn't need to given the schema's completeness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get detailed information about a specific EA hydrology monitoring station' with specific resources listed (location, river, catchment, measurement time series). It distinguishes from sibling tools like 'ea-hydrology-stations-334' (list) and 'ea-hydrology-readings-334' (readings only), but doesn't explicitly name these alternatives. The verb 'Get' is specific, and the resource scope is well-defined.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying 'detailed information about a specific EA hydrology monitoring station' and mentions using 'measure notation to fetch readings,' suggesting this tool provides metadata rather than raw data. However, it doesn't explicitly state when to use this tool versus alternatives like 'ea-hydrology-readings-334' for readings or 'ea-hydrology-stations-334' for listing stations. The guidance is present but not explicit about exclusions or named alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-hydrology-stations-334 (A)
Read-only, Idempotent

Search Environment Agency hydrological monitoring stations in England. Returns station details including name, coordinates, river, catchment, and available measures (flow, level, groundwater). Use the station ID to get detailed info or the measure notation to fetch readings.

Parameters (JSON Schema)
limit (optional): Maximum number of stations to return (default 10, max 500).
offset (optional): Pagination offset (default 0).
search (optional): Free-text search for station name (e.g. 'Thames', 'Severn').
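A generic offset-based paging loop over this tool might look like the following Python sketch. `call_tool` stands in for whatever MCP client invocation is in use, and the page shape (a plain list of station records) is an assumption for illustration.

```python
# Hypothetical pager over ea-hydrology-stations using limit/offset.
# `call_tool(name, args)` is a stand-in for the real MCP client call and is
# assumed to return a list of station records.
def iter_stations(call_tool, search, page_size=100):
    offset = 0
    while True:
        page = call_tool("ea-hydrology-stations-334",
                         {"search": search, "limit": page_size, "offset": offset})
        if not page:
            return
        yield from page
        if len(page) < page_size:   # a short page means we reached the end
            return
        offset += page_size
```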
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds useful behavioral context by specifying the return content (station details including name, coordinates, etc.) and mentioning pagination implicitly through parameters, though it doesn't explicitly state rate limits or authentication needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by return details and usage guidance, all in two efficient sentences with no wasted words, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search with pagination), rich annotations (readOnly, idempotent, non-destructive), and full schema coverage, the description is mostly complete. It covers purpose, returns, and usage guidelines, but lacks output schema details (e.g., response format), which is a minor gap as there's no output schema provided.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters (limit, offset, search). The description does not add any parameter-specific semantics beyond what the schema provides, maintaining the baseline score of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Search') and resource ('Environment Agency hydrological monitoring stations in England'), and distinguishes it from sibling tools by mentioning related tools for detailed info (ea-hydrology-station-detail-334) and readings (ea-hydrology-readings-334).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly provides usage guidance by stating when to use this tool (for searching stations) and when to use alternatives (use station ID for detailed info or measure notation for readings), effectively differentiating it from sibling tools like ea-hydrology-station-detail-334 and ea-hydrology-readings-334.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-list-catchments-8a7 (A)
Read-only, Idempotent

Discover rivers and monitoring stations near a geographic location. Provide latitude and longitude to find all EA monitoring stations within a radius, grouped by river name. Returns station counts, measurement types, and locations for each river in the area.

Parameters (JSON Schema)
lat (required): Latitude (WGS84) of the centre point (e.g. 51.5074 for London).
dist (optional): Search radius in kilometres (default 10, max 100).
long (required): Longitude (WGS84) of the centre point (e.g. -0.1278 for London).
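A caller might validate the centre point and radius before calling this tool; here is a minimal Python sketch. The helper name is hypothetical, the WGS84 range checks are standard coordinate bounds, and the radius bounds follow the schema notes above.

```python
# Hypothetical pre-flight validation for ea-list-catchments: WGS84 ranges
# for the centre point, and the documented radius bounds (default 10, max 100).
def catchment_args(lat, long, dist=10):
    if not -90 <= lat <= 90 or not -180 <= long <= 180:
        raise ValueError("lat/long must be valid WGS84 coordinates")
    if not 0 < dist <= 100:
        raise ValueError("dist must be between 0 and 100 km")
    return {"lat": lat, "long": long, "dist": dist}

# Rivers within the default 10 km of central London (illustrative values).
args = catchment_args(51.5074, -0.1278)
```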
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows this is a safe, repeatable read operation. The description adds valuable behavioral context beyond annotations by specifying the grouping logic ('grouped by river name'), output structure ('Returns station counts, measurement types, and locations for each river'), and geographic scope ('within a radius'), which helps the agent understand what to expect.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly front-loaded with the core purpose in the first sentence, followed by input requirements and output details in two additional sentences. Every sentence adds essential information with zero waste, making it highly efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (geographic search with grouping), rich annotations covering safety and idempotency, and 100% schema coverage, the description is nearly complete. It explains the purpose, input context, and output structure well. The main gap is the lack of an output schema, but the description compensates by detailing what's returned ('station counts, measurement types, and locations for each river').

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing complete documentation for all three parameters (lat, long, dist). The description adds minimal semantic value beyond the schema, only implying the radius parameter through 'within a radius' without details on defaults or constraints, which are already in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Discover rivers and monitoring stations near a geographic location'), identifies the resource ('EA monitoring stations'), and distinguishes itself from siblings by focusing on geographic proximity grouping by river name, unlike other tools that handle flood warnings, water quality, or specific station details.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Provide latitude and longitude to find all EA monitoring stations within a radius'), but doesn't explicitly mention when not to use it or name specific alternatives among the many sibling tools, though the geographic focus implies differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-rainfall-readings-92b (B)
Read-only, Idempotent

Get time-series rainfall readings for a specific Environment Agency station (GET .../id/stations/{id}/readings). Supports sorted results and a result limit.

Parameters (JSON Schema)
limit (optional): Maximum number of readings to return
sorted (optional): If true, request sorted readings (API _sorted)
station_id (required): Station ID from the API
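The description quotes the upstream GET path and the '_sorted' flag; the sketch below shows how the tool's arguments might map onto that request. The base URL is a placeholder (the real API root is elided in the description), and the mapping of 'limit' onto '_limit' is an assumption, not documented here.

```python
# Hypothetical mapping from ea-rainfall-readings arguments to the upstream
# GET request quoted in the description. BASE is a placeholder, not the
# real API root (which is elided above).
BASE = "https://EA-API-ROOT"

def rainfall_readings_url(station_id, sorted_results=False, limit=None):
    parts = []
    if sorted_results:
        parts.append("_sorted")            # bare flag named in the description
    if limit is not None:
        parts.append(f"_limit={limit}")    # assumed mapping for 'limit'
    url = f"{BASE}/id/stations/{station_id}/readings"
    return url + ("?" + "&".join(parts) if parts else "")
```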
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows this is a safe, repeatable read operation. The description adds some behavioral context by mentioning support for sorted results and a result limit, but doesn't cover aspects like rate limits, authentication needs, or what happens if station_id is invalid. With annotations covering the safety profile, a 3 is appropriate as the description adds moderate value beyond them.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose and includes additional features. It avoids unnecessary words, but could be slightly improved by structuring it more clearly, such as separating the main action from the optional features, though it remains concise overall.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, no output schema), the description covers the basic purpose and some features. However, it lacks details on output format, error handling, or examples, which could help the agent use it more effectively. With annotations providing safety info, it is adequate but has clear gaps in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, meaning all parameters are documented in the schema. The description mentions sorted results and a result limit, which aligns with the 'sorted' and 'limit' parameters, but doesn't add significant meaning beyond what the schema provides, such as explaining the sorting order or typical limit ranges. Baseline 3 is correct when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get time-series rainfall readings') and resource ('for a specific Environment Agency station'), which is specific and actionable. However, it doesn't explicitly differentiate from sibling tools like 'ea-rainfall-station-detail-92b' or 'ea-rainfall-stations-92b', which likely provide different types of rainfall data, so it doesn't reach the highest score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions sorted results and a limit, but doesn't specify scenarios where this is preferred over other rainfall or flood-related tools in the sibling list, such as 'ea-flood-station-readings-930' or 'ea-rainfall-stations-92b', leaving the agent without context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-rainfall-station-detail-92b (A)
Read-only, Idempotent

Get detailed metadata for a single Environment Agency rainfall monitoring station by ID, including label, coordinates, river and catchment, status, and available measures.

Parameters (JSON Schema)
- station_id (required): Station ID from the API (e.g. notation/stationReference from list results)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, and non-destructive behavior, but the description adds valuable context by specifying the type of metadata returned (e.g., label, coordinates, river and catchment, status, available measures). This enhances understanding beyond the annotations, though it could mention response format or limitations like rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose, resource, and key metadata fields. It is front-loaded with the main action and includes no redundant information, making it highly concise and effective.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, 100% schema coverage, annotations provided), the description is mostly complete. It specifies the metadata returned, which compensates for the lack of an output schema. However, it could briefly mention response structure or error handling for full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'station_id' fully documented in the schema. The description does not add extra meaning or details beyond the schema's description (e.g., format examples or constraints), so it meets the baseline of 3 without compensating for gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get detailed metadata'), resource ('Environment Agency rainfall monitoring station'), and scope ('by ID'). It distinguishes itself from sibling tools like 'ea-rainfall-stations-92b' (which likely lists stations) by focusing on detailed metadata for a single station, providing clear differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying 'by ID' and listing included metadata fields (e.g., label, coordinates, status), which helps identify when to use this tool. However, it does not explicitly state when not to use it or name alternatives (e.g., 'ea-rainfall-stations-92b' for listing stations), missing full explicit guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-rainfall-stations-92b (A)
Read-only, Idempotent

List Environment Agency flood monitoring stations that measure rainfall (parameter=rainfall). Returns station metadata including labels, coordinates, and references. Uses the open Real Time flood-monitoring API (rainfall stations subset).

Parameters (JSON Schema)
- limit (optional): Maximum number of stations to return
- offset (optional): Offset for pagination
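The limit/offset pair above supports simple pagination. The sketch below is a hypothetical client-side loop, assuming a generic `call_tool(name, arguments)` helper that returns a list of station records; the actual call mechanism depends on your MCP client, and `call_tool` is not part of this server.

```python
# Hypothetical sketch: paginate through ea-rainfall-stations-92b results.
# `call_tool(name, arguments)` is an assumed generic MCP client helper.

def fetch_all_stations(call_tool, page_size=100):
    """Collect every rainfall station by advancing `offset` until a
    short (or empty) page signals the end of the result set."""
    stations, offset = [], 0
    while True:
        page = call_tool("ea-rainfall-stations-92b",
                         {"limit": page_size, "offset": offset})
        stations.extend(page)
        if len(page) < page_size:  # last page reached
            return stations
        offset += page_size
```

The short-page check avoids an extra round trip compared with looping until an empty response.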
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, indicating a safe, repeatable read operation. The description adds valuable context beyond this by specifying the data source (open Real Time flood-monitoring API) and the subset (rainfall stations), which helps the agent understand the tool's scope and limitations. It does not contradict annotations, and while it doesn't detail rate limits or auth needs, the annotations cover the core behavioral traits adequately.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded and efficiently structured in two sentences: the first states the action and resource, and the second adds context about returns and API source. Every sentence earns its place by providing essential information without redundancy, making it concise and easy to parse for an AI agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (2 optional parameters, no output schema), the description is largely complete. It covers purpose, data source, and return types adequately. However, it could be slightly improved by mentioning the lack of filtering options (e.g., by location) or linking to sibling tools for more specific queries, which would make it a 5. The annotations provide good safety context, so overall it's sufficient but not exhaustive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters (limit and offset) well-documented in the schema. The description does not add any parameter-specific information beyond what the schema provides, such as default values or usage examples. Since the schema handles the heavy lifting, a baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't need to given the high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('List') and resources ('Environment Agency flood monitoring stations that measure rainfall'), and distinguishes it from siblings by specifying the parameter=rainfall subset. It explicitly mentions the API source (Real Time flood-monitoring API rainfall stations subset), which helps differentiate it from other flood-related tools in the sibling list like 'ea-flood-list-stations-930' or 'ea-hydrology-stations-334'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool by specifying it's for rainfall stations (parameter=rainfall) and listing what it returns (station metadata including labels, coordinates, and references). However, it does not explicitly state when not to use it or name alternatives among the many sibling tools, such as for other station types like flood or hydrology stations, which would have made it a 5.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-search-water-bodies-8a7 (A)
Read-only, Idempotent

Search Environment Agency monitoring stations by river name or measurement type. Find all monitoring points along a specific river (e.g. River Thames), or discover all stations measuring flow, level, groundwater, or rainfall. Returns station locations, catchment areas, and available measures.

Parameters (JSON Schema)
- limit (optional): Maximum number of stations to return (default 20, max 500).
- offset (optional): Pagination offset (default 0).
- river_name (optional): Exact river name to filter by (e.g. 'River Thames', 'River Severn', 'Wye'). Case-sensitive.
- observed_property (optional): Filter by measurement type. Values: 'waterFlow', 'waterLevel', 'groundwaterLevel', 'rainfall'.
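As a concrete illustration of these constraints, the sketch below builds an arguments dict for a "flow stations on the River Thames" query. The helper name and structure are hypothetical; the exact-match, case-sensitive `river_name` behavior and the `observed_property` value set are taken from the parameter table above.

```python
# Hypothetical sketch of arguments for ea-search-water-bodies-8a7.
# Allowed observed_property values come from the documented schema.

ALLOWED_PROPERTIES = {"waterFlow", "waterLevel", "groundwaterLevel", "rainfall"}

def thames_flow_query(limit=20):
    """Arguments for flow-measuring stations on the River Thames.
    Note: river_name is an exact, case-sensitive match, so
    'river thames' would return nothing."""
    args = {
        "river_name": "River Thames",
        "observed_property": "waterFlow",
        "limit": limit,
    }
    assert args["observed_property"] in ALLOWED_PROPERTIES
    return args
```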
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows this is a safe, non-destructive read operation. The description adds context by specifying what data is returned (station locations, catchment areas, available measures), which is useful beyond annotations. However, it does not disclose behavioral traits like rate limits, authentication needs, or pagination details (implied by parameters but not described), leaving some gaps in full transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with two sentences that efficiently convey the tool's purpose and return values. Every sentence earns its place: the first explains the search functionality, and the second specifies the output. There is no wasted verbiage, making it highly concise and well-structured for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, no output schema) and rich annotations (readOnly, idempotent, non-destructive), the description is mostly complete. It covers the tool's purpose, usage context, and return data. However, without an output schema, it could benefit from more detail on the response format (e.g., structure of returned data), but the annotations provide safety assurances, making it adequate for most use cases.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with each parameter well-documented in the input schema (e.g., limit, offset, river_name, observed_property). The description adds minimal semantic value beyond the schema, as it only mentions filtering by river name or measurement type without detailing parameter interactions or usage examples. Given the high schema coverage, a baseline score of 3 is appropriate, as the description does not significantly enhance parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('search', 'find', 'discover') and resources ('Environment Agency monitoring stations'), and distinguishes it from siblings by focusing on river name or measurement type searches rather than flood warnings, water quality, or other environmental data. It explicitly mentions what the tool returns (station locations, catchment areas, and available measures), making its scope unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use this tool: to search by river name (e.g., 'River Thames') or measurement type (e.g., flow, level). It implies usage for discovery or filtering purposes. However, it does not explicitly state when not to use it or name specific alternatives among the many sibling tools, such as 'ea-get-water-body-8a7' or 'ea-list-catchments-8a7', which might have overlapping functionality.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-wq-determinands-54a (A)
Read-only, Idempotent

List water quality determinands (measurement types) from the EA codelist. Determinands define what is being measured — e.g. pH, dissolved oxygen, ammonia, heavy metals, pesticides, biological indicators. Returns notation codes and labels that can be used to filter observations.

Parameters (JSON Schema)
- skip (optional): Number of records to skip for pagination
- limit (optional): Maximum number of results to return (max 250)
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds value by specifying the source ('EA codelist') and the return format ('notation codes and labels'), but does not disclose rate limits, authentication needs, or pagination behavior beyond what the schema implies.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by examples and usage context in two efficient sentences. Every sentence adds necessary information without redundancy, making it appropriately sized and well-structured for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (list operation with pagination), rich annotations, and no output schema, the description is mostly complete. It covers purpose, examples, and usage context, but could improve by mentioning pagination behavior or typical result structure to fully compensate for the missing output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear documentation for 'skip' and 'limit' parameters. The description does not add any parameter-specific semantics beyond what the schema provides, such as typical usage patterns or constraints, so it meets the baseline for high schema coverage without extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('water quality determinands from the EA codelist'), provides concrete examples (pH, dissolved oxygen, etc.), and distinguishes its purpose from siblings like 'ea-wq-observations-54a' by specifying it returns codes/labels for filtering observations rather than the observations themselves.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by stating the returned codes/labels 'can be used to filter observations,' which suggests this tool is preparatory for filtering operations. However, it does not explicitly name when-not-to-use alternatives or compare with other determinand-related tools, leaving some ambiguity about sibling differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-wq-observations-54a (A)
Read-only, Idempotent

List water quality observations (measurements) for a specific sampling point. Each observation includes the determinand measured (e.g. pH, dissolved oxygen, ammonia), the result value and unit, sample date, and compliance status. Filter by determinand, date range, material type, or compliance status.

Parameters (JSON Schema)
- date (optional): Filter by exact date (YYYY-MM-DD). Cannot be used with date_from/date_to.
- skip (optional): Number of records to skip for pagination
- limit (optional): Maximum number of results to return (max 250)
- date_to (optional): Filter by date range end (YYYY-MM-DD). Must be used with date_from.
- date_from (optional): Filter by date range start (YYYY-MM-DD). Must be used with date_to.
- determinand (optional): Filter by determinand code (e.g. 0076 for pH, 0180 for dissolved oxygen). Use the determinands tool to look up codes.
- point_notation (required): Sampling point notation ID (e.g. 'AN-011262')
- compliance_only (optional): If true, return only compliance samples
- sampling_purpose (optional): Filter by sampling purpose code(s), comma-separated for multiple
- sample_material_type (optional): Filter by sample material type code(s), comma-separated for multiple
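The date filters above carry two interaction rules: `date` is mutually exclusive with the range pair, and `date_from`/`date_to` must be supplied together. A minimal client-side validation sketch (not part of the server, which may enforce these rules differently) could look like:

```python
# Hypothetical pre-flight check for ea-wq-observations-54a date filters,
# mirroring the constraints stated in the parameter descriptions.

def validate_date_filters(args):
    """Raise ValueError if the date filters violate the documented rules."""
    has_exact = "date" in args
    has_from = "date_from" in args
    has_to = "date_to" in args
    if has_exact and (has_from or has_to):
        raise ValueError("'date' cannot be combined with date_from/date_to")
    if has_from != has_to:
        raise ValueError("date_from and date_to must be used together")
    return args
```

Running such a check before the call turns a likely API-side rejection into an immediate, descriptive local error.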
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows this is a safe, repeatable read operation. The description adds useful context about what data is included in observations (determinand, result, unit, date, compliance status) and filtering capabilities, but it does not disclose behavioral traits like pagination details (implied by skip/limit) or rate limits beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose and following with filter options in a single, efficient sentence. Every part adds value without redundancy, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (10 parameters, no output schema) and rich annotations, the description is mostly complete by covering purpose, data included, and filtering. However, it lacks details on output format or pagination behavior, which could be helpful since there's no output schema, slightly reducing completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 10 parameters. The description adds marginal value by summarizing filter options (determinand, date range, material type, compliance status) but does not provide additional syntax or format details beyond what the schema already specifies, meeting the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('List water quality observations') and resources ('for a specific sampling point'), distinguishing it from siblings like 'ea-wq-samples-54a' or 'ea-wq-sampling-points-54a' by focusing on measurements rather than samples or points themselves. It includes key details like determinands, result values, and compliance status.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage by listing filter options (determinand, date range, material type, compliance status) and specifying the required 'point_notation' parameter, but it does not explicitly state when to use this tool versus alternatives like 'ea-wq-samples-54a' or provide exclusions, keeping it from a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-wq-samples-54a (A)
Read-only, Idempotent

List water quality samples (collection events) for a specific sampling point. Each sample represents a physical collection at the monitoring site on a specific date. Filter by date, material type, sampling purpose, or compliance status.

Parameters (JSON Schema)
- date (optional): Filter by exact date (YYYY-MM-DD). Cannot be used with date_from/date_to.
- skip (optional): Number of records to skip for pagination
- limit (optional): Maximum number of results to return (max 250)
- date_to (optional): Filter by date range end (YYYY-MM-DD). Must be used with date_from.
- date_from (optional): Filter by date range start (YYYY-MM-DD). Must be used with date_to.
- point_notation (required): Sampling point notation ID (e.g. 'AN-011262')
- compliance_only (optional): If true, return only compliance samples
- sampling_purpose (optional): Filter by sampling purpose code(s), comma-separated for multiple
- sample_material_type (optional): Filter by sample material type code(s), comma-separated for multiple
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, indicating a safe, repeatable read operation. The description adds useful context about what samples represent (physical collections at monitoring sites) and filtering capabilities, but does not disclose behavioral traits like rate limits, auth needs, or pagination details beyond the schema's skip/limit parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the first sentence stating the core purpose and subsequent sentences adding essential details about sample representation and filtering options. Every sentence earns its place without redundancy, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (9 parameters, 1 required) and rich annotations (readOnlyHint, idempotentHint, destructiveHint), the description is mostly complete. It covers purpose, sample context, and filtering, but lacks output details (no output schema) and could better differentiate from siblings. For a read-only list tool, this is sufficient but not exhaustive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 9 parameters. The description adds marginal value by listing filterable fields (date, material type, sampling purpose, compliance status) and implying the 'point_notation' requirement, but does not provide additional semantics beyond what the schema already specifies, such as format details or interdependencies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('List water quality samples') and resources ('for a specific sampling point'), and distinguishes it from siblings like 'ea-wq-observations-54a' by focusing on collection events rather than observations. It specifies that each sample represents a physical collection at a monitoring site on a specific date, adding clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool by mentioning filtering options (date, material type, sampling purpose, compliance status) and the required 'point_notation' parameter. However, it does not explicitly state when not to use it or name alternatives among siblings, such as 'ea-wq-observations-54a' for observations instead of samples.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-wq-sampling-point-54a (A)
Read-only, Idempotent

Get detailed information about a specific EA water quality sampling point by its notation. Returns the point's name, geographic location, status, type, region, area, and links to its observations.

Parameters (JSON Schema)
- point_notation (required): Sampling point notation ID (e.g. 'AN-011262', 'MD-009632')
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, and non-destructive behavior. The description adds value by specifying the return content (e.g., name, location, status, links to observations) and that it's for a 'specific' point, which helps the agent understand the scope and output. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that front-loads the purpose and efficiently lists the returned information. Every part earns its place with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 required parameter, no output schema), the description is complete enough. It covers purpose, usage context, and output details. With annotations handling safety, it could slightly improve by mentioning error cases or prerequisites, but it's largely sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'point_notation' fully documented in the schema. The description does not add extra semantic details about the parameter beyond what the schema provides (e.g., no additional examples or constraints), so it meets the baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and resource 'EA water quality sampling point', specifies it's 'detailed information' about a 'specific' point, and lists the returned fields (name, location, status, etc.). It distinguishes from siblings like 'ea-wq-sampling-points-54a' (likely a list tool) by focusing on a single point via its notation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when you need details for a specific sampling point by its notation, but does not explicitly state when not to use it or name alternatives. Siblings include 'ea-wq-sampling-points-54a' (likely for listing points) and 'ea-wq-observations-54a' (for observations), providing some contextual differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ea-wq-sampling-points-54a (A)
Read-only, Idempotent

List and filter Environment Agency water quality sampling points across England. Search by region, area, status, type, name, or geographic radius. Returns sampling point locations, types, and status for rivers, lakes, groundwater, and coastal monitoring sites.

Parameters (JSON Schema)
area (optional): Filter by area code within a region
skip (optional): Number of records to skip for pagination
limit (optional): Maximum number of results to return (max 250)
radius (optional): Search radius in kilometres. Must be used with 'latitude' and 'longitude'.
region (optional): Filter by region code (e.g. 'AN' for Anglian, 'TH' for Thames, 'NW' for North West)
latitude (optional): Latitude for radius-based search (WGS84). Must be used with 'longitude' and 'radius'.
sub_area (optional): Filter by sub-area code
longitude (optional): Longitude for radius-based search (WGS84). Must be used with 'latitude' and 'radius'.
pref_label (optional): Filter by name (contains match, e.g. 'Thames')
sampling_point_type (optional): Filter by type code(s), comma-separated for multiple (e.g. 'FJ,SA')
sampling_point_status (optional): Filter by status code: 'O' (Open), 'C' (Closed)
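The parameter table encodes two call-time constraints that the description only hints at: the radius, latitude, and longitude parameters must travel together, and limit is capped at 250. A minimal Python sketch of an argument builder an agent-side client might use (the helper name and validation policy are assumptions, not part of the server):

```python
def build_sampling_point_query(region=None, area=None, sub_area=None,
                               pref_label=None, sampling_point_type=None,
                               sampling_point_status=None,
                               latitude=None, longitude=None, radius=None,
                               skip=None, limit=None):
    """Assemble an arguments dict for ea-wq-sampling-points-54a,
    enforcing the constraints stated in the parameter table."""
    geo = (latitude, longitude, radius)
    # The radius search triple must be supplied together or not at all.
    if any(v is not None for v in geo) and not all(v is not None for v in geo):
        raise ValueError("radius search needs latitude, longitude, and radius")
    if sampling_point_status not in (None, "O", "C"):
        raise ValueError("status must be 'O' (Open) or 'C' (Closed)")
    if limit is not None and not 1 <= limit <= 250:
        raise ValueError("limit must be between 1 and 250")
    args = {
        "region": region, "area": area, "sub_area": sub_area,
        "pref_label": pref_label,
        "sampling_point_type": sampling_point_type,
        "sampling_point_status": sampling_point_status,
        "latitude": latitude, "longitude": longitude, "radius": radius,
        "skip": skip, "limit": limit,
    }
    # All parameters are optional; omit anything unset.
    return {k: v for k, v in args.items() if v is not None}
```

For example, build_sampling_point_query(region="TH", sampling_point_status="O", limit=50) yields only the three set keys, which matches the tool's all-optional schema.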
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows this is a safe, non-destructive read operation. The description adds useful context by specifying the geographic scope ('across England') and the types of monitoring sites, but it does not disclose behavioral traits like rate limits, authentication needs, or pagination details beyond what the schema provides for 'skip' and 'limit'.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the purpose and search options, and the second specifies the return data. Every sentence adds value without redundancy, making it front-loaded and easy to parse for an AI agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (11 parameters, no output schema), the description is reasonably complete. It covers the purpose, search criteria, and return types. However, without an output schema, it could benefit from more detail on the structure of returned data (e.g., fields in the response), though the annotations help by indicating it's a read-only operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, meaning all parameters are well-documented in the schema itself. The description adds minimal value by listing search criteria (e.g., 'region, area, status, type, name, or geographic radius'), which aligns with schema parameters but does not provide additional semantics or usage examples beyond what the schema descriptions already cover.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('List and filter') and resources ('Environment Agency water quality sampling points across England'), and it distinguishes itself from siblings by focusing on sampling points rather than other environmental data like flood warnings or carbon intensity. The mention of specific water body types (rivers, lakes, groundwater, coastal) adds further specificity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool by listing search criteria (region, area, status, type, name, geographic radius) and the types of data returned. However, it does not explicitly mention when not to use it or name specific alternatives among the sibling tools, such as 'ea-wq-samples-54a' or 'ea-wq-observations-54a', which might be relevant for related data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

met-office-3hourly-forecast-026: A
Read-only, Idempotent

Get a three-hourly weather forecast for any location worldwide from the Met Office Global Spot model. Returns 168 hours (7 days) of three-hourly data including temperature, feels-like temperature, wind, humidity, precipitation, visibility, and weather type. Longer horizon than hourly but lower time resolution.

Parameters (JSON Schema)
latitude (required): Latitude of the location (-90 to 90, e.g. 51.5074 for London)
longitude (required): Longitude of the location (-180 to 180, e.g. -0.1278 for London)
include_location_name (optional): Include the nearest named location in the response (default false)
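The documented coordinate ranges can be checked client-side before issuing the call. A small sketch, with the helper name being an assumption:

```python
def forecast_args(latitude, longitude, include_location_name=False):
    """Validate coordinates against the documented ranges before
    calling met-office-3hourly-forecast-026 (hypothetical helper)."""
    if not -90.0 <= latitude <= 90.0:
        raise ValueError("latitude must be within -90 to 90")
    if not -180.0 <= longitude <= 180.0:
        raise ValueError("longitude must be within -180 to 180")
    return {"latitude": latitude, "longitude": longitude,
            "include_location_name": include_location_name}
```

Catching range errors locally saves a round trip for an agent working from raw user input.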
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows this is a safe, repeatable read operation. The description adds useful context about the forecast duration (168 hours/7 days), time resolution (three-hourly), and data fields returned, which goes beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the core purpose and key details, the second provides comparative context. Every element serves a purpose, though the second sentence could be slightly more concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (weather forecasting), rich annotations, and comprehensive input schema, the description provides good contextual completeness. It explains what data is returned and how it differs from alternatives. The main gap is the lack of output schema, but the description compensates by listing the returned data fields.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents all three parameters. The description doesn't add any parameter-specific information beyond what's in the schema, but doesn't need to since the schema is comprehensive. This meets the baseline expectation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get a three-hourly weather forecast'), resource ('for any location worldwide from the Met Office Global Spot model'), and distinguishes it from siblings by mentioning 'Longer horizon than hourly but lower time resolution' compared to the hourly forecast sibling tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Longer horizon than hourly but lower time resolution'), which implicitly suggests using it over the hourly forecast sibling when longer-range forecasts are needed. However, it doesn't explicitly name alternatives or state when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

met-office-get-daily-forecast-026: A
Read-only, Idempotent

Get a daily weather forecast for any location worldwide from the Met Office Global Spot model. Returns 7 days (1 past + 6 future) of daily data with 41 parameters including max/min temperature, wind, UV index, precipitation probability, sunrise/sunset times, and weather type. Best for multi-day weather overviews.

Parameters (JSON Schema)
latitude (required): Latitude of the location (-90 to 90, e.g. 51.5074 for London)
longitude (required): Longitude of the location (-180 to 180, e.g. -0.1278 for London)
include_location_name (optional): Include the nearest named location in the response (default false)
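The 'Best for multi-day weather overviews' guidance, together with the documented horizons of the sibling tools (48 hours hourly, 168 hours three-hourly, 1 past plus 6 future days daily), suggests a simple selection rule. The policy below is an illustrative assumption, not documented behaviour:

```python
def pick_forecast_tool(hours_ahead, overview=False):
    """Choose among the Met Office forecast tools by horizon and
    granularity. Coverage figures come from the tool descriptions;
    the selection policy itself is an assumption."""
    if overview and hours_ahead <= 6 * 24:
        return "met-office-get-daily-forecast-026"   # 1 past + 6 future days
    if hours_ahead <= 48:
        return "met-office-get-hourly-forecast-026"  # finest time resolution
    if hours_ahead <= 168:
        return "met-office-3hourly-forecast-026"     # longest horizon
    raise ValueError("no Met Office forecast tool covers beyond 168 hours")
```

An agent routing "what's the weather this week?" would hit the overview branch, while "will it rain at 3pm tomorrow?" resolves to the hourly tool.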
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, establishing safety. The description adds valuable context beyond annotations: it specifies the return format ('7 days with 41 parameters'), data scope ('1 past + 6 future'), and examples of included parameters. It doesn't mention rate limits or authentication needs, but with good annotation coverage, this is sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first covers purpose and scope, the second adds usage guidance. Every phrase adds value without redundancy, and it's appropriately front-loaded with core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, no output schema), the description provides good context: purpose, data format, usage guidance, and behavioral details. With annotations covering safety and idempotency, and schema covering parameters, the main gap is lack of output schema, but the description partially compensates by describing return data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with all parameters well-documented in the schema (latitude, longitude, include_location_name). The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline of 3 when schema coverage is high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get a daily weather forecast') and resource ('for any location worldwide from the Met Office Global Spot model'). It distinguishes itself from sibling tools like 'met-office-3hourly-forecast-026' and 'met-office-get-hourly-forecast-026' by specifying 'daily' data and the '7 days (1 past + 6 future)' timeframe.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Best for multi-day weather overviews'), which implicitly suggests alternatives like hourly forecasts for more granular data. However, it doesn't explicitly name when NOT to use it or mention specific sibling tools as alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

met-office-get-hourly-forecast-026: A
Read-only, Idempotent

Get an hourly weather forecast for any location worldwide from the Met Office Global Spot model. Returns 48 hours of hourly data including temperature, wind speed, humidity, precipitation probability, visibility, and weather type. Provide latitude and longitude coordinates.

Parameters (JSON Schema)
latitude (required): Latitude of the location (-90 to 90, e.g. 51.5074 for London)
longitude (required): Longitude of the location (-180 to 180, e.g. -0.1278 for London)
include_location_name (optional): Include the nearest named location in the response (default false)
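Because this tool returns a fixed 48-hour window, an agent can check whether a target time is even answerable by it before making the call. The helper below is an illustrative assumption:

```python
from datetime import datetime, timedelta, timezone

def within_hourly_horizon(target, now=None):
    """Return True if `target` falls inside the 48-hour window covered
    by met-office-get-hourly-forecast-026 (hypothetical helper)."""
    now = now or datetime.now(timezone.utc)
    delta = target - now
    # Past times and anything beyond 48 hours need a different tool.
    return timedelta(0) <= delta <= timedelta(hours=48)
```

When this check fails for a future time, the three-hourly tool (168-hour horizon) is the natural fallback.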
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context beyond this by specifying the forecast model ('Met Office Global Spot model'), timeframe ('48 hours'), and data fields included (temperature, wind speed, etc.), which helps the agent understand the tool's behavior and output scope. No contradictions with annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the tool's purpose, source, timeframe, and data fields; the second specifies the required inputs. Every sentence adds essential information with zero redundancy, making it easy for an agent to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (weather forecasting with 3 parameters), rich annotations (read-only, idempotent, non-destructive), and 100% schema coverage, the description is largely complete. It clearly explains what the tool does, the data returned, and the required inputs. The main gap is the lack of an output schema, but the description compensates by listing the returned data fields, making it sufficient for agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with all parameters well-documented in the schema itself (latitude, longitude ranges and examples; include_location_name default and purpose). The description adds minimal parameter semantics beyond the schema, only mentioning 'Provide latitude and longitude coordinates' without explaining the optional third parameter. This meets the baseline of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get an hourly weather forecast'), resource ('any location worldwide from the Met Office Global Spot model'), and scope ('48 hours of hourly data including temperature, wind speed, humidity, precipitation probability, visibility, and weather type'). It effectively distinguishes this tool from its siblings 'met-office-3hourly-forecast-026' and 'met-office-get-daily-forecast-026' by specifying the hourly granularity and 48-hour timeframe.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by stating 'Provide latitude and longitude coordinates' and specifying the data source ('Met Office Global Spot model'), but it doesn't explicitly state when to use this tool versus its 3-hourly or daily forecast siblings. There's no guidance on prerequisites, alternatives, or exclusions, leaving the agent to infer usage context from the tool name and data granularity alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

mo-find-nearest-station-e8b: A
Read-only, Idempotent

Find the nearest Met Office weather observation station to a UK location. Returns the station's geohash identifier, area name, region, and country. Use the returned geohash with the observations tool to get actual measured weather data from that station.

Parameters (JSON Schema)
latitude (required): Latitude of the location (UK coverage, e.g. 51.5074 for London)
longitude (required): Longitude of the location (e.g. -0.1278 for London)
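The schema notes 'UK coverage' for latitude but gives no numeric bounds, so a client might pre-screen coordinates against a rough UK bounding box before calling. Both the helper and the box values are assumptions, not part of the tool:

```python
# Rough UK bounding box; these values are an assumption, not documented.
UK_LAT_RANGE = (49.8, 60.9)
UK_LON_RANGE = (-8.7, 1.8)

def station_lookup_args(latitude, longitude):
    """Build arguments for mo-find-nearest-station-e8b and flag whether
    the point falls inside a rough UK bounding box (hypothetical helper)."""
    in_uk = (UK_LAT_RANGE[0] <= latitude <= UK_LAT_RANGE[1]
             and UK_LON_RANGE[0] <= longitude <= UK_LON_RANGE[1])
    return {"latitude": latitude, "longitude": longitude}, in_uk
```

An agent could warn the user, or fall back to a forecast tool with worldwide coverage, when the flag is False.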
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context about the return format (geohash, area name, region, country) and the purpose of the geohash for downstream use, which isn't covered by annotations. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: the first states the tool's purpose and return values, the second provides crucial usage guidance. Every word serves a clear purpose, and the most important information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple lookup tool with good annotations (read-only, idempotent) and full schema coverage, the description provides exactly what's needed: clear purpose, return format explanation, and guidance on how to use the output with other tools. No output schema exists, but the description adequately describes the return values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters (latitude, longitude) well-documented in the schema including UK-specific ranges and examples. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('find'), resource ('nearest Met Office weather observation station'), and scope ('to a UK location'). It distinguishes from sibling tools like 'mo-get-observations-e8b' and 'mo-observations-by-location-e8b' by specifying this tool returns station metadata rather than actual weather data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states what this tool returns ('Returns the station's geohash identifier, area name, region, and country') and when to use the alternative ('Use the returned geohash with the observations tool to get actual measured weather data'). This provides clear guidance on tool selection and chaining versus sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

mo-get-observations-e8b: A
Read-only, Idempotent

Get the past 48 hours of actual measured weather observations from a Met Office UK ground station. Returns hourly readings of temperature, humidity, pressure, wind speed, wind gusts, wind direction, visibility, weather code, and pressure tendency. Provide the station geohash (from the find-nearest-station tool).

Parameters (JSON Schema)
geohash (required): 6-character geohash of the observation station (from the find-nearest-station tool). Example: 'gcj8ds' for a Devon station.
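The schema pins the identifier to a 6-character geohash with an example ('gcj8ds'), so a client-side sanity check is cheap. The helper name is an assumption; the alphabet is the standard base-32 geohash set, which omits a, i, l, and o:

```python
# Standard base-32 geohash alphabet (no 'a', 'i', 'l', 'o').
GEOHASH_ALPHABET = set("0123456789bcdefghjkmnpqrstuvwxyz")

def observation_args(geohash):
    """Check that a station identifier looks like the 6-character
    geohash expected by mo-get-observations-e8b (hypothetical helper)."""
    if len(geohash) != 6 or any(c not in GEOHASH_ALPHABET
                                for c in geohash.lower()):
        raise ValueError("expected a 6-character geohash, e.g. 'gcj8ds'")
    return {"geohash": geohash}
```

This catches the common mistake of passing a place name (e.g. 'london') where the station geohash is required.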
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context beyond annotations: the 48-hour time window constraint, specific weather metrics returned, and the prerequisite of using another tool for geohash input. However, it doesn't mention rate limits, authentication needs, or error conditions, which would be helpful additional behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states purpose, scope, and return data; the second specifies the required parameter and its source. Every element serves a clear purpose with zero redundancy. It's front-loaded with the core functionality and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only tool with good annotations and a simple single parameter, the description provides adequate context. It covers what data is returned, temporal constraints, and parameter requirements. The main gap is the lack of output schema, so the description doesn't detail the structure of returned observations. However, given the tool's straightforward nature and comprehensive annotations, the description is mostly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the geohash parameter fully documented in the schema, including the 6-character length and an example value. The description adds minimal semantic value beyond the schema, only reiterating that the geohash comes from the 'find-nearest-station' tool. This meets the baseline of 3 for high schema coverage but offers no additional parameter insight such as validation rules.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get'), resource ('weather observations'), temporal scope ('past 48 hours'), source ('Met Office UK ground station'), and data format ('hourly readings'). It distinguishes from sibling tools like 'mo-find-nearest-station-e8b' by focusing on observation retrieval rather than station location, and from forecast tools by specifying actual measured data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: after obtaining a station geohash from the 'find-nearest-station' tool. It doesn't explicitly state when not to use it or name alternatives, but the temporal scope and data type differentiate it from forecast siblings. The guidance is practical but lacks explicit exclusions or a comparison with similar tools like 'mo-observations-by-location-e8b'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

mo-observations-by-location-e8b: A
Read-only, Idempotent

Get the past 48 hours of actual measured weather observations for a UK location. Automatically finds the nearest Met Office ground station and returns hourly readings of temperature, humidity, pressure, wind speed, wind gusts, wind direction, visibility, weather code, and pressure tendency. Provide latitude and longitude.

Parameters (JSON Schema)
latitude (required): Latitude of the location (UK coverage, e.g. 51.5074 for London)
longitude (required): Longitude of the location (e.g. -0.1278 for London)
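As the description implies, this tool collapses the find-nearest-station plus get-observations chain into a single call. A sketch of that two-step equivalence under an assumed generic call_tool interface (not a real client API):

```python
def observations_for_location(call_tool, latitude, longitude):
    """Sketch of the two-step chain this tool replaces: resolve the
    nearest station's geohash, then fetch its observations.
    `call_tool` is an assumed MCP-client callable, not a real API."""
    station = call_tool("mo-find-nearest-station-e8b",
                        {"latitude": latitude, "longitude": longitude})
    return call_tool("mo-get-observations-e8b",
                     {"geohash": station["geohash"]})
```

Preferring this combined tool halves the round trips when the agent doesn't need the station metadata itself.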
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, and non-destructive behavior, but the description adds valuable context beyond this: it specifies the data source ('Met Office ground station'), the automatic selection process ('Automatically finds the nearest'), the time window ('past 48 hours'), and the specific weather variables returned. This enriches the agent's understanding without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by details on automation and data points, and ends with the parameter instruction. Every sentence adds necessary information without redundancy, making it efficient and well-structured for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema), the description is mostly complete: it covers purpose, data source, automation, returned variables, and parameters. However, it lacks details on output format (e.g., structure of returned observations) and any potential limitations (e.g., station availability), which would enhance completeness for a tool without an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents the latitude and longitude parameters with ranges and examples. The description adds minimal value beyond the schema by mentioning 'Provide latitude and longitude,' which restates what's obvious. It does not explain parameter interactions or additional semantics, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get'), resource ('actual measured weather observations'), temporal scope ('past 48 hours'), and geographic scope ('for a UK location'), distinguishing it from sibling tools like forecasts or station-finding tools. It precisely defines what the tool does without being vague or tautological.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (to get recent weather observations for a UK location), but it does not explicitly mention when not to use it or name specific alternatives among siblings. It implies usage for observational data rather than forecasts, which is helpful but not fully explicit about alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
