Hermes — Air Quality Intelligence

Server Details

UK air quality MCP: live readings, historical trends, LAQM stats, WHO checks, knowledge base.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama MCP Gateway → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade A)

Average 4.2/5 across 15 of 15 tools scored. Lowest: 3.6/5.

Server Coherence (Grade A)
Disambiguation: 5/5

Each tool targets a distinct need: assess_location_aq is a comprehensive one-call assessment; chart_aq_trend generates charts; compare_locations does side-by-side comparison; get_aqi_summary focuses on index and WHO compliance; get_current_aq gives raw readings; get_historical_aq provides historical data; kb_* tools serve knowledge base queries; list_monitors lists monitoring sites; regulatory_stats gives LAQM statistics; time_patterns shows temporal patterns; trend_analysis does long-term trend analysis. No two tools have overlapping purposes.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern using snake_case: assess_location_aq, chart_aq_trend, compare_locations, get_aqi_summary, get_current_aq, get_historical_aq, list_monitors, regulatory_stats, time_patterns, trend_analysis, and the kb_* tools also follow that pattern with the kb_ prefix. Naming is highly predictable and uniform.

Tool Count: 5/5

With 15 tools, the server's scope is well-balanced. Each tool serves a clear purpose without redundancy, covering current data, historical data, comparisons, monitoring info, regulatory stats, and knowledge base. The count feels appropriate for a domain-specific intelligence server.

Completeness: 5/5

The tool surface covers the full range of air quality needs: current assessment, historical analysis, trend detection, comparisons, monitoring network info, regulatory compliance, and knowledge base on guidelines, health effects, and practical advice. There are no obvious gaps for a read-only service; the specialized tools ensure agents can answer diverse queries without dead ends.

Available Tools

15 tools
assess_location_aq (Grade A)

Comprehensive air quality assessment for a location in one call.

Combines nearby monitor discovery and current readings with DAQI into a single response. Use this as the first tool call for any air quality question about a location.

For long-term trend analysis, use the dedicated trend_analysis tool.

Returns a structured 'summary' dict with purpose-appropriate sections. Present the summary description to users first.

Args:
- location: Postcode, place name, or "lat,lon".
- purpose: What the user needs — "general" (default), "health" (safety/worry), "exercise" (outdoor activity), or "planning" (homebuying/school assessment/long-term).

Parameters (JSON Schema):
- purpose: optional, default "general"
- location: required
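
To make the calling convention concrete, here is a minimal sketch of an MCP `tools/call` request for this tool; the postcode, request id, and purpose value are illustrative, not taken from the listing. Later tool entries show only the `arguments` object, since the JSON-RPC envelope is the same.

```python
# Hypothetical tools/call payload for assess_location_aq.
# The location and purpose values below are examples, not documented defaults.
assess_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "assess_location_aq",
        "arguments": {
            "location": "SE1 7PB",  # postcode, place name, or "lat,lon"
            "purpose": "health",    # "general" (default), "health", "exercise", or "planning"
        },
    },
}
```
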
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses that it returns a structured 'summary' dict with purpose-appropriate sections and that it combines nearby monitor discovery with current readings. It does not mention error cases or authentication, but it adequately describes the tool's behavior and output structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a bold summary line, bullet-like flow, and parameter list. It front-loads the essential purpose and usage note. While the Args section repeats schema info, it adds value by explaining values. It is slightly verbose but still efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity and the 2 parameters, the description is quite complete: it explains what the tool does, when to use it, its output format, and parameter details. No output schema exists, but it describes the return structure. For a first-call tool with many siblings, this provides sufficient context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, but the description provides full semantics: location is described as 'Postcode, place name, or "lat,lon"' and purpose is explained with its allowed values ('general', 'health', 'exercise', 'planning') and default. This adds meaning far beyond the schema's type and default fields.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it performs a comprehensive air quality assessment (combines monitor discovery and current readings with DAQI). It distinguishes from siblings by explicitly naming the trend_analysis tool for long-term trends, and the context of sibling tools shows it is a first-call aggregator.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Use this as the first tool call for any air quality question about a location.' It also provides an explicit when-not condition: 'For long-term trend analysis, use the dedicated trend_analysis tool.' This gives clear guidance on when to use and when to avoid.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

chart_aq_trend (Grade A)

Generate a time series chart of air quality data.

Returns a PNG chart image with a brief text summary. Use this when users ask about trends, patterns, or want to visualise air quality over time.

Args:
- start_date: Start date (ISO format, e.g. "2025-01-01").
- end_date: End date (ISO format).
- location: Postcode, place name, or "lat,lon". Provide this or site_code.
- site_code: Direct site code. Provide this or location.
- pollutants: Optional filter, e.g. ["NO2", "PM2.5"]. Defaults to NO2, PM2.5, PM10, O3 if not specified.
- frequency: "hourly", "daily", or "monthly" (default "daily").
- show_who_guidelines: Show WHO guideline reference lines (default True).
- show_daqi_bands: Show DAQI band background shading (default True).

Parameters (JSON Schema):
- end_date: required
- location: optional
- frequency: optional, default "daily"
- site_code: optional
- pollutants: optional
- start_date: required
- show_daqi_bands: optional
- show_who_guidelines: optional
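
As a rough sketch (values illustrative), the `arguments` object for this tool could look like the following; note that location and site_code are alternatives, per the description.

```python
# Illustrative arguments for chart_aq_trend; dates, location, and filters are examples only.
chart_args = {
    "start_date": "2025-01-01",      # ISO format
    "end_date": "2025-06-30",
    "location": "SE1 7PB",           # provide this OR site_code, not both
    "pollutants": ["NO2", "PM2.5"],  # omit to default to NO2, PM2.5, PM10, O3
    "frequency": "daily",            # "hourly", "daily" (default), or "monthly"
    "show_who_guidelines": True,     # default True
    "show_daqi_bands": True,         # default True
}
```
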
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations, so description carries burden. Describes return type (PNG+summary), parameter effects, and default behaviors. Lacks info on error handling or rate limits, but sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Compactly structured: brief summary, usage context, clear parameter list. No wasted words, every sentence provides necessary information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers all 8 parameters and return type. Could include more details about chart aesthetics or text summary content, but overall adequate for the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% coverage, but description thoroughly explains each parameter: date formats, location vs site_code mutual exclusivity, default pollutants, frequency enums, and boolean defaults. Adds significant value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it generates a time series chart of air quality data, returning a PNG image with text summary. Distinguishes from sibling tools like trend_analysis and time_patterns which likely do other analyses.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says when to use (trends, patterns, visualization). Could improve by noting when not to use or alternatives, but provides clear guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_locations (Grade A)

Compare current air quality across multiple locations side-by-side.

Returns a ranked comparison by pollutant with DAQI bands and distance to nearest monitor. Useful for comparing development sites, school locations, or residential options.

Args: locations: List of 2–6 locations (postcodes, place names, or "lat,lon"). pollutants: Optional filter, e.g. ["NO2", "PM2.5"]. Default: all available.

Parameters (JSON Schema):
- locations: required
- pollutants: optional
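
A hedged example of the `arguments` object, assuming the 2–6 location limit described above; the specific locations are illustrative.

```python
# Illustrative arguments for compare_locations (2-6 locations accepted).
compare_args = {
    "locations": ["SE1 7PB", "51.501,-0.105", "Peckham"],  # postcodes, place names, or "lat,lon"
    "pollutants": ["NO2", "PM2.5"],                        # optional; default is all available
}
```
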
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses behavior: ranked comparison, DAQI bands, distance to nearest monitor, input constraints (2-6 locations). Could mention data freshness or rate limits but covers the core behavior well.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Concise with front-loaded purpose and clear structure. The Args section is efficient. Slightly verbose in the example use cases, but overall well-organized and not wasteful.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers key aspects: input constraints, output content, use cases. No output schema, but description hints at return structure. Could mention error handling or limit on number of locations, but sufficient for a comparison tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage, but the description fully explains both parameters: locations format (postcodes, place names, lat,lon) and count constraint, pollutants optional filter with example. Adds significant meaning beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'compare' and resource 'locations', and adds 'side-by-side', clearly distinguishing from siblings like get_current_aq or assess_location_aq that focus on single locations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides concrete use cases (development sites, school locations, residential options) and implies it is for multi-location comparison. Does not explicitly list alternatives or when-not-to-use, but context with sibling tools makes guidance clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_aqi_summary (Grade A)

Get an AQI assessment with health advice and WHO compliance check.

Returns a 'summary' with a plain-English health assessment, advice for general and at-risk populations, and WHO guideline context. Present the summary to users first. Also returns raw 'aqi' and 'who_compliance' data.

Args:
- location: Postcode, place name, or "lat,lon".
- index: AQI system — "UK_DAQI" (default), "WHO", or "US_EPA".
- period: "current", "today", "this_week", or "this_month".

Parameters (JSON Schema):
- index: optional, default "UK_DAQI"
- period: optional, default "current"
- location: required
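
An illustrative `arguments` object for this tool, using the enumerated index and period values from the description.

```python
# Illustrative arguments for get_aqi_summary.
aqi_args = {
    "location": "SE1 7PB",
    "index": "WHO",         # "UK_DAQI" (default), "WHO", or "US_EPA"
    "period": "this_week",  # "current" (default), "today", "this_week", or "this_month"
}
```
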
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the burden of behavioral disclosure. It describes what the tool returns, implying a side-effect-free read, but it does not explicitly confirm it is read-only, describe error conditions, or mention authorization needs. The behavior is mostly transparent but could be more explicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the purpose in the first sentence, then details the return structure and parameters. It is moderately concise; the docstring-style parameter list is acceptable but could be slightly tightened without losing information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description explains the return value (summary with health advice, raw aqi, who_compliance) and covers all three parameters. It does not address error handling or limitations, but for a straightforward retrieval tool with no output schema, this is adequate overall.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% schema description coverage, the description adds significant meaning: it explains that location accepts postcode, place name, or lat,lon; index lists the AQI systems with defaults; period gives time range options. This goes well beyond the schema's property titles.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns an AQI assessment with health advice and WHO compliance check, which is a specific verb+resource. It distinguishes from siblings like get_current_aq (raw data) and assess_location_aq by emphasizing the summary with health advice.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is for obtaining a human-readable summary, but it does not explicitly state when to use it over alternatives or when not to use it. It provides presentation guidance ('Present the summary to users first') but lacks explicit usage context or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_current_aq (Grade A)

Get the most recent air quality readings near a location, with health context.

Returns a 'summary' with a plain-English health assessment and advice for general and at-risk populations. Present the summary to users first. Also returns individual 'readings' from nearby monitors and 'metadata' about data freshness and sources.

Args:
- location: Postcode, place name, or "lat,lon".
- radius_km: Search radius in kilometres (default 2.0).
- pollutants: Optional filter, e.g. ["NO2", "PM2.5"].
- sources: Optional filter, e.g. ["AURN", "BREATHE_LONDON"].

Parameters (JSON Schema):
- sources: optional
- location: required
- radius_km: optional
- pollutants: optional
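
An illustrative `arguments` object; the radius is spelled out explicitly here because of the default discrepancy flagged in the Parameters note below.

```python
# Illustrative arguments for get_current_aq.
current_args = {
    "location": "SE1 7PB",
    "radius_km": 2.0,                       # description says default 2.0; the schema reportedly uses 3.5
    "pollutants": ["NO2", "PM2.5"],         # optional filter
    "sources": ["AURN", "BREATHE_LONDON"],  # optional filter
}
```
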
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, but description discloses return structure (summary, readings, metadata) and advises on using summary first. It mentions data freshness in metadata, though does not elaborate on rate limits or authentication.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Concise and well-structured: a clear purpose sentence followed by output explanation and parameter list. No redundant content; every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers output structure and all parameters. Lacks mention of error cases or units, but given no output schema and sibling tools for other queries, it is reasonably complete for its purpose.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so description compensates by explaining each parameter's purpose. However, the description states a default of 2.0 for radius_km while the schema specifies 3.5, causing inconsistency that may confuse agents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool gets recent air quality readings with health context, specifying verb and resource. It distinguishes from siblings like get_historical_aq by explicitly mentioning 'most recent' and 'health context'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides guidance on presenting output to users but lacks explicit instructions on when to use this tool versus siblings (e.g., historical vs current, summary vs full readings). No when-not-to-use or alternatives mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_historical_aq (Grade A)

Get historical air quality data for a site or location, with health context.

Returns a 'narrative' with plain-English interpretation of trends, WHO guideline exceedances, and guideline comparisons. Present the narrative to users first. Also returns raw 'data' and 'summary' statistics.

Args:
- start_date: Start date (ISO format, e.g. "2025-01-01").
- end_date: End date (ISO format).
- location: Postcode, place name, or "lat,lon". Provide this or site_code.
- site_code: Direct site code. Provide this or location.
- pollutants: Optional filter, e.g. ["NO2", "PM2.5"].
- frequency: "hourly", "daily", or "monthly" (default "daily").

Parameters (JSON Schema):
- end_date: required
- location: optional
- frequency: optional, default "daily"
- site_code: optional
- pollutants: optional
- start_date: required
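
An illustrative `arguments` object using a site code instead of a location; the code itself is a placeholder, not a real monitoring site identifier.

```python
# Illustrative arguments for get_historical_aq; "SITE1" is a placeholder site code.
historical_args = {
    "start_date": "2024-01-01",
    "end_date": "2024-12-31",
    "site_code": "SITE1",    # provide this OR location, not both
    "pollutants": ["NO2"],   # optional filter
    "frequency": "monthly",  # "hourly", "daily" (default), or "monthly"
}
```
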
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It describes the output structure (narrative, data, summary) but omits behavioral traits such as whether it is read-only, error conditions, rate limits, or required permissions. For a historical data tool, it does not clarify data freshness, coverage limits, or side effects (none expected).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear purpose sentence, output summary, then parameter list. The parameter list is formatted as a docstring, which is slightly longer but clearly organized. It avoids redundancy and is front-loaded with the most important information (output type and narrative instruction).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description covers inputs thoroughly and explains the three components of the return (narrative, data, summary). It does not detail the structure of 'data' or 'summary', but for a typical air quality tool, this is sufficient. It also provides health context guidance ('Present the narrative to users first').

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage, leaving the description to compensate. It provides detailed explanations for each parameter: format examples (ISO dates), usage conditions ('Provide this or site_code'), optional filters, and default values. This adds significant meaning beyond the raw schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states its purpose: 'Get historical air quality data for a site or location, with health context.' It specifies the verb (get), resource (historical air quality data), and adds health context. The sibling tools (e.g., get_current_aq, chart_aq_trend) are distinct, and this description effectively differentiates itself by focusing on historical data and the narrative output.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides guidance on how to use parameters (e.g., 'Provide this or site_code') and instructs to present narrative first, but it does not explicitly state when to use this tool versus siblings like chart_aq_trend or trend_analysis. There is no comparison or exclusion criteria, so the agent must infer usage from tool names.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

kb_get_guidelines (Grade A)

Get current air quality guideline and target values.

Args: framework: "WHO", "UK", "EU", or "all" (default). pollutant: Optional filter for a specific pollutant.

Returns the full guidelines document (markdown).

Parameters (JSON Schema):
- framework: optional, default "all"
- pollutant: optional

Output Schema (JSON Schema):
- result: required
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It implies a read operation ('Get') and states the return format (markdown). While not exhaustive, it sufficiently discloses the tool's basic behavior for a simple retrieval.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: two sentences for purpose and a bullet-like list for parameters. It is front-loaded and every sentence is informative without repetition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has an output schema, so details on return values are unnecessary. The description covers parameters and the markdown return format. It could mention potential errors or that it is read-only, but overall is sufficient for a simple knowledge base tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% schema description coverage, the description adds meaning by specifying valid values for 'framework' (WHO, UK, EU, all) and clarifying that 'pollutant' is an optional filter. This goes beyond the schema's string type hints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get current air quality guideline and target values.' It uses a specific verb and resource, and distinguishes itself from siblings like kb_get_health_effects by focusing on guidelines.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool vs. siblings. It lists arguments but does not specify contexts or exclusions, leaving the agent to infer usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

kb_get_health_effects (Grade A)

Get health evidence summaries for a pollutant and population group.

Args: pollutant: "NO2", "PM2.5", "PM10", or "O3". population_group: "general", "children", "elderly", "respiratory", "cardiovascular", or "pregnant" (default "general").

Returns health effects document (markdown).

Parameters (JSON Schema):
- pollutant: required
- population_group: optional, default "general"

Output Schema (JSON Schema):
- result: required
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description states that the tool returns a markdown document, which is useful. However, with no annotations, it lacks disclosure of other behavioral traits such as authentication needs, rate limits, or whether the operation is read-only. More context would improve transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise, with a clear main sentence followed by an Args section and a Returns line. Every sentence adds value and there is no redundant information. The structure is front-loaded and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple retrieval tool with two parameters and an output schema, the description covers parameters and return format adequately. However, it does not mention error handling, prerequisites (e.g., knowledge base availability), or any conditions that might affect the result. Slight gap in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage, but the description fully compensates by listing explicit allowed values for both parameters (e.g., pollutant: NO2, PM2.5, etc.; population_group: general, children, etc.) and stating the default for population_group. This adds significant meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description starts with a clear verb ('Get') and resource ('health evidence summaries') and specifies the scope ('for a pollutant and population group'). It is distinct from sibling tools like kb_get_guidelines and kb_get_local_context, which cover different knowledge areas.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide any guidance on when to use this tool versus alternatives among the sibling kb_get_* tools. It simply states what it does, leaving the agent to infer usage context from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

kb_get_local_context (Grade A)

Get area-specific contextual information.

Args: area: "southwark", "london", or a specific neighbourhood.

Returns local context document (markdown).

Parameters (JSON Schema):
- area: optional, default "southwark"

Output Schema (JSON Schema):
- result: required
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must convey behavioral traits itself. It states the tool 'returns local context document (markdown)', implying a read-only operation with no side effects. This is adequate for a simple retrieval tool, though it doesn't detail internal behavior (e.g., data source).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: two short sentences plus an argument listing. Every part is essential, with no filler or repetition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is simple with one parameter and an output schema, so expectations are moderate. However, the description does not explicitly connect this tool to the air quality domain (despite sibling tools being AQ-related), which could leave the agent uncertain about its scope. The output format is noted but could be more specific.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage, so the description must add meaning. It lists allowed values ('southwark', 'london', or a specific neighbourhood) and notes the default, which goes beyond the schema's bare property definition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves area-specific contextual information, using the verb 'Get' and specifying the resource. While it distinguishes from sibling tools (e.g., 'kb_get_guidelines') by focusing on local context, it does not explicitly contrast them, so a 4 is appropriate.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like other 'kb_' tools. It does not specify prerequisites or exclusions, leaving the agent without context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

kb_get_monitoring_explainer (Grade A)

Get plain-language explanations of monitoring methods and limitations.

Args: topic: One of "how_monitors_work", "regulatory_vs_lowcost", "what_readings_mean", "why_numbers_differ", "representativeness", "data_quality", "what_monitors_measure".

Returns monitoring explainer document (markdown).

Parameters (JSON Schema):
- topic: required

Output Schema (JSON Schema):
- result: required
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It clearly states the tool returns a markdown document and is non-destructive (implied by 'Get'). This is adequate for a simple read operation, but could mention that it is read-only.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise: two sentences for purpose and a bulleted list of valid topics. Every sentence adds value; there is no redundancy or fluff. It is appropriately front-loaded with the main purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (one parameter, returns markdown), the description is largely complete. It specifies the return format and lists acceptable inputs. However, it does not mention any error conditions or edge cases, but these are likely minimal for a knowledge retrieval tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema provides only a required 'topic' string with no description or enum, yielding 0% schema coverage. The description compensates fully by listing all valid topic values in a clear and readable format, adding significant meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves 'plain-language explanations of monitoring methods and limitations' and lists valid topics. The verb 'Get' and resource 'monitoring explainer document' are specific, and it is distinguishable from sibling tools that focus on current data, trends, or other knowledge base topics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide explicit guidance on when to use this tool versus the many sibling knowledge base tools (e.g., kb_get_guidelines, kb_get_health_effects). There is no mention of alternatives or exclusions, which is a significant gap given the number of similar kb tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

kb_get_practical_advice (Grade A)

Generic protective-action guidance for a category of situation (NOT keyed to an individual user's context).

For personalised advice that takes the user's specific health situation into account (asthma, pregnancy, gas cooker, tube commute, indoor sources), prefer the Clara MCP server's contextual_advice tool — it composes Hermes live readings with personal context to give an answer keyed to this user, now. Use this KB tool only as a fallback or when Clara is not available.

Args: situation: One of "high_pollution_day", "commuting", "exercise", "school_run", "indoor_air", "planning_objection", "pregnancy", "child_asthma".

Returns practical advice document (markdown).

Parameters (JSON Schema):
- situation: required

Output Schema (JSON Schema):
- result: required
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It clarifies the generic vs. personalized distinction but does not explicitly state side effects or read-only nature. It mentions return format (markdown) but could be more thorough on behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with purpose and usage guidance, followed by argument description. Every sentence adds value; no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given output schema exists and tool is simple retrieval, description covers purpose, usage, argument, and return type (markdown). Complete for this complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 0% coverage (no description for 'situation'), but the description lists all valid values explicitly: 'high_pollution_day', 'commuting', etc., adding crucial meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it provides generic protective-action guidance for a category of situation, and distinguishes from personalized sibling tool (contextual_advice on Clara server). Verb+resource is specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says to use this tool as a fallback or when Clara is not available, and recommends the contextual_advice alternative when personalization is needed.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_monitors (Grade A)

List air quality monitoring sites near a location, with context.

Returns a 'summary' explaining how many monitors were found, their operational status, and what each monitor type represents. Present the summary to users first. Also returns a 'monitors' list with full metadata.

Args:
- location: Postcode, place name, or "lat,lon".
- radius_km: Search radius in kilometres (default 5.0).
- sources: Optional filter, e.g. ["AURN", "BREATHE_LONDON"].

Parameters (JSON Schema):
- sources: optional
- location: required
- radius_km: optional
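
An illustrative `arguments` object for listing nearby monitors; the values are examples only.

```python
# Illustrative arguments for list_monitors.
monitors_args = {
    "location": "SE1 7PB",
    "radius_km": 5.0,     # search radius in km (default 5.0)
    "sources": ["AURN"],  # optional filter
}
```
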
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description clearly outlines the return structure (summary string and monitors list) and suggests presenting the summary first. However, it does not disclose behaviors like pagination, rate limits, or error handling. With no annotations, it handles the burden reasonably but not fully.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: two introductory sentences, a bullet indicating output structure, and three parameter lines. Every sentence adds value with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers purpose, input parameters, and output structure adequately for a listing tool. It lacks any mention of error scenarios or data source availability, but for its complexity, it is nearly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

For all three parameters, the description provides clear, helpful semantics beyond the schema. Location types, radius unit/default, and source examples are given. Given 0% schema coverage, this fully compensates.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifically states 'List air quality monitoring sites near a location, with context.' This clearly identifies the action (list) and resource (monitoring sites), distinguishing it from sibling tools that assess or analyze data rather than list sites.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description implies usage for listing monitors and returning a summary, it provides no explicit guidance on when to use this tool versus siblings like get_current_aq or assess_location_aq. No when-not or alternative recommendations are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

regulatory_stats (Grade A)

Get LAQM Annual Status Report-style statistics for a location.

Returns annual means, percentiles, exceedance counts, data capture percentages, and compliance assessment against UK legal limits and WHO guidelines.

Args:
- location: Postcode, place name, or "lat,lon".
- year: Calendar year to report on (default: most recent complete year).
- pollutants: Optional filter, e.g. ["NO2", "PM2.5"]. Default: all available.

Parameters (JSON Schema):
- year: optional
- location: required
- pollutants: optional
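
An illustrative `arguments` object; the year shown is an example, and omitting it falls back to the most recent complete year per the description.

```python
# Illustrative arguments for regulatory_stats.
stats_args = {
    "location": "SE1 7PB",
    "year": 2024,                   # omit to use the most recent complete year
    "pollutants": ["NO2", "PM10"],  # optional; default is all available
}
```
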
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full behavioral burden. It discloses that the tool returns annual statistics, exceedances, and compliance metrics, but it does not mention whether it is read-only, data sources, update frequency, or any side effects. The description is functional but could be more transparent about behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured, with a clear opening line stating purpose followed by a bullet-like list of parameters. Every sentence adds value, and there is no redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has three parameters, no output schema, and no annotations. The description explains all parameters and the type of statistics returned. However, it does not describe the return format (e.g., JSON structure), which would be helpful given the lack of an output schema. Still, it covers the essential context for using the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage, but the description provides clear semantics for all three parameters: location (postcode, place name, or lat,lon), year (calendar year, default most recent), and pollutants (optional filter). This adds significant value beyond the schema titles and types.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states that it retrieves LAQM Annual Status Report-style statistics for a location, including annual means, percentiles, exceedances, data capture, and compliance assessment. It clearly distinguishes itself from sibling tools like get_current_aq (current data) and trend_analysis (trends), using a specific verb and resource.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains what the tool returns and its parameters, but it does not explicitly state when to use this tool versus alternatives. There is no guidance on when not to use it or prerequisites. While the sibling tools imply different use cases, explicit usage context is missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

time_patterns (Grade A)

Analyse when pollution is highest — hour of day, day of week, and month.

Returns temporal profiles showing typical patterns. Useful for advising on best times for outdoor exercise, school runs, or commuting.

Args:
- location: Postcode, place name, or "lat,lon".
- pollutant: Pollutant to analyse — "NO2", "PM2.5", "PM10", "O3" (default "NO2").
- period: Time window — "last_month", "last_3_months", "last_6_months", or "last_year" (default).

Parameters (JSON Schema):
- period: optional, default "last_year"
- location: required
- pollutant: optional, default "NO2"
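
An illustrative `arguments` object using the enumerated pollutant and period values from the description.

```python
# Illustrative arguments for time_patterns.
patterns_args = {
    "location": "SE1 7PB",
    "pollutant": "PM2.5",       # "NO2" (default), "PM2.5", "PM10", or "O3"
    "period": "last_3_months",  # "last_month", "last_3_months", "last_6_months", or "last_year" (default)
}
```
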
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description states it returns temporal profiles but does not disclose behavioral traits such as data source, freshness, or any limitations like location support scope.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with a clear purpose statement followed by a structured Args section. Every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 3 parameters and no annotations or output schema, the description covers the main purpose, parameter choices, and use cases. It could mention return type (e.g., JSON or plot) but is otherwise complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, but the description explains all three parameters in detail: location format, pollutant options with default, and period options with default, adding significant value beyond the schema names.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool analyzes when pollution is highest by hour, day, and month, and returns temporal profiles. It differentiates from sibling tools like trend_analysis and chart_aq_trend by focusing on periodic patterns for practical advice.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage (best times for outdoor activities) but does not explicitly mention when not to use or list alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

trend_analysis (Grade A)

Analyse the long-term trend in a pollutant near a location.

Uses Theil-Sen slope estimation with Mann-Kendall significance testing to determine whether air quality is improving, worsening, or stable. Robust to outliers and missing data.

Returns a 'summary' with plain-English trend description and statistical details. Present the summary to users first.

Args:
- location: Postcode, place name, or "lat,lon".
- pollutant: Pollutant to analyse — "NO2", "PM2.5", "PM10", "O3" (default "NO2").
- years: Number of years of data to analyse (default 5, range 2–5). Requests outside this range are clamped; the response includes metadata.years_clamped and a note in summary when so.

Parameters (JSON Schema):
- years: optional
- location: required
- pollutant: optional, default "NO2"
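
An illustrative `arguments` object; per the description, a years value outside 2–5 would be clamped and flagged in the response metadata.

```python
# Illustrative arguments for trend_analysis.
trend_args = {
    "location": "SE1 7PB",
    "pollutant": "NO2",  # "NO2" (default), "PM2.5", "PM10", or "O3"
    "years": 5,          # default 5; values outside 2-5 are clamped
}
```
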
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, description discloses statistical methods (Theil-Sen, Mann-Kendall), robustness, and clamping behavior of the 'years' parameter. It also describes the return summary structure. Lacks details on API limits or authentication, but sufficient for core behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Concise and well-structured: begins with purpose, then methodology, output summary instruction, and parameter details. No unnecessary sentences.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Reasonably complete given lack of output schema. Explains output includes summary with plain-English trend and statistical details. Could be enhanced with example output or error scenarios, but adequate for agent invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Adds significant meaning beyond input schema: explains location format (postcode, place name, lat,lon), lists pollutant options with defaults, and clarifies year range with clamping behavior and metadata notes. Schema coverage is 0%, so full burden carried by description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool analyzes long-term trends using specific statistical methods, with a clear verb and resource. Distinguishes from sibling tools like 'assess_location_aq' and 'chart_aq_trend' by specifying trend analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage for trend analysis but lacks explicit guidance on when to use vs alternatives, such as 'chart_aq_trend' for visualization or 'assess_location_aq' for overall quality. No when-not or prerequisites mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
