
EPA Air Quality System (AQS) MCP Server

aqs_annual_summary_by_cbsa

Retrieve annual air quality statistics for metropolitan areas by specifying pollutant codes, a date range, and a CBSA code, to analyze yearly pollution trends and compliance metrics.

Instructions

Get annual summary data for all monitoring sites within a Core Based Statistical Area (CBSA), which represents metropolitan and micropolitan statistical areas. Annual summaries include yearly statistics such as arithmetic mean, standard deviation, maximum values, percentiles (10th through 99th), observation counts, data completeness metrics, and exceedance counts for primary and secondary NAAQS standards.
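
Concretely, this tool appears to front EPA's public AQS Data API. Below is a minimal sketch of the kind of request it likely issues, assuming the standard annualData/byCBSA endpoint; the credentials, values, and response-field names are illustrative, not confirmed from the server's source:

```python
import requests

# Sketch of the EPA AQS Data API request this tool presumably wraps.
# Endpoint and field names follow the public AQS API conventions;
# credentials and values are placeholders.
AQS_BASE = "https://aqs.epa.gov/data/api"

params = {
    "email": "you@example.com",  # email registered with the AQS API (placeholder)
    "key": "YOUR_AQS_KEY",       # AQS API key (placeholder)
    "param": "44201",            # pollutant code: Ozone
    "bdate": "20230101",         # begin date, YYYYMMDD
    "edate": "20231231",         # end date, same calendar year as bdate
    "cbsa": "31080",             # Los Angeles-Long Beach-Anaheim
}

resp = requests.get(f"{AQS_BASE}/annualData/byCBSA", params=params, timeout=30)
resp.raise_for_status()
payload = resp.json()

# AQS responses split into a "Header" (request status) and "Data" (rows);
# each annual-summary row carries fields such as year and arithmetic_mean.
for row in payload.get("Data", [])[:3]:
    print(row.get("year"), row.get("parameter"), row.get("arithmetic_mean"))
```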

Input Schema

| Name | Required | Description |
| --- | --- | --- |
| email | No | Email address registered with the AQS API. If not provided, uses the AQS_EMAIL environment variable. |
| key | No | API key for AQS access. If not provided, uses the AQS_API_KEY environment variable. |
| param | Yes | Parameter code for the pollutant (e.g., "44201" for Ozone, "88101" for PM2.5, "42401" for SO2, "42101" for CO, "42602" for NO2, "81102" for PM10). Up to 5 comma-separated codes allowed. |
| bdate | Yes | Begin date in YYYYMMDD format. Must be in the same calendar year as edate. |
| edate | Yes | End date in YYYYMMDD format. Must be in the same calendar year as bdate. |
| cbsa | Yes | Core Based Statistical Area code (e.g., "31080" for Los Angeles-Long Beach-Anaheim). |
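
For illustration, a plausible argument set for this tool, built only from values the schema itself documents:

```python
# Illustrative arguments for aqs_annual_summary_by_cbsa. email and key are
# omitted so the server falls back to the AQS_EMAIL / AQS_API_KEY
# environment variables, as the schema describes.
arguments = {
    "param": "88101,44201",  # PM2.5 and Ozone (up to 5 comma-separated codes)
    "bdate": "20220101",     # begin date, YYYYMMDD
    "edate": "20221231",     # end date, same calendar year as bdate
    "cbsa": "31080",         # Los Angeles-Long Beach-Anaheim
}
```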
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the tool's behavior by describing what data is returned (annual statistics, completeness metrics, exceedance counts) and the geographic scope (a CBSA covering metropolitan and micropolitan areas). However, it lacks information about authentication requirements (implied by the email/key parameters but never stated), rate limits, error conditions, and response format details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
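
For what it's worth, the environment-variable fallback the schema describes would look roughly like this on the server side; this is a sketch of the documented behavior, not the actual implementation:

```python
import os

def resolve_credentials(email: str | None = None,
                        key: str | None = None) -> tuple[str, str]:
    """Resolve AQS credentials as the schema describes: explicit arguments
    win, otherwise fall back to AQS_EMAIL / AQS_API_KEY."""
    email = email or os.environ.get("AQS_EMAIL")
    key = key or os.environ.get("AQS_API_KEY")
    if not email or not key:
        raise ValueError("AQS credentials missing: pass email/key or set "
                         "AQS_EMAIL and AQS_API_KEY")
    return email, key
```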

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the core purpose in the first sentence. The second sentence adds valuable detail about what the summary includes, though its dense run of statistical terms could be streamlined.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a data-retrieval tool with 6 parameters (4 required), 100% schema coverage, and no output schema, the description provides adequate context about what data is returned and the geographic scope. However, it lacks information about authentication (implied by the email/key parameters), rate limits, pagination, and error handling. With no annotations, the description should do more to compensate, but it only partially addresses behavioral aspects.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 6 parameters thoroughly. The description adds no parameter-specific information beyond what the schema provides: it mentions a 'parameter code for the pollutant' in general terms but offers no further context about the param field or the other parameters. A baseline score of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
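
The constraints the schema does state — at most five comma-separated parameter codes, dates in YYYYMMDD format, and bdate/edate within the same calendar year — are cheap to pre-check before calling the tool. A hypothetical client-side validator:

```python
def validate_args(param: str, bdate: str, edate: str) -> None:
    """Pre-flight checks mirroring the constraints stated in the schema."""
    codes = [c.strip() for c in param.split(",")]
    if not 1 <= len(codes) <= 5:
        raise ValueError("param takes 1 to 5 comma-separated codes")
    if not all(c.isdigit() and len(c) == 5 for c in codes):
        raise ValueError("AQS parameter codes are 5-digit strings, e.g. '44201'")
    for name, d in (("bdate", bdate), ("edate", edate)):
        if len(d) != 8 or not d.isdigit():
            raise ValueError(f"{name} must be in YYYYMMDD format")
    if bdate[:4] != edate[:4]:
        raise ValueError("bdate and edate must fall in the same calendar year")
```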

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and the resource ('annual summary data for all monitoring sites within a Core Based Statistical Area'), and it distinguishes itself from sibling tools by specifying the geographic scope (CBSA) and the temporal granularity (annual rather than daily or quarterly). It also spells out what the summary includes (statistics, completeness metrics, exceedance counts).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly indicates when to use this tool by specifying 'annual summary data' and 'within a Core Based Statistical Area', which differentiates it from the daily and quarterly summaries and from the other geographic scopes (bounding box, county, site, state). However, it does not explicitly state when NOT to use it, nor does it name specific alternatives among the many sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

