
EPA Air Quality System (AQS) MCP Server

aqs_daily_summary_by_site

Retrieve daily air quality summaries for a specific monitoring site, including pollutant levels, AQI values, and statistical metrics to analyze pollution trends over time.

Instructions

Get daily summary air quality data for a specific monitoring site. Daily summaries include arithmetic mean, maximum values, observation counts, and AQI values for each day. Requires state FIPS code (2-digit), county FIPS code (3-digit), and site number (4-digit).
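Assuming this tool maps onto the EPA AQS Data API's `dailyData/bySite` endpoint (an assumption; the server's actual request path may differ), a minimal Python sketch of the underlying request URL looks like this. The base URL and example credentials are illustrative, not confirmed by the tool itself:

```python
from urllib.parse import urlencode

# Hypothetical base URL; the MCP server may route the request differently.
BASE_URL = "https://aqs.epa.gov/data/api/dailyData/bySite"

def build_daily_summary_url(email, key, param, bdate, edate, state, county, site):
    """Assemble the query URL; all values are passed as strings."""
    query = urlencode({
        "email": email, "key": key, "param": param,
        "bdate": bdate, "edate": edate,
        "state": state, "county": county, "site": site,
    })
    return f"{BASE_URL}?{query}"

url = build_daily_summary_url(
    email="you@example.com", key="your-aqs-key",
    param="44201",                      # ozone
    bdate="20230101", edate="20230131",
    state="06", county="037", site="0011",
)
```

Note that `state`, `county`, and `site` are zero-padded strings, not integers; dropping the leading zeros would change the codes.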

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| email | No | Email address registered with the EPA AQS API. Optional if the AQS_EMAIL environment variable is set. | |
| key | No | API key from EPA AQS. Optional if the AQS_API_KEY environment variable is set. | |
| param | Yes | Parameter code (e.g., 44201 for ozone, 88101 for PM2.5, 42401 for SO2, 42101 for CO, 42602 for NO2). Multiple codes can be comma-separated (max 5). | |
| bdate | Yes | Begin date in YYYYMMDD format (e.g., 20230101). | |
| edate | Yes | End date in YYYYMMDD format (e.g., 20230131). Must be in the same calendar year as bdate. | |
| state | Yes | Two-digit state FIPS code (e.g., 06 for California, 36 for New York). | |
| county | Yes | Three-digit county FIPS code (e.g., 037 for Los Angeles County). | |
| site | Yes | Four-digit site number within the county. | |
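The same-calendar-year rule for `bdate`/`edate` and the fixed-width FIPS codes are the constraints most likely to trip up a caller. A small, hypothetical pre-flight validator sketching those rules (the bdate-before-edate check is an added assumption, not stated in the schema):

```python
from datetime import datetime

def validate_inputs(param, bdate, edate, state, county, site):
    """Check the documented constraints before issuing a request.

    Raises ValueError on the first violation; returns None on success.
    """
    if len(param.split(",")) > 5:
        raise ValueError("at most 5 comma-separated parameter codes")
    begin = datetime.strptime(bdate, "%Y%m%d")
    end = datetime.strptime(edate, "%Y%m%d")
    if begin.year != end.year:
        raise ValueError("bdate and edate must fall in the same calendar year")
    if begin > end:
        # Assumed constraint: a reversed date range is presumably invalid.
        raise ValueError("bdate must not be after edate")
    for name, value, width in (("state", state, 2),
                               ("county", county, 3),
                               ("site", site, 4)):
        if len(value) != width or not value.isdigit():
            raise ValueError(f"{name} must be a {width}-digit zero-padded code")

# Passes: ozone + PM2.5 at a Los Angeles County site in January 2023.
validate_inputs("44201,88101", "20230101", "20230131", "06", "037", "0011")
```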
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that authentication (email and key) is required but can be set via environment variables, which is useful context. However, it does not mention rate limits, pagination, error handling, or the format of returned data (e.g., JSON structure). The description adds some behavioral context but lacks details on operational traits like performance or output specifics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by details on data included and requirements. It uses three concise sentences with zero waste, efficiently covering key aspects without redundancy. Each sentence earns its place by adding necessary context or constraints.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (8 parameters, no annotations, no output schema), the description is reasonably complete. It explains what the tool does, what data it returns, and key requirements. However, without an output schema, it does not describe the return format (e.g., JSON structure, fields), which could be a gap for an agent invoking the tool. The high schema coverage mitigates some of this, but more output context would enhance completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 8 parameters thoroughly. The description adds little beyond the schema: it mentions the state, county, and site FIPS code requirements, but provides no additional syntax, examples, or constraints not already in the schema descriptions. A baseline score of 3 is appropriate, as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and the resource 'daily summary air quality data for a specific monitoring site', specifying the content (arithmetic mean, maximum values, observation counts, AQI values). It distinguishes from siblings by focusing on 'daily summary' and 'by site' rather than annual/quarterly summaries or other geographic aggregations like by box, CBSA, county, or state.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: for daily summary data at a specific site. It implies alternatives by specifying 'daily summary' (vs. annual, quarterly, or sample data) and 'by site' (vs. by box, CBSA, county, or state), but does not explicitly name when-not scenarios or specific sibling tools. The requirement for FIPS codes and a site number further helps an agent identify the correct use case.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
