tyson-swetnam

EPA Air Quality System (AQS) MCP Server

aqs_sample_data_by_county

Retrieve air quality sample measurements from all monitoring sites in a specified U.S. county. Use this tool to access raw pollution data for analysis by providing parameter codes, date ranges, and location identifiers.

Instructions

Get raw sample data for all monitoring sites in a county. WARNING: Sample data can be very large. Strongly recommend limiting date ranges to one week or one month. Returns individual sample measurements from all sites in the specified county.
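To follow the recommendation above, an agent can compute a one-week window programmatically. The sketch below is a hypothetical helper (not part of the server) that also clamps the end date so `bdate` and `edate` stay in the same calendar year, as the API requires:

```python
from datetime import date, timedelta

def one_week_window(start: date) -> tuple[str, str]:
    """Return (bdate, edate) in YYYYMMDD format spanning one week,
    clamped so both dates fall in start's calendar year."""
    end = start + timedelta(days=6)
    # The AQS API requires bdate and edate in the same calendar year.
    if end.year != start.year:
        end = date(start.year, 12, 31)
    return start.strftime("%Y%m%d"), end.strftime("%Y%m%d")

bdate, edate = one_week_window(date(2023, 6, 1))
print(bdate, edate)  # 20230601 20230607
```

A request near year's end (e.g. starting 2023-12-30) is shortened to end on 2023-12-31 rather than crossing into the next year.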

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| email | No | Email address for API authentication. Optional if AQS_EMAIL environment variable is set. | |
| key | No | API key for authentication. Optional if AQS_API_KEY environment variable is set. | |
| param | Yes | Parameter code (e.g., 44201 for Ozone, 88101 for PM2.5, 42401 for SO2, 42101 for CO, 42602 for NO2, 81102 for PM10). | |
| bdate | Yes | Begin date in YYYYMMDD format. Must be in the same calendar year as edate. | |
| edate | Yes | End date in YYYYMMDD format. Must be in the same calendar year as bdate. | |
| state | Yes | Two-digit FIPS state code (e.g., "06" for California, "36" for New York). | |
| county | Yes | Three-digit FIPS county code (e.g., "037" for Los Angeles County). | |
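The inputs above map directly onto query parameters of the public EPA AQS REST API. The sketch below is an assumption about that mapping (the endpoint path `sampleData/byCounty` and parameter names come from the public AQS API documentation; the MCP server's internals may differ) and shows how the same-calendar-year constraint can be enforced before sending a request:

```python
from urllib.parse import urlencode

# Assumed underlying endpoint; the MCP server may construct this differently.
BASE = "https://aqs.epa.gov/data/api/sampleData/byCounty"

def build_request_url(email: str, key: str, param: str,
                      bdate: str, edate: str,
                      state: str, county: str) -> str:
    """Build the AQS sample-data query URL, validating the date constraint."""
    if bdate[:4] != edate[:4]:
        raise ValueError("bdate and edate must be in the same calendar year")
    query = urlencode({
        "email": email, "key": key, "param": param,
        "bdate": bdate, "edate": edate,
        "state": state, "county": county,
    })
    return f"{BASE}?{query}"

# Ozone (44201) for Los Angeles County, CA, limited to one week in June 2023:
url = build_request_url("user@example.com", "demo-key", "44201",
                        "20230601", "20230607", "06", "037")
```

Note that the FIPS codes are passed as zero-padded strings ("06", "037"), not integers, so leading zeros are preserved.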
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing critical behavioral traits: the warning about potentially very large data returns, the recommendation to limit date ranges for practical use, and the authentication context (though authentication details are also in the schema). It doesn't mention rate limits or error handling, but it covers the most important operational consideration.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured with three sentences: purpose statement, critical warning with usage recommendation, and clarification of return type. Every sentence earns its place with no wasted words, and the warning is appropriately front-loaded for user awareness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a data retrieval tool with no annotations and no output schema, the description provides excellent context about what the tool returns ("individual sample measurements from all sites") and critical operational guidance. The main gap is the lack of information about the return format and structure, but given the complexity level and schema coverage, this is reasonably complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so all seven parameters are already documented thoroughly in the schema. The description adds no parameter information beyond what the schema provides, which maintains the baseline score of 3: adequate, but not enhanced, parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ("Get raw sample data") and resource ("for all monitoring sites in a county"), making the purpose specific and actionable. It distinguishes itself from siblings like aqs_annual_summary_by_county by emphasizing "raw sample data" and "individual sample measurements" rather than aggregated summaries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool, with the WARNING about data size and the recommendation to limit date ranges to one week or one month. It implicitly distinguishes itself from summary tools by specifying that it returns "individual sample measurements" rather than aggregated data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
