Glama
tyson-swetnam

EPA Air Quality System (AQS) MCP Server

aqs_sample_data_by_cbsa

Retrieve air quality sample measurements from all monitoring sites within a Core Based Statistical Area (CBSA) for specified parameters and date ranges.

Instructions

Get raw sample data for all monitoring sites in a Core Based Statistical Area (CBSA). WARNING: Sample data can be very large for metropolitan areas. Strongly recommend limiting date ranges to one week or one month. Returns individual sample measurements from all sites in the specified CBSA.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| email | No | Email address for API authentication. Optional if AQS_EMAIL environment variable is set. | |
| key | No | API key for authentication. Optional if AQS_API_KEY environment variable is set. | |
| param | Yes | Parameter code (e.g., 44201 for Ozone, 88101 for PM2.5, 42401 for SO2, 42101 for CO, 42602 for NO2, 81102 for PM10). | |
| bdate | Yes | Begin date in YYYYMMDD format. Must be in the same calendar year as edate. | |
| edate | Yes | End date in YYYYMMDD format. Must be in the same calendar year as bdate. | |
| cbsa | Yes | Core Based Statistical Area code (e.g., "31080" for Los Angeles-Long Beach-Anaheim). | |
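As a sketch of how these six inputs map onto a request, the snippet below assembles a query string for the EPA AQS REST endpoint this tool presumably wraps. The endpoint path (`sampleData/byCBSA`) and the helper name `build_aqs_url` are assumptions for illustration, not confirmed from this page.

```python
from urllib.parse import urlencode

# Assumed endpoint path for the underlying EPA AQS API; not confirmed here.
BASE = "https://aqs.epa.gov/data/api/sampleData/byCBSA"

def build_aqs_url(email: str, key: str, param: str,
                  bdate: str, edate: str, cbsa: str) -> str:
    """Assemble a request URL from the tool's six schema inputs."""
    query = urlencode({
        "email": email, "key": key, "param": param,
        "bdate": bdate, "edate": edate, "cbsa": cbsa,
    })
    return f"{BASE}?{query}"

# One week of ozone (44201) samples for Los Angeles-Long Beach-Anaheim (31080)
url = build_aqs_url("user@example.com", "demokey", "44201",
                    "20230601", "20230607", "31080")
print(url)
```

Note the example keeps the date range to a single week, in line with the tool's own warning about data volume.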
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively warns about potential data volume issues ('can be very large for metropolitan areas') and provides practical guidance on date range limitations. However, it doesn't mention authentication requirements, rate limits, or error handling, leaving some behavioral aspects uncovered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with three sentences: purpose statement, warning with recommendation, and return value clarification. Every sentence adds value: the first establishes scope, the second provides critical usage guidance, and the third distinguishes the output type. No wasted words or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a data retrieval tool with 6 parameters and no output schema, the description provides good contextual completeness. It covers the purpose, scope, data volume considerations, and output type. The main gap is the lack of output format description (structure of returned sample measurements), which would be helpful given no output schema exists.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the structured schema already documents every parameter. The description adds little parameter semantics beyond that: it mentions CBSA and date ranges in the context of the data volume warning, but offers no additional parameter meaning or usage examples beyond what the schema descriptions already provide.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
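One constraint the schema states but the description never restates is that bdate and edate must fall in the same calendar year. A minimal client-side check, assuming the hypothetical helper name `validate_dates`, might look like:

```python
from datetime import datetime

def validate_dates(bdate: str, edate: str) -> None:
    # Hypothetical pre-flight check mirroring the schema's stated constraints:
    # YYYYMMDD format, same calendar year, and begin date not after end date.
    b = datetime.strptime(bdate, "%Y%m%d")
    e = datetime.strptime(edate, "%Y%m%d")
    if b.year != e.year:
        raise ValueError("bdate and edate must fall in the same calendar year")
    if b > e:
        raise ValueError("bdate must not be after edate")

validate_dates("20230601", "20230607")  # passes silently
```

Validating before the call avoids spending an API request on a range the service will reject.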

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and resource 'raw sample data for all monitoring sites in a Core Based Statistical Area (CBSA)', specifying both the action and scope. It distinguishes from siblings like aqs_sample_data_by_site or aqs_sample_data_by_state by explicitly mentioning CBSA scope, and from summary tools by emphasizing 'raw sample data' versus aggregated summaries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance with a WARNING about data size and a strong recommendation to limit date ranges to one week or one month. It distinguishes this tool from summary tools by specifying it returns 'individual sample measurements' rather than aggregated data, helping users choose between raw sample vs summary tools in the sibling list.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
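The recommendation to limit requests to roughly a week can be operationalized by splitting a longer range into week-sized windows and issuing one call per window. A sketch, with the hypothetical helper name `week_chunks`:

```python
from datetime import date, timedelta

def week_chunks(bdate: str, edate: str):
    # Split a YYYYMMDD range into (bdate, edate) windows of at most 7 days,
    # per the tool's recommendation to keep sample-data requests small.
    start = date(int(bdate[:4]), int(bdate[4:6]), int(bdate[6:]))
    end = date(int(edate[:4]), int(edate[4:6]), int(edate[6:]))
    while start <= end:
        stop = min(start + timedelta(days=6), end)
        yield start.strftime("%Y%m%d"), stop.strftime("%Y%m%d")
        start = stop + timedelta(days=1)

# A 15-day June range becomes three windows of 7, 7, and 1 days.
print(list(week_chunks("20230601", "20230615")))
```

An agent could then call aqs_sample_data_by_cbsa once per window and concatenate the results, keeping each individual response manageable.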
