NOAA Climate Data
Server Details
Historical climate data, temperatures, precipitation, and normals
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 3 of 3 tools scored.
Each tool has a clearly distinct purpose: find_stations locates stations, get_climate_data retrieves historical observations, and get_climate_normals provides 30-year averages. There is no overlap in functionality, and the descriptions explicitly guide users on how to chain them together (e.g., using station IDs from find_stations with the other tools).
All tool names follow a consistent verb_noun pattern with snake_case: find_stations, get_climate_data, and get_climate_normals. The verbs 'find' and 'get' are appropriately chosen for their actions, and the naming is predictable and readable throughout the set.
With 3 tools, the count is reasonable for a climate data server, covering core workflows of station lookup, data retrieval, and normals. However, it feels slightly thin as it lacks tools for broader queries (e.g., by region without FIPS) or data manipulation, but each tool earns its place without redundancy.
The tools provide a solid foundation for accessing NOAA climate data, with clear progression from station discovery to data and normals retrieval. Minor gaps exist, such as no direct tools for metadata queries (e.g., dataset lists) or spatial searches beyond state/county, but agents can work around these using the provided tools effectively.
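The station-discovery-to-retrieval progression described above can be sketched as a short workflow. This is a hypothetical illustration, not server code: `call_tool(name, arguments)` is an assumed helper that forwards an MCP tool call and returns the parsed result, and the station ID and data type codes are taken from the examples in the tool descriptions below.

```python
# Sketch of the three-step workflow: discover stations, pull observations,
# then fetch 30-year normals as a baseline. `call_tool` is a hypothetical
# MCP client helper, not part of this server.

def climate_summary(call_tool, state, start_date, end_date):
    # 1. Discover stations in the area.
    stations = call_tool("find_stations", {"state": state, "limit": 5})

    # 2. Pull historical observations for the first station found.
    station_id = stations[0]["id"]
    observations = call_tool("get_climate_data", {
        "station_id": station_id,
        "start_date": start_date,
        "end_date": end_date,
        "data_types": "TMAX,TMIN,PRCP",
    })

    # 3. Fetch 30-year normals for the same station for comparison.
    normals = call_tool("get_climate_normals", {"station_id": station_id})
    return observations, normals
```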
Available Tools
3 tools

find_stations
Find NOAA weather stations in an area.
Returns a list of weather stations with their IDs, names, coordinates,
and active date ranges. Use the station IDs with get_climate_data.
Args:
state: Two-letter US state abbreviation (e.g. 'CA', 'NY').
county_fips: Five-digit county FIPS code (e.g. '36061' for Manhattan).
dataset: Dataset ID to filter stations that have data in this dataset.
Default is 'GHCND' (daily summaries).
limit: Maximum number of stations to return (default 25, max 1000).

| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| state | No | | |
| dataset | No | | GHCND |
| county_fips | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
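A call to find_stations travels over MCP's JSON-RPC `tools/call` method. The sketch below builds a request body using the parameter examples from the description above (California stations with daily summaries); the envelope shape follows the MCP specification, and the `id` value is arbitrary.

```python
import json

# Example MCP "tools/call" request for find_stations. Argument values
# come from the tool's own documentation ('CA', GHCND default, limit 25).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "find_stations",
        "arguments": {
            "state": "CA",
            "dataset": "GHCND",  # daily summaries (the default)
            "limit": 25,
        },
    },
}
print(json.dumps(request, indent=2))
```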
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Describes return values (IDs, names, coordinates, date ranges) and dataset default, but lacks disclosure of error behavior (e.g., empty results), rate limits, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured docstring format with purpose front-loaded, followed by return value, usage guidance, and parameter details; no redundant or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero schema descriptions and output schema existence, coverage is strong; minor gap regarding filter logic (AND vs OR when both state and county_fips provided) and required parameter guidance.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Comprehensive compensation for 0% schema description coverage via detailed Args section with formats, examples ('CA', '36061'), and default value explanations for all 4 parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Find') and resource ('NOAA weather stations') with explicit differentiation from siblings via 'Use the station IDs with get_climate_data', though 'in an area' is slightly vague until Args section clarifies geographic filters.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states the workflow relationship to sibling tool ('Use the station IDs with get_climate_data'), clarifying this is a discovery step preceding data retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_climate_data
Get climate observations from NOAA Climate Data Online.
Returns historical weather measurements such as temperature, precipitation,
and snowfall. You must provide either a station_id or a FIPS code to
identify the location, and a date range.
Args:
station_id: NOAA station identifier (e.g. 'GHCND:USW00094728' for Central Park).
fips: FIPS code for county or state (e.g. '36' for New York state, '36061' for Manhattan).
dataset: Dataset ID. Common values: 'GHCND' (daily summaries), 'GSOM' (monthly),
'GSOY' (annual), 'NORMAL_DLY' (daily normals). Default is 'GHCND'.
start_date: Start date in YYYY-MM-DD format. Required for most datasets.
end_date: End date in YYYY-MM-DD format. Required for most datasets.
data_types: Comma-separated data type IDs to filter. Common types:
TMAX (max temp), TMIN (min temp), TAVG (avg temp),
PRCP (precipitation), SNOW (snowfall), SNWD (snow depth),
AWND (avg wind speed). If omitted, all available types are returned.
limit: Maximum number of records to return (default 100, max 1000).

| Name | Required | Description | Default |
|---|---|---|---|
| fips | No | | |
| limit | No | | |
| dataset | No | | GHCND |
| end_date | No | | |
| data_types | No | | |
| start_date | No | | |
| station_id | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
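The description states two constraints an agent must satisfy: either `station_id` or `fips` must identify the location, and dates use `YYYY-MM-DD`. A minimal client-side check of those constraints might look like this; it is a sketch, and the server performs its own validation.

```python
from datetime import date

def validate_climate_args(args: dict) -> None:
    """Check get_climate_data arguments against the documented constraints."""
    # Location: either a station ID or a FIPS code is required.
    if not (args.get("station_id") or args.get("fips")):
        raise ValueError("provide either station_id or fips")
    # Dates: must parse as YYYY-MM-DD when present.
    for key in ("start_date", "end_date"):
        if key in args:
            date.fromisoformat(args[key])  # raises ValueError if malformed

validate_climate_args({
    "station_id": "GHCND:USW00094728",
    "start_date": "2020-01-01",
    "end_date": "2020-12-31",
    "data_types": "TMAX,TMIN,PRCP",
})
```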
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description carries burden well: explains defaults (limit=100, dataset=GHCND), max constraints (1000), omission behavior (all data types returned if unspecified), and date requirements for most datasets.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with purpose and return value, followed by constraints, then structured Args block; every sentence adds value beyond the schema title fields.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive coverage of all 7 parameters, location identification logic, dataset options, and filtering capabilities; output schema exists so return value detail is appropriately minimal.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage; description fully compensates with detailed Args section including examples (station ID format, FIPS codes), enum expansions (GHCND=daily summaries), and data type mappings (TMAX=max temp).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Get' + resource 'climate observations' from specific source 'NOAA', distinguishes from sibling 'get_climate_normals' by specifying 'observations' (measured data) vs 'normals' (averages).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides constraint logic (station_id OR FIPS required) but lacks explicit guidance on when to choose this over 'get_climate_normals' (historical measurements vs. climate averages) or 'find_stations'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_climate_normals
Get 30-year climate normal values for a NOAA weather station.
Climate normals are averages computed over the most recent 30-year period
(currently 1991-2020). They represent typical conditions for a location
and are useful for comparing current conditions to historical baselines.
Args:
station_id: NOAA station identifier (e.g. 'GHCND:USW00094728').
Use find_stations to look up station IDs.
data_types: Comma-separated normal data type IDs to filter. Common types:
DLY-TMAX-NORMAL (avg daily max temp), DLY-TMIN-NORMAL (avg daily min temp),
DLY-TAVG-NORMAL (avg daily temp), DLY-PRCP-PCTALL-GE001HI (precip probability),
MTD-PRCP-NORMAL (monthly precip), ANN-TMAX-NORMAL (annual max temp).
If omitted, all available normals are returned.

| Name | Required | Description | Default |
|---|---|---|---|
| data_types | No | | |
| station_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
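Since `data_types` is a comma-separated string rather than a list, a small helper can build the arguments from a Python list of codes. The helper name is hypothetical; the type codes come from the description above, and omitting `data_types` returns all available normals.

```python
def normals_arguments(station_id, data_types=None):
    """Build get_climate_normals arguments from a list of normal type IDs."""
    args = {"station_id": station_id}
    if data_types:
        # Join into the comma-separated form the tool expects;
        # omit the key entirely to request all available normals.
        args["data_types"] = ",".join(data_types)
    return args

args = normals_arguments(
    "GHCND:USW00094728",
    ["DLY-TMAX-NORMAL", "DLY-TMIN-NORMAL"],
)
```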
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description carries full burden by disclosing the specific 30-year period (1991-2020), explaining default return behavior when data_types is omitted, and clarifying the averaging methodology.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear Args section, front-loaded purpose statement, and efficient information density; only minor deduction for dense formatting of data_types examples.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive given context: parameters fully documented, output schema exists (so return values need no explanation), and temporal scope is disclosed; lacks only operational details like rate limits.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage; description fully compensates with detailed semantics including station_id format/example and extensive data_types enumeration with human-readable explanations of each code.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific purpose ('Get 30-year climate normal values'), explains what normals represent (1991-2020 averages), and implicitly distinguishes from siblings by noting use for 'comparing current conditions to historical baselines' vs current data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit cross-tool guidance ('Use find_stations to look up station IDs') and explains the analytical use case (historical baselines), though lacks explicit 'when not to use' exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
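Before publishing, you might sanity-check the file locally. This sketch assumes only the two fields shown in the example above; `check_glama_json` and the sample email are illustrative, and Glama's actual verification may check more.

```python
import json

def check_glama_json(text: str) -> list:
    """Return a list of problems found in a candidate glama.json document."""
    doc = json.loads(text)
    problems = []
    # The $schema URL from the example above.
    if doc.get("$schema") != "https://glama.ai/mcp/schemas/connector.json":
        problems.append("unexpected or missing $schema")
    # At least one maintainer entry with an email is expected.
    maintainers = doc.get("maintainers", [])
    if not any("@" in m.get("email", "") for m in maintainers):
        problems.append("maintainers must include an email address")
    return problems

sample = (
    '{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
    ' "maintainers": [{"email": "you@example.com"}]}'
)
print(check_glama_json(sample))  # → []
```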
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!