NWS Weather Alerts
Server Details
Active weather alerts and warnings from the National Weather Service
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
3 tools

get_alert_detail
Get full details of a specific NWS weather alert by its ID.
Returns the complete alert including description, instructions, and
affected areas. Use alert IDs from get_alerts results.
Args:
alert_id: The full NWS alert ID (e.g. 'urn:oid:2.49.0.1.840.0.xxx').

| Name | Required | Description | Default |
|---|---|---|---|
| alert_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
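Since the input schema carries no descriptions, a client may want to sanity-check the ID format before calling the tool. The helper below is a hypothetical client-side sketch (not part of this server); it only checks the `urn:oid:2.49.0.1.840.0.` prefix shown in the tool's example, and the server remains the source of truth for validity.

```python
# Hypothetical pre-flight check for get_alert_detail arguments.
# The prefix comes from the example ID in the tool description.
ALERT_ID_PREFIX = "urn:oid:2.49.0.1.840.0."

def looks_like_alert_id(alert_id: str) -> bool:
    """Cheap format check: full NWS alert URN with a non-empty suffix."""
    return alert_id.startswith(ALERT_ID_PREFIX) and len(alert_id) > len(ALERT_ID_PREFIX)
```

A caller would typically take IDs straight from `get_alerts` results, in which case this check simply guards against truncated or hand-typed values.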
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses what content is returned (description, instructions, affected areas) beyond what the output schema alone would indicate, with no annotation contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise with three information-dense sentences front-loaded with purpose; Args section efficiently delivers critical parameter semantics without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter tool with output schema; covers the dependency on get_alerts and previews return content, though omits error handling scenarios.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Completely compensates for 0% schema description coverage by providing the alert_id format and a concrete OID example that clarifies the expected input structure.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states specific action (get full details), resource (NWS weather alert), and key requirement (by ID), while explicitly distinguishing from sibling get_alerts via the cross-reference instruction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states to use alert IDs from get_alerts results, establishing clear workflow context, though lacks explicit 'when not to use' or contrast with get_forecast.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_alerts
Get active NWS weather alerts for a US state.
Returns current weather alerts including watches, warnings, and advisories
issued by the National Weather Service.
Args:
state: Two-letter US state abbreviation (e.g. 'CA', 'TX', 'NY').
severity: Filter by severity level: 'Extreme', 'Severe', 'Moderate', or 'Minor'.
event: Filter by event type (e.g. 'Tornado Warning', 'Flash Flood Watch').
limit: Maximum number of alerts to return (default 25, max 500).

| Name | Required | Description | Default |
|---|---|---|---|
| state | Yes | | |
| severity | No | | |
| event | No | | |
| limit | No | | 25 |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
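The documented constraints (two-letter state, a fixed severity set, limit default 25 and max 500) can be enforced on the client side before the call. This is a hypothetical helper, not server code; the function name and validation rules are assumptions derived from the Args section above.

```python
# Assemble and validate get_alerts arguments per the documented constraints.
VALID_SEVERITIES = {"Extreme", "Severe", "Moderate", "Minor"}

def build_get_alerts_args(state, severity=None, event=None, limit=25):
    # state: two-letter uppercase US abbreviation, e.g. 'CA', 'TX', 'NY'
    if not (len(state) == 2 and state.isalpha() and state.isupper()):
        raise ValueError("state must be a two-letter US abbreviation, e.g. 'CA'")
    if severity is not None and severity not in VALID_SEVERITIES:
        raise ValueError(f"severity must be one of {sorted(VALID_SEVERITIES)}")
    # Clamp limit into the documented range (default 25, max 500).
    args = {"state": state, "limit": min(max(int(limit), 1), 500)}
    if severity:
        args["severity"] = severity
    if event:
        args["event"] = event
    return args
```

For example, `build_get_alerts_args("TX", severity="Severe", limit=1000)` clamps the limit to 500 rather than sending an out-of-range value.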
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses 'active' filter (non-historical), alert types returned, and limit constraints (default 25, max 500) without annotations present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded purpose, compact Args section with no redundancy; docstring format is scannable and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for 4-parameter tool; acknowledges output generally (sufficient given output schema exists) and covers all inputs thoroughly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Perfectly compensates for 0% schema coverage: provides formats (e.g., 'CA'), enums ('Extreme', 'Severe'), examples ('Tornado Warning'), and constraints for every parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('Get active') and resource ('NWS weather alerts'), with 'for a US state' distinguishing it from get_alert_detail (likely single alert) though lacks explicit contrast.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through 'active' and return description (watches/warnings), but lacks explicit when-to-use vs get_forecast or get_alert_detail.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_forecast
Get the weather forecast for a specific latitude/longitude location.
Returns a multi-day forecast from the National Weather Service. Works for
any location within the United States and its territories.
Args:
latitude: Latitude of the location (e.g. 38.8894 for Washington DC).
longitude: Longitude of the location (e.g. -77.0352 for Washington DC).

| Name | Required | Description | Default |
|---|---|---|---|
| latitude | Yes | | |
| longitude | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses data source (National Weather Service) and temporal scope (multi-day) despite no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with purpose first, then constraints, then parameter details; no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Sufficient for tool complexity; mentions multi-day output and geographic limits without needing to detail return values (output schema exists).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Provides concrete examples (38.8894 for DC) compensating for 0% schema description coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb (Get) + resource (weather forecast) + constraint (latitude/longitude), clearly distinguishes from sibling alert tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides geographic constraint (US only) but lacks explicit comparison to sibling tools (forecast vs alerts).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
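Before publishing, the file can be sanity-checked locally. The sketch below is a hypothetical check written against only the two keys shown in the snippet above (`$schema` and `maintainers`); Glama's actual schema may require more.

```python
import json

def check_glama_json(text: str) -> list[str]:
    """Return a list of problems found in a candidate glama.json body."""
    try:
        doc = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    problems = []
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        problems.append("maintainers must be a non-empty list")
    else:
        for m in maintainers:
            # Each maintainer entry needs an email matching your Glama account.
            if not isinstance(m, dict) or "@" not in str(m.get("email", "")):
                problems.append("each maintainer needs an 'email' field")
    return problems

sample = '{"$schema": "https://glama.ai/mcp/schemas/connector.json", "maintainers": [{"email": "you@example.com"}]}'
print(check_glama_json(sample))  # → []
```

An empty list means the file at least parses and carries a maintainer email; verification against your Glama account still happens on Glama's side after publishing.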
For users:
- Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management — store and rotate API keys and OAuth tokens in one place
- Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:
- Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!