smithery-ai-national-weather-service
Server Details
Provide real-time and forecast weather information for locations in the United States using natura…
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: smithery-ai/mcp-servers
- GitHub Stars: 95
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
6 tools

find_weather_stations (Grade: A)
Find weather observation stations near a location in the United States. Useful for getting station-specific data, finding data sources, or understanding which stations provide weather data for an area. Includes ASOS, AWOS, and other automated weather stations.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of stations to return (1-20, default 10). Stations are returned ordered by distance from the specified location. | |
| location | Yes | US location as coordinates (lat,lng) in decimal degrees. Example: '40.7128,-74.0060' for New York City. Must be within US boundaries including states, territories (PR, VI, AS, GU, MP), and coastal waters. | |
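The parameter table above can be exercised locally before any network call. The sketch below is illustrative: `parse_us_location` and `build_find_stations_call` are hypothetical helper names, and the JSON-RPC envelope follows the standard MCP `tools/call` shape rather than anything this listing specifies.

```python
import json


def parse_us_location(location: str) -> tuple[float, float]:
    """Parse a 'lat,lng' decimal-degree string as documented for the
    location parameter, raising ValueError on malformed input."""
    lat_s, lng_s = location.split(",")
    lat, lng = float(lat_s), float(lng_s)
    if not (-90 <= lat <= 90 and -180 <= lng <= 180):
        raise ValueError(f"coordinates out of range: {location!r}")
    return lat, lng


def build_find_stations_call(location: str, limit: int = 10) -> str:
    """Build a JSON-RPC 2.0 tools/call request for find_weather_stations,
    clamping limit to the documented 1-20 range."""
    parse_us_location(location)  # fail fast on bad coordinates
    limit = max(1, min(20, limit))
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "find_weather_stations",
            "arguments": {"location": location, "limit": limit},
        },
    }
    return json.dumps(request)


payload = build_find_stations_call("40.7128,-74.0060", limit=5)
```

Validating client-side like this avoids a round trip for inputs the server would reject anyway, since the description gives no error-behavior guarantees.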
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must carry full behavioral disclosure. It successfully adds context about what station types are included (ASOS, AWOS, automated stations) and geographic boundaries (US including territories/coastal waters). However, it fails to confirm the read-only/safe nature of the operation or describe what data structure gets returned, which is critical information given the lack of output schema or annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of three efficiently structured sentences: the first establishes core purpose, the second provides usage context, and the third specifies station types. Every sentence earns its place without redundancy or verbosity; the information is front-loaded and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description should ideally explain what constitutes a 'station' in the results (IDs, names, distances, etc.), but it omits this. The parameter documentation is complete thanks to the comprehensive schema, and the geographic/station-type coverage is adequate. However, the missing return value documentation creates a gap for an agent trying to predict the output format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (both location and limit are well-documented in the schema). The description reinforces the location concept ('near a location') but does not add semantic detail beyond what the schema already provides. With high schema coverage, the baseline score of 3 is appropriate—the description meets but does not exceed the schema's explanatory value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Find[s] weather observation stations near a location in the United States'—specific verb (find), resource (weather observation stations), and geographic scope (US). It effectively distinguishes from siblings like get_current_weather by emphasizing station discovery/metadata rather than weather data retrieval, and specifies station types (ASOS, AWOS) to clarify scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides positive usage guidance ('Useful for getting station-specific data...') suggesting when to use it, but lacks explicit negative guidance or sibling differentiation. It does not clarify when NOT to use this (e.g., 'do not use for current weather readings, use get_current_weather instead'), leaving the agent to infer the distinction from the purpose statement alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_current_weather (Grade: A)
Get current weather conditions for a location in the United States. Perfect for 'What's the weather like in [US location]?' questions. Covers all US states, territories, and coastal waters.
| Name | Required | Description | Default |
|---|---|---|---|
| location | Yes | US location as coordinates (lat,lng) in decimal degrees. Example: '40.7128,-74.0060' for New York City. Must be within US boundaries including states, territories (PR, VI, AS, GU, MP), and coastal waters. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but only mentions geographic coverage. It omits critical details: data freshness (real-time vs cached), returned units (imperial/metric), error behavior for out-of-bounds coordinates, and what specific weather fields are returned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: purpose statement, example use case, and coverage clarification. Information is front-loaded with the core action in the first sentence.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter tool, but given the lack of output schema, the description misses the opportunity to describe what weather data fields (temperature, humidity, wind, etc.) the tool returns. Functional but incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, fully documenting the coordinate format and US boundaries constraint. The description does not add parameter-specific semantics beyond the schema, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb ('Get'), resource ('current weather conditions'), and geographic scope ('United States'). The term 'current' effectively distinguishes this tool from siblings like 'get_weather_forecast' and 'get_hourly_forecast'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a helpful example question format ('What's the weather like in...'), implying natural language queries. However, it lacks explicit guidance on when to use this versus forecast-oriented siblings or warnings about the coordinate format requirement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_hourly_forecast (Grade: A)
Get hour-by-hour weather forecast for a location in the United States. Perfect for 'What's the hourly forecast?' or 'Will it rain this afternoon in [US location]?' questions. Provides detailed hourly conditions for up to 48 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| hours | No | Number of hours to forecast (1-48, default 24). Provides temperature, conditions, and precipitation probability for each hour. | |
| location | Yes | US location as coordinates (lat,lng) in decimal degrees. Example: '40.7128,-74.0060' for New York City. Must be within US boundaries including states, territories (PR, VI, AS, GU, MP), and coastal waters. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. It effectively communicates geographic restriction (US only) and temporal limits (up to 48 hours), but omits error handling behavior, rate limits, and data freshness guarantees.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: purpose statement, usage examples, and behavioral scope. Information is front-loaded with the core action, making it immediately scannable for an agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a low-complexity tool (2 primitive parameters, no nesting), the description adequately compensates for missing output schema by stating it 'Provides detailed hourly conditions.' Coverage of US boundary constraints and 48-hour limits provides sufficient context for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage with detailed examples (coordinate format, hour ranges). Description reinforces these constraints ('up to 48 hours', 'United States') but adds no additional semantic depth beyond what the schema already provides, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with specific verb-resource combination ('Get hour-by-hour weather forecast') and clearly scopes to US locations. The 'hour-by-hour' granularity explicitly distinguishes this from siblings like get_current_weather and get_weather_forecast.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear usage context through example questions ('What's the hourly forecast?', 'Will it rain this afternoon...'), effectively signaling when to use the tool. Lacks explicit negative guidance or named sibling alternatives (e.g., 'use get_weather_forecast for daily summaries').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_local_time (Grade: B)
Get the current local time for a US location. Shows what time it is right now at the specified location.
| Name | Required | Description | Default |
|---|---|---|---|
| location | Yes | US location as coordinates (lat,lng) in decimal degrees. Example: '40.7128,-74.0060' for New York City. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden of behavioral disclosure. States it retrieves current time but fails to specify return format (ISO string? Unix timestamp? Object?), timezone handling (DST?), or error behavior for invalid coordinates. Minimal disclosure for a tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences, front-loaded with core purpose. Minor redundancy between 'current local time' and 'what time it is right now,' but otherwise efficient without waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has low complexity (1 param, simple concept) and good schema coverage, but lacks output schema. Description omits return value format which would help the agent understand what to expect from the call. Adequate but incomplete given the missing output specification.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage for the single parameter, documenting the coordinate format and providing an example. Description mentions 'location' generically but does not add semantic value beyond what the schema already provides (e.g., doesn't clarify that it must be coordinates rather than city names). Baseline 3 appropriate given schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb ('Get') + resource ('local time') + scope ('US location'). Clearly distinguishes from weather-focused siblings (get_current_weather, get_weather_forecast, etc.) by explicitly targeting time rather than meteorological data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context by restricting to 'US location' twice, establishing geographic scope. However, lacks explicit when-to-use guidance relative to weather siblings or alternative time-lookup methods, and omits exclusions (e.g., what happens with non-US coordinates).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_weather_alerts (Grade: A)
Get active weather alerts, warnings, watches, and advisories for locations in the United States. Perfect for 'Are there any weather alerts in [US location]?' questions. Covers severe weather, winter storms, heat warnings, flood alerts, and more.
| Name | Required | Description | Default |
|---|---|---|---|
| location | Yes | US location as coordinates (lat,lng) in decimal degrees OR 2-letter state/territory code. Examples: '40.7128,-74.0060' for New York City, 'CA' for California, 'PR' for Puerto Rico. Valid state codes: AL, AK, AS, AR, AZ, CA, CO, CT, DE, DC, FL, GA, GU, HI, ID, IL, IN, IA, KS, KY, LA, ME, MD, MA, MI, MN, MS, MO, MT, NE, NV, NH, NJ, NM, NY, NC, ND, OH, OK, OR, PA, PR, RI, SC, SD, TN, TX, UT, VT, VI, VA, WA, WV, WI, WY, MP, PW, FM, MH. | |
| severity | No | Filter by alert severity: 'extreme' (life-threatening), 'severe' (significant threat), 'moderate' (possible threat), 'minor' (minimal threat), or 'all' (default - shows all active alerts). | all |
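Because `location` here accepts either coordinates or a 2-letter state/territory code (unlike the other tools in this server), a client may want to classify the input before calling. The sketch below is an assumption-laden illustration: `classify_alert_location` is a hypothetical name, and `VALID_STATE_CODES` is built from the codes listed in the table above.

```python
# Codes copied from the location parameter documentation above.
VALID_STATE_CODES = {
    "AL", "AK", "AS", "AR", "AZ", "CA", "CO", "CT", "DE", "DC", "FL", "GA",
    "GU", "HI", "ID", "IL", "IN", "IA", "KS", "KY", "LA", "ME", "MD", "MA",
    "MI", "MN", "MS", "MO", "MT", "NE", "NV", "NH", "NJ", "NM", "NY", "NC",
    "ND", "OH", "OK", "OR", "PA", "PR", "RI", "SC", "SD", "TN", "TX", "UT",
    "VT", "VI", "VA", "WA", "WV", "WI", "WY", "MP", "PW", "FM", "MH",
}


def classify_alert_location(location: str) -> str:
    """Return 'state' for a valid 2-letter code or 'coordinates' for a
    lat,lng pair; raise ValueError for anything else."""
    if location.upper() in VALID_STATE_CODES:
        return "state"
    parts = location.split(",")
    if len(parts) != 2:
        raise ValueError(f"not a state code or lat,lng pair: {location!r}")
    lat, lng = float(parts[0]), float(parts[1])  # may raise ValueError
    if -90 <= lat <= 90 and -180 <= lng <= 180:
        return "coordinates"
    raise ValueError(f"coordinates out of range: {location!r}")
```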
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries the full disclosure burden. It adds valuable context about covered alert types ('severe weather, winter storms, heat warnings, flood alerts'), but omits operational details such as real-time latency, behavior when no alerts exist, and data-source freshness, and it never confirms the read-only nature of the operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: sentence 1 states purpose, sentence 2 provides usage context, sentence 3 specifies coverage scope. Front-loaded with the core action. Every sentence earns its place with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given simple 2-parameter schema with 100% coverage and no output schema, description adequately covers geographic constraints (US-only), use case patterns, and alert categories. Could mention the severity filtering capability, though this is well-handled by the schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both parameters fully documented (location includes format examples and valid codes; severity includes enum descriptions). Description implies the location parameter through '[US location]' reference but adds no semantic detail beyond what the schema already provides. Baseline 3 appropriate for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with specific verb 'Get' and clearly identifies resource (active weather alerts, warnings, watches, and advisories) and scope (United States). Effectively distinguishes from sibling forecast tools like get_weather_forecast and get_current_weather by focusing specifically on alerts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides a clear positive usage example: "Perfect for 'Are there any weather alerts in [US location]?' questions." This gives the agent a strong trigger pattern. However, it lacks explicit contrast with alternatives (e.g., when to use get_current_weather instead of this tool) and negative constraints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_weather_forecast (Grade: A)
Get multi-day weather forecast for a location in the United States. Perfect for 'What's the forecast for [US location]?' questions. Provides detailed day/night forecasts for up to 7 days.
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | Number of days to forecast (1-7, default 7). Each day includes both day and night periods. | |
| location | Yes | US location as coordinates (lat,lng) in decimal degrees. Example: '40.7128,-74.0060' for New York City. Must be within US boundaries including states, territories (PR, VI, AS, GU, MP), and coastal waters. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses geographic constraints (US only) and temporal limits (up to 7 days), but omits mutation safety (implied by 'Get'), rate limits, or error behavior for out-of-bounds coordinates.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: sentence 1 establishes purpose and scope, sentence 2 provides usage context, and sentence 3 clarifies output granularity. Information is front-loaded with the action verb.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter tool with 100% schema coverage and simple types, the description is sufficiently complete. It partially compensates for the missing output schema by mentioning 'detailed day/night forecasts,' though it could specify the return structure (e.g., JSON with temperature/precipitation fields).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description reinforces the 'days' parameter ('up to 7 days') and 'location' parameter ('US location') but does not add syntax details, validation rules, or examples beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Get') and resource ('multi-day weather forecast'), explicitly scopes the tool to US locations, and distinguishes it from siblings like get_current_weather and get_hourly_forecast by emphasizing 'day/night' forecasts 'up to 7 days'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'Perfect for "What's the forecast for [US location]?" questions' provides a clear query pattern for selection. While it does not explicitly name sibling alternatives, the 'multi-day' and 'day/night' phrasing implicitly signals when to use this over hourly or current weather tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
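One way to generate that manifest is to serialize it from code; `make_glama_manifest` is a hypothetical helper, and the email below is the placeholder from the example, to be replaced with your Glama account address:

```python
import json


def make_glama_manifest(email: str) -> str:
    """Serialize a minimal /.well-known/glama.json manifest with the
    structure shown in the claiming instructions."""
    manifest = {
        "$schema": "https://glama.ai/mcp/schemas/connector.json",
        "maintainers": [{"email": email}],
    }
    return json.dumps(manifest, indent=2)


doc = make_glama_manifest("your-email@example.com")
```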
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management — store and rotate API keys and OAuth tokens in one place
- Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.