Glama

smithery-ai-national-weather-service

Server Details

Provide real-time and forecast weather information for locations in the United States using natura…

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: smithery-ai/mcp-servers
GitHub Stars: 95


Tool Definition Quality

Score is being calculated.

Available Tools

6 tools
find_weather_stations (Grade: A)

Find weather observation stations near a location in the United States. Useful for getting station-specific data, finding data sources, or understanding which stations provide weather data for an area. Includes ASOS, AWOS, and other automated weather stations.

Parameters (JSON Schema)
- limit (optional, default 10): Maximum number of stations to return (1-20). Stations are returned ordered by distance from the specified location.
- location (required): US location as coordinates (lat,lng) in decimal degrees. Example: '40.7128,-74.0060' for New York City. Must be within US boundaries including states, territories (PR, VI, AS, GU, MP), and coastal waters.
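As a concrete illustration, the parameters above map onto the standard MCP tools/call request shape (a tool name plus an arguments object). This is a hypothetical payload sketch, not output captured from the gateway:

```python
import json

# Hypothetical MCP "tools/call" request for find_weather_stations.
# The JSON-RPC envelope follows the MCP convention; the coordinates
# are the documented New York City example.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "find_weather_stations",
        "arguments": {
            "location": "40.7128,-74.0060",  # lat,lng in decimal degrees
            "limit": 5,                      # 1-20, defaults to 10
        },
    },
}
print(json.dumps(request["params"]["arguments"]))
```

The same envelope applies to every tool on this server; only the name and arguments change.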
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must carry full behavioral disclosure. It successfully adds context about what station types are included (ASOS, AWOS, automated stations) and geographic boundaries (US including territories/coastal waters). However, it fails to confirm the read-only/safe nature of the operation or describe what data structure gets returned, which is critical information given the lack of output schema or annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of three efficiently structured sentences: the first establishes core purpose, the second provides usage context, and the third specifies station types. Every sentence earns its place without redundancy or verbosity; the information is front-loaded and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description should ideally explain what constitutes a 'station' in the results (IDs, names, distances, etc.), but it omits this. The parameter documentation is complete thanks to the comprehensive schema, and the geographic/station-type coverage is adequate. However, the missing return value documentation creates a gap for an agent trying to predict the output format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage (both location and limit are well-documented in the schema). The description reinforces the location concept ('near a location') but does not add semantic detail beyond what the schema already provides. With high schema coverage, the baseline score of 3 is appropriate—the description meets but does not exceed the schema's explanatory value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Find[s] weather observation stations near a location in the United States'—specific verb (find), resource (weather observation stations), and geographic scope (US). It effectively distinguishes from siblings like get_current_weather by emphasizing station discovery/metadata rather than weather data retrieval, and specifies station types (ASOS, AWOS) to clarify scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides positive usage guidance ('Useful for getting station-specific data...') suggesting when to use it, but lacks explicit negative guidance or sibling differentiation. It does not clarify when NOT to use this (e.g., 'do not use for current weather readings, use get_current_weather instead'), leaving the agent to infer the distinction from the purpose statement alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_current_weather (Grade: A)

Get current weather conditions for a location in the United States. Perfect for 'What's the weather like in [US location]?' questions. Covers all US states, territories, and coastal waters.

Parameters (JSON Schema)
- location (required): US location as coordinates (lat,lng) in decimal degrees. Example: '40.7128,-74.0060' for New York City. Must be within US boundaries including states, territories (PR, VI, AS, GU, MP), and coastal waters.
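Since every tool on this server takes the same 'lat,lng' string, a client may want to pre-validate it before calling. A minimal illustrative check (parse_location is a hypothetical helper; the server performs its own, stricter US-boundary validation):

```python
def parse_location(location: str) -> tuple[float, float]:
    """Split a 'lat,lng' string into floats and sanity-check the ranges.

    Illustrative client-side pre-check only; it does not verify that the
    point falls within US boundaries."""
    lat_str, lng_str = location.split(",")
    lat, lng = float(lat_str), float(lng_str)
    if not (-90.0 <= lat <= 90.0 and -180.0 <= lng <= 180.0):
        raise ValueError(f"coordinates out of range: {location}")
    return lat, lng

print(parse_location("40.7128,-74.0060"))  # (40.7128, -74.006)
```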
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure but only mentions geographic coverage. It omits critical details: data freshness (real-time vs cached), returned units (imperial/metric), error behavior for out-of-bounds coordinates, and which specific weather fields are returned.

Conciseness: 5/5

Three sentences with zero waste: purpose statement, example use case, and coverage clarification. Information is front-loaded with the core action in the first sentence.

Completeness: 3/5

Adequate for a single-parameter tool, but given the lack of an output schema, the description misses the opportunity to describe which weather data fields (temperature, humidity, wind, etc.) the tool returns. Functional but incomplete.

Parameters: 3/5

The input schema has 100% description coverage, fully documenting the coordinate format and the US-boundaries constraint. The description does not add parameter-specific semantics beyond the schema, meeting the baseline for high-coverage schemas.

Purpose: 5/5

The description provides a specific verb ('Get'), resource ('current weather conditions'), and geographic scope ('United States'). The term 'current' effectively distinguishes this tool from siblings like get_weather_forecast and get_hourly_forecast.

Usage Guidelines: 3/5

The description provides a helpful example question format ('What's the weather like in...'), implying natural-language queries. However, it lacks explicit guidance on when to use this versus the forecast-oriented siblings, or warnings about the coordinate-format requirement.

get_hourly_forecast (Grade: A)

Get hour-by-hour weather forecast for a location in the United States. Perfect for 'What's the hourly forecast?' or 'Will it rain this afternoon in [US location]?' questions. Provides detailed hourly conditions for up to 48 hours.

Parameters (JSON Schema)
- hours (optional, default 24): Number of hours to forecast (1-48). Provides temperature, conditions, and precipitation probability for each hour.
- location (required): US location as coordinates (lat,lng) in decimal degrees. Example: '40.7128,-74.0060' for New York City. Must be within US boundaries including states, territories (PR, VI, AS, GU, MP), and coastal waters.
Behavior: 3/5

No annotations are provided, so the description carries the full disclosure burden. It effectively communicates the geographic restriction (US only) and temporal limits (up to 48 hours), but omits error-handling behavior, rate limits, and data-freshness guarantees.

Conciseness: 5/5

Three sentences with zero waste: purpose statement, usage examples, and behavioral scope. Information is front-loaded with the core action, making it immediately scannable for an agent.

Completeness: 4/5

For a low-complexity tool (two primitive parameters, no nesting), the description adequately compensates for the missing output schema by stating it 'Provides detailed hourly conditions.' Coverage of US boundary constraints and the 48-hour limit provides sufficient context for correct invocation.

Parameters: 3/5

The input schema has 100% description coverage with detailed examples (coordinate format, hour ranges). The description reinforces these constraints ('up to 48 hours', 'United States') but adds no semantic depth beyond what the schema already provides, meeting the baseline for high-coverage schemas.

Purpose: 5/5

The description opens with a specific verb-resource combination ('Get hour-by-hour weather forecast') and clearly scopes the tool to US locations. The 'hour-by-hour' granularity explicitly distinguishes it from siblings like get_current_weather and get_weather_forecast.

Usage Guidelines: 4/5

Provides clear usage context through example questions ('What's the hourly forecast?', 'Will it rain this afternoon...'), effectively signaling when to use the tool. Lacks explicit negative guidance or named sibling alternatives (e.g., 'use get_weather_forecast for daily summaries').

get_local_time (Grade: B)

Get the current local time for a US location. Shows what time it is right now at the specified location.

Parameters (JSON Schema)
- location (required): US location as coordinates (lat,lng) in decimal degrees. Example: '40.7128,-74.0060' for New York City.
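A hypothetical call sketch for this single-parameter tool, using the MCP tools/call argument shape (the Los Angeles coordinates are illustrative):

```python
import json

# Hypothetical arguments for get_local_time; location is the only parameter.
arguments = {"location": "34.0522,-118.2437"}  # Los Angeles
payload = {"name": "get_local_time", "arguments": arguments}
print(json.dumps(payload, sort_keys=True))
```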
Behavior: 2/5

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states that the tool retrieves the current time but fails to specify the return format (ISO string? Unix timestamp? Object?), timezone handling (DST?), or error behavior for invalid coordinates. Minimal disclosure for a tool with zero annotation coverage.

Conciseness: 4/5

Two short sentences, front-loaded with the core purpose. Minor redundancy between 'current local time' and 'what time it is right now,' but otherwise efficient without waste.

Completeness: 3/5

The tool has low complexity (one parameter, simple concept) and good schema coverage, but lacks an output schema. The description omits the return-value format, which would help an agent understand what to expect from the call. Adequate but incomplete given the missing output specification.

Parameters: 3/5

The schema has 100% description coverage for the single parameter, documenting the coordinate format and providing an example. The description mentions 'location' generically but does not add semantic value beyond what the schema already provides (e.g., it does not clarify that the value must be coordinates rather than a city name). A baseline 3 is appropriate given the schema's completeness.

Purpose: 5/5

Specific verb ('Get') + resource ('local time') + scope ('US location'). The tool clearly distinguishes itself from the weather-focused siblings (get_current_weather, get_weather_forecast, etc.) by explicitly targeting time rather than meteorological data.

Usage Guidelines: 3/5

Implies usage context by restricting to a 'US location' twice, establishing geographic scope. However, it lacks explicit when-to-use guidance relative to the weather siblings or alternative time-lookup methods, and omits exclusions (e.g., what happens with non-US coordinates).

get_weather_alerts (Grade: A)

Get active weather alerts, warnings, watches, and advisories for locations in the United States. Perfect for 'Are there any weather alerts in [US location]?' questions. Covers severe weather, winter storms, heat warnings, flood alerts, and more.

Parameters (JSON Schema)
- location (required): US location as coordinates (lat,lng) in decimal degrees OR 2-letter state/territory code. Examples: '40.7128,-74.0060' for New York City, 'CA' for California, 'PR' for Puerto Rico. Valid state codes: AL, AK, AS, AR, AZ, CA, CO, CT, DE, DC, FL, GA, GU, HI, ID, IL, IN, IA, KS, KY, LA, ME, MD, MA, MI, MN, MS, MO, MT, NE, NV, NH, NJ, NM, NY, NC, ND, OH, OK, OR, PA, PR, RI, SC, SD, TN, TX, UT, VT, VI, VA, WA, WV, WI, WY, MP, PW, FM, MH.
- severity (optional, default 'all'): Filter by alert severity: 'extreme' (life-threatening), 'severe' (significant threat), 'moderate' (possible threat), 'minor' (minimal threat), or 'all' (shows all active alerts).
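Because location accepts two formats and severity is a closed enum, a client can assemble and sanity-check the arguments before dispatch. An illustrative sketch (build_alert_args is a hypothetical helper, not part of the server):

```python
# The documented severity enum for get_weather_alerts.
VALID_SEVERITIES = {"extreme", "severe", "moderate", "minor", "all"}

def build_alert_args(location: str, severity: str = "all") -> dict:
    """Assemble get_weather_alerts arguments, accepting either a
    2-letter state/territory code or a 'lat,lng' coordinate string."""
    if severity not in VALID_SEVERITIES:
        raise ValueError(f"unknown severity: {severity}")
    return {"location": location, "severity": severity}

print(build_alert_args("CA", "severe"))
print(build_alert_args("40.7128,-74.0060"))
```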
Behavior: 3/5

No annotations are provided, so the description carries the full disclosure burden. It adds valuable context about covered alert types ('severe weather, winter storms, heat warnings, flood alerts'), but omits operational details such as real-time latency, behavior when no alerts exist, and data-source freshness.

Conciseness: 5/5

Three sentences with zero waste: the first states purpose, the second provides usage context, the third specifies coverage scope. Front-loaded with the core action; every sentence earns its place with no redundancy.

Completeness: 4/5

Given a simple two-parameter schema with 100% coverage and no output schema, the description adequately covers geographic constraints (US-only), use-case patterns, and alert categories. It could mention the severity-filtering capability, though this is well handled by the schema.

Parameters: 3/5

Schema coverage is 100%, with both parameters fully documented (location includes format examples and valid codes; severity includes enum descriptions). The description implies the location parameter through its '[US location]' reference but adds no semantic detail beyond what the schema already provides. A baseline 3 is appropriate for high-coverage schemas.

Purpose: 5/5

The description opens with the specific verb 'Get' and clearly identifies the resource (active weather alerts, warnings, watches, and advisories) and scope (United States). It effectively distinguishes itself from sibling forecast tools like get_weather_forecast and get_current_weather by focusing specifically on alerts.

Usage Guidelines: 4/5

Provides a clear positive usage example: 'Perfect for "Are there any weather alerts in [US location]?" questions.' This gives the agent a strong trigger pattern. However, it lacks explicit contrast with alternatives (e.g., when to use get_current_weather instead) or negative constraints.

get_weather_forecast (Grade: A)

Get multi-day weather forecast for a location in the United States. Perfect for 'What's the forecast for [US location]?' questions. Provides detailed day/night forecasts for up to 7 days.

Parameters (JSON Schema)
- days (optional, default 7): Number of days to forecast (1-7). Each day includes both day and night periods.
- location (required): US location as coordinates (lat,lng) in decimal degrees. Example: '40.7128,-74.0060' for New York City. Must be within US boundaries including states, territories (PR, VI, AS, GU, MP), and coastal waters.
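Since each forecast day carries a day and a night period, the number of periods a client should expect scales with days. A hedged sketch of that arithmetic (expected_periods is a hypothetical helper; real forecast responses may begin with a partial first day, so treat the count as an upper bound):

```python
def expected_periods(days: int = 7) -> int:
    """Upper bound on returned forecast periods: each of the 1-7
    requested days contributes a day period and a night period."""
    if not 1 <= days <= 7:
        raise ValueError("days must be between 1 and 7")
    return days * 2

print(expected_periods())   # 14
print(expected_periods(3))  # 6
```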
Behavior: 3/5

With no annotations provided, the description carries the full burden. It successfully discloses geographic constraints (US only) and temporal limits (up to 7 days), but omits mutation safety (only implied by 'Get'), rate limits, and error behavior for out-of-bounds coordinates.

Conciseness: 5/5

Three sentences with zero waste: the first establishes purpose and scope, the second provides usage context, and the third clarifies output granularity. Information is front-loaded with the action verb.

Completeness: 4/5

For a two-parameter tool with 100% schema coverage and simple types, the description is sufficiently complete. It partially compensates for the missing output schema by mentioning 'detailed day/night forecasts,' though it could specify the return structure (e.g., JSON with temperature/precipitation fields).

Parameters: 3/5

Schema description coverage is 100%, so the baseline is 3. The description reinforces the 'days' parameter ('up to 7 days') and the 'location' parameter ('US location') but does not add syntax details, validation rules, or examples beyond what the schema already provides.

Purpose: 5/5

The description opens with a specific verb ('Get') and resource ('multi-day weather forecast'), explicitly scopes the tool to US locations, and distinguishes it from siblings like get_current_weather and get_hourly_forecast by emphasizing 'day/night' forecasts 'up to 7 days'.

Usage Guidelines: 4/5

The phrase 'Perfect for "What's the forecast for [US location]?" questions' provides a clear query pattern for selection. While it does not explicitly name sibling alternatives, the 'multi-day' and 'day/night' phrasing implicitly signals when to use this over the hourly or current-weather tools.

