
Server Details: MeteoSwiss MCP Server

Swiss weather data for AI assistants: forecasts, measurements, stations, pollen.

Status: Healthy
Transport: Streamable HTTP
Repository: eins78/meteoswiss-llm-tools
GitHub Stars: 1

Tool Descriptions: A

Average 4.2/5 across 7 of 7 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool has a clearly distinct purpose: fetch retrieves webpage content, meteoswissClimateData provides historical climate series, meteoswissCurrentWeather gives real-time measurements, meteoswissLocalForecast offers multi-day forecasts, meteoswissPollenData shows pollen concentrations, meteoswissStations lists station metadata, and search finds webpages. There is no overlap in functionality, making tool selection straightforward for an agent.

Naming Consistency: 4/5

Most tools follow a consistent 'meteoswiss' prefix with descriptive suffixes (e.g., ClimateData, CurrentWeather, LocalForecast), but 'fetch' and 'search' deviate from this pattern. The naming is still readable and logical, with only minor inconsistencies that do not hinder usability.

Tool Count: 5/5

With 7 tools, the server is well-scoped for providing weather and climate data from MeteoSwiss. Each tool serves a specific, essential function (e.g., historical data, current conditions, forecasts, pollen info, station search), and none feel redundant or missing, making the count appropriate for the domain.

Completeness: 5/5

The tool set offers comprehensive coverage for accessing MeteoSwiss data: it includes historical climate data, real-time weather, forecasts, pollen information, station metadata, and web content retrieval. There are no obvious gaps; agents can handle typical queries like temperature trends, current conditions, or allergy alerts without dead ends.

Available Tools (7 tools)

fetch: A

Fetch full content from a MeteoSwiss webpage and convert to markdown or plain text. Use the search tool first to discover valid page URLs, then pass the full URL as the id parameter.

Parameters (JSON Schema):
- id (required): Identifier of a MeteoSwiss page to fetch. For this server the id is a full URL returned by the search tool. Example: https://www.meteoschweiz.admin.ch/klima/klimawandel/steigende-temperaturen.html
- format (optional, default "markdown"): The output format for the content
- includeMetadata (optional): Whether to include metadata in the response
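For illustration only, here is a sketch of what a call to this tool could look like, assuming the standard JSON-RPC 2.0 framing that MCP uses for tools/call; the URL is the example given in the parameter schema, and all values are illustrative:

```python
import json

# Hypothetical tools/call request for the fetch tool (illustrative values only).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "fetch",
        "arguments": {
            # Full URL as returned by the search tool (example from the schema).
            "id": "https://www.meteoschweiz.admin.ch/klima/klimawandel/steigende-temperaturen.html",
            "format": "markdown",      # optional; defaults to "markdown"
            "includeMetadata": False,  # optional
        },
    },
}
print(json.dumps(request, indent=2))
```

The workflow the description prescribes would first issue an analogous request to the search tool and copy a result URL into the id argument here.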
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the content conversion behavior ('convert to markdown or plain text') but omits other critical behavioral traits such as error handling for invalid URLs, rate limits, content size limits, or whether the operation is idempotent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: the first establishes purpose and output formats, the second provides the critical usage workflow. Information is front-loaded and every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While no output schema exists, the description indicates the return format (markdown/text), and the input schema is fully documented. It lacks an explicit description of the response structure (e.g., whether it returns a string or an object), but adequately covers the tool's functionality given its straightforward purpose.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description reinforces that 'id' should be a 'full URL' and references the search-first workflow, but this largely duplicates the schema's own description, which already includes the example URL and search guidance.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Fetch', 'convert') and identifies the exact resource ('MeteoSwiss webpage') and output formats ('markdown or plain text'). It clearly distinguishes this generic webpage fetcher from weather-data siblings like meteoswissCurrentWeather and meteoswissLocalForecast.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly prescribes the workflow: 'Use the search tool first to discover valid page URLs, then pass the full URL...'. This directly references the sibling 'search' tool and establishes clear prerequisites, preventing errors from guessing URLs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

meteoswissClimateData: A

Get homogeneous climate measurement series from Switzerland's National Basic Climatic Network (NBCN). Returns temperature, precipitation, sunshine, radiation, wind, pressure, and climate indicators (frost days, summer days, heat days) going back decades.

29 climate stations + 46 precipitation stations with daily, monthly, and yearly resolution.

Use cases: "What are typical January temperatures in Zurich?", "How has precipitation changed in Basel over 50 years?", "How many heat days did Lugano have last year?"

Accepts station names ("Zurich", "Basel"), abbreviations ("SMA", "BAS"), or WGS84 coordinates.

Parameters (JSON Schema):
- station (optional): Climate station name or abbreviation (e.g., "Zurich", "BAS", "Davos"). Part of the National Basic Climatic Network (29 climate + 46 precipitation stations).
- coordinates (optional): WGS84 coordinates (alternative to station name)
- resolution (optional, default "monthly"): Data resolution: daily (temp min/max/mean), monthly (full climate summary), yearly (annual summary)
- start_date (optional): Start date filter (YYYY-MM-DD). Only rows on or after this date are returned.
- end_date (optional): End date filter (YYYY-MM-DD). Only rows on or before this date are returned.
- limit (optional, default 30): Maximum number of data rows to return
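A client can sanity-check the date filters and the resolution enum before issuing the call. The following helper is a hypothetical sketch (the function name is invented; only the argument names come from the schema above):

```python
from datetime import date

# Hypothetical argument builder for meteoswissClimateData (illustrative only).
def build_climate_args(station, start_date, end_date, resolution="monthly", limit=30):
    if resolution not in ("daily", "monthly", "yearly"):
        raise ValueError(f"unknown resolution: {resolution}")
    # date.fromisoformat raises ValueError on anything that is not YYYY-MM-DD.
    if date.fromisoformat(start_date) > date.fromisoformat(end_date):
        raise ValueError("start_date must not be after end_date")
    return {
        "station": station,
        "start_date": start_date,
        "end_date": end_date,
        "resolution": resolution,
        "limit": limit,
    }

# Yearly series suited to a "how has X changed over 50 years?" style query.
args = build_climate_args("Zurich", "1975-01-01", "2024-12-31", resolution="yearly")
```

Validating locally like this avoids a round trip for malformed dates or an inverted range.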
Behavior: 3/5

With no annotations provided, the description carries the full burden for behavioral disclosure. It does well by specifying the data sources (29 climate stations + 46 precipitation stations), temporal scope (data going back decades), and available variables (temperature, precipitation, etc.). However, it doesn't mention rate limits, authentication requirements, or potential data availability constraints for specific stations/time periods.

Conciseness: 5/5

The description is efficiently structured with zero wasted sentences. It opens with the core purpose, lists available data types, specifies station counts and resolutions, provides concrete use cases, and explains parameter options, all in a compact format where every sentence adds essential information.

Completeness: 4/5

For a tool with 6 parameters, 100% schema coverage, but no output schema, the description provides strong context about what data is returned (temperature, precipitation, etc.), station network details, and temporal scope. The main gap is the lack of output format details (the structure of returned data), but the description compensates well with rich content about data characteristics and use cases.

Parameters: 4/5

Schema description coverage is 100%, so the baseline is 3. The description adds meaningful context by explaining station input options (names, abbreviations, coordinates) and providing concrete examples ('Zurich', 'BAS', 'Davos'), which helps users understand how to properly identify stations beyond what the schema's technical descriptions provide.

Purpose: 5/5

The description clearly states that the tool gets 'homogeneous climate measurement series' from Switzerland's NBCN, specifying the exact resource (climate data) and action (retrieve). It distinguishes itself from siblings like meteoswissCurrentWeather (current conditions) and meteoswissPollenData (a different data type) by focusing on historical climate series with multiple variables and resolutions.

Usage Guidelines: 5/5

The description provides explicit usage examples ('What are typical January temperatures in Zurich?', 'How has precipitation changed in Basel over 50 years?') that clearly indicate when to use this tool for historical climate analysis versus alternatives like meteoswissCurrentWeather for current conditions. The examples demonstrate the tool's purpose for trend analysis and historical queries.

meteoswissCurrentWeather: A

Get real-time weather measurements from ~300 Swiss automatic weather stations (~160 full weather + ~140 precipitation-only). Returns temperature, precipitation, wind, humidity, pressure, sunshine, and more. Data updates every 10 minutes. Precipitation-only stations return only rainfall data.

For 8 stations (Zurich, Basel, Chur, Sion, Altdorf, Säntis, Jungfraujoch, Grand St-Bernard), also includes daily visual observations: cloud cover, fog, rain, snowfall, hail, and snow coverage.

Accepts station names ("Zurich"), abbreviations ("SMA"), addresses ("Bahnhofplatz 1 Bern"), or WGS84 coordinates. Automatically finds the nearest station.

Parameters (JSON Schema):
- station (optional): Swiss weather station or location: name (e.g., "Zurich"), abbreviation (e.g., "SMA"), or address (e.g., "Bahnhofplatz 1 Bern")
- coordinates (optional): WGS84 coordinates (alternative to station name)
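Since the schema marks both parameters optional but describes them as alternatives, a caller may want to enforce exactly one. This is a hypothetical client-side sketch (the function name and the exactly-one convention are assumptions, not server-documented behavior):

```python
# Hypothetical argument check for meteoswissCurrentWeather (illustrative only).
# The schema marks both parameters optional but calls coordinates an
# "alternative to station name", so this sketch enforces exactly one.
def build_current_weather_args(station=None, coordinates=None):
    if (station is None) == (coordinates is None):
        raise ValueError("pass exactly one of station or coordinates")
    return {"station": station} if station is not None else {"coordinates": coordinates}

# Address input; the server resolves it to the nearest station.
args = build_current_weather_args(station="Bahnhofplatz 1 Bern")
```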
Behavior: 4/5

With no annotations provided, the description carries the full burden and discloses critical behavioral traits: data freshness ('updates every 10 minutes') and geolocation logic ('Automatically finds the nearest station'). It misses error handling and rate limits, but covers the essential operational characteristics.

Conciseness: 5/5

Tightly constructed sentences with zero waste: the opening establishes purpose, followed by return values, update cadence, and input formats with resolution behavior. Information is front-loaded effectively.

Completeness: 4/5

Given the lack of an output schema, the description adequately compensates by listing specific return data types (temperature, precipitation, wind, etc.) and scope (~300 stations, of which ~160 are full weather stations). It lacks explicit error-handling documentation but is otherwise complete for tool invocation.

Parameters: 4/5

Despite 100% schema coverage (baseline 3), the description adds concrete example values ('Zurich', 'SMA', 'Bahnhofplatz 1 Bern') and clarifies the relationship between parameters (coordinates are an 'alternative to station name'), enhancing the raw schema definitions.

Purpose: 5/5

The description opens with a specific verb ('Get') and resource ('real-time weather measurements from ~300 Swiss automatic weather stations'), clearly distinguishing it from siblings like 'meteoswissLocalForecast' (forecast vs. current) and 'meteoswissPollenData' (pollen vs. weather metrics).

Usage Guidelines: 3/5

The description implies usage through domain terminology ('real-time' vs. siblings with 'forecast' in their names) and explains input flexibility, but lacks explicit when-to-use guidance or named alternatives (e.g., it doesn't state 'use meteoswissLocalForecast for future predictions').

meteoswissLocalForecast: A

Get a multi-day weather forecast for any Swiss location. Returns daily summaries with temperature, precipitation, and weather icons.

This uses official MeteoSwiss Open Data — the same forecasts powering the MeteoSwiss app and website.

Accepts:

  • Postal codes: "8001" (Zurich), "3000" (Bern), "1200" (Geneva)

  • Station abbreviations: "ZUE" (Zurich Fluntern), "BER" (Bern)

  • Place names: "Zurich", "Basel", "Lugano"

Coverage: ~6000 Swiss locations (all postal codes + weather stations + mountain points). Forecast horizon: up to 9 days. Updated hourly.

Parameters (JSON Schema):
- location (required): Swiss location: postal code (e.g., "8001"), station abbreviation (e.g., "ZUE"), or place name (e.g., "Zurich")
- days (optional, default 5): Number of forecast days (1-9)
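The days bound can be enforced client-side; one option is to clamp rather than reject. A hypothetical sketch (the helper name and the clamping policy are assumptions; only the 1-9 range and default 5 come from the schema):

```python
# Hypothetical helper for meteoswissLocalForecast (illustrative only):
# clamp the days parameter into the documented 1-9 range (default 5).
def build_forecast_args(location, days=5):
    return {"location": location, "days": max(1, min(9, int(days)))}

args = build_forecast_args("8001", days=14)  # out-of-range request clamped to 9
```

Rejecting with an error instead of clamping is equally valid; clamping just trades strictness for a best-effort answer.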
Behavior: 4/5

With no annotations provided, the description carries the full burden and succeeds well: it discloses the data source ('official MeteoSwiss Open Data'), update frequency ('Updated hourly'), coverage limits ('~6000 Swiss locations'), and return format ('daily summaries with temperature, precipitation, and weather icons'). It lacks rate limits and error behavior details.

Conciseness: 5/5

Excellent structure: purpose front-loaded in sentence one, provenance in sentence two, then a bulleted list clarifying input formats, followed by coverage metadata. No wasted words; every sentence provides unique information not redundant with the schema or annotations.

Completeness: 5/5

Given no output schema and no annotations, the description compensates thoroughly by describing return values ('daily summaries with temperature, precipitation, and weather icons'), input validation constraints (postal code vs. station formats), and operational bounds (9-day limit, hourly updates). Complete for a 2-parameter read-only tool.

Parameters: 4/5

Schema coverage is 100%, establishing a baseline of 3. The description adds significant value by providing richer examples for the location parameter than the schema alone: parenthetical city names for postal codes ('8001' (Zurich)), full station names ('ZUE' (Zurich Fluntern)), and additional place examples ('Basel', 'Lugano'). It also reinforces the days constraint via 'Forecast horizon: up to 9 days'.

Purpose: 5/5

The description opens with 'Get a multi-day weather forecast for any Swiss location': a specific verb, resource, and geographic scope. It clearly distinguishes itself from sibling tools like meteoswissCurrentWeather (multi-day vs. current) and meteoswissPollenData (weather vs. pollen data) through explicit temporal scope ('multi-day', 'up to 9 days').

Usage Guidelines: 4/5

Provides clear context for when to use it (multi-day forecasts, up to 9 days) and implicitly contrasts with current weather via the name and 'multi-day' phrasing. However, it does not explicitly name sibling alternatives or state when NOT to use it (e.g., 'for current conditions, use meteoswissCurrentWeather instead').

meteoswissPollenData: A

Get current pollen concentration data from MeteoSwiss monitoring stations (~15 stations across Switzerland). Shows pollen levels by type (birch, grass, etc.). Useful for allergy sufferers.

Parameters (JSON Schema):
- station (optional): Pollen monitoring station name or abbreviation. Omit for an overview of all stations.
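The single optional parameter supports two call patterns, sketched here with hypothetical argument dicts (the station value is an illustrative guess, not a verified station name):

```python
# Hypothetical argument dicts for meteoswissPollenData (illustrative only).
overview_args = {}                     # omit station: overview of all ~15 stations
station_args = {"station": "Zurich"}   # single station by name or abbreviation
```

An agent answering "is grass pollen high today?" for a known city would use the second form; the first suits a nationwide allergy summary.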
Behavior: 3/5

Since no annotations exist, the description carries the full burden. It adds valuable behavioral context: temporal scope ('current' data), coverage limitations ('~15 stations'), and data granularity ('pollen levels by type'). However, it omits operational details like rate limits, authentication requirements, or error behaviors that annotations would typically cover.

Conciseness: 5/5

Three sentences, each earning its place: purpose/scope, data types, and use case. No redundant or filler text. Front-loaded with the core action.

Completeness: 4/5

For a single-parameter tool with 100% schema coverage, the description adequately covers data content (pollen types) and scope. It lacks explicit return value structure details (no output schema exists), but 'shows pollen levels' gives partial insight into the response.

Parameters: 3/5

Schema coverage is 100%, establishing a baseline of 3. The description adds minimal parameter-specific guidance beyond the schema, though '~15 stations across Switzerland' provides useful context for what valid station values might look like. No syntax or format details are added beyond the schema description.

Purpose: 5/5

Excellent specificity: the verb 'Get' + resource 'pollen concentration data' + source 'MeteoSwiss monitoring stations'. It explicitly distinguishes itself from weather-focused siblings via content (birch and grass pollen) and scope (~15 stations across Switzerland).

Usage Guidelines: 3/5

Provides implied usage context ('Useful for allergy sufferers') but lacks explicit when-to-use/when-not-to-use guidance regarding siblings like meteoswissCurrentWeather or meteoswissLocalForecast. No alternatives are mentioned.

meteoswissStations: A

List and search MeteoSwiss automatic weather stations. Filter by name, canton, or browse the full network of ~160 stations across Switzerland.

Parameters (JSON Schema):
- search (optional): Search by station name or abbreviation
- canton (optional): Filter by canton abbreviation (e.g., "ZH", "BE", "GR")
- limit (optional, default 20): Maximum number of results (1-200)
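A hypothetical query builder showing how the three optional filters combine, with the limit range checked client-side (the function name is invented; the 1-200 range and default 20 come from the schema):

```python
# Hypothetical query builder for meteoswissStations (illustrative only).
def build_station_query(search=None, canton=None, limit=20):
    if not 1 <= limit <= 200:  # schema range: 1-200, default 20
        raise ValueError("limit must be between 1 and 200")
    args = {"limit": limit}
    if search is not None:
        args["search"] = search
    if canton is not None:
        args["canton"] = canton
    return args

args = build_station_query(canton="GR", limit=50)  # all Grisons stations, up to 50
```

The station list returned this way can then seed calls to meteoswissCurrentWeather or meteoswissClimateData.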
Behavior: 3/5

With no annotations provided, the description carries the full burden. It adds valuable context about the data scope (~160 stations across Switzerland) but does not explicitly confirm read-only safety, rate limits, or the return structure that annotations would typically cover.

Conciseness: 5/5

The description is brief and waste-free. It front-loads the core action ('List and search') and follows with filtering options and network scope. Every clause earns its place.

Completeness: 4/5

For a simple listing tool with 3 optional parameters and 100% schema coverage, the description is adequate. It explains the conceptual return (stations) and data volume. It could be improved by noting that results typically feed into sibling weather tools, but this is not strictly necessary for basic completeness.

Parameters: 3/5

Schema description coverage is 100%, so the schema fully documents all three parameters (limit, canton, search). The description mentions filtering by name and canton, which aligns with the schema, but adds no additional semantic context beyond what the schema already provides. A baseline of 3 is appropriate given complete schema coverage.

Purpose: 5/5

The description clearly states the tool's action ('List and search MeteoSwiss automatic weather stations'), providing a specific verb and resource. It distinguishes itself from siblings like meteoswissCurrentWeather and meteoswissLocalForecast by focusing on station metadata rather than weather data.

Usage Guidelines: 3/5

The description explains filtering capabilities (by name and canton) and mentions browsing the full network, implying exploratory usage. However, it lacks explicit guidance on when to use this versus the generic 'search' tool, or on prerequisites for using weather-related siblings.
