MeteoSwiss MCP Server
Server Details
Swiss weather data for AI assistants — forecasts, measurements, stations, pollen.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: eins78/meteoswiss-llm-tools
- GitHub Stars: 1
- Server Listing: MeteoSwiss MCP Server
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 7 of 7 tools scored.
Each tool has a clearly distinct purpose: fetch retrieves webpage content, meteoswissClimateData provides historical climate series, meteoswissCurrentWeather gives real-time measurements, meteoswissLocalForecast offers multi-day forecasts, meteoswissPollenData shows pollen concentrations, meteoswissStations lists station metadata, and search finds webpages. There is no overlap in functionality, making tool selection straightforward for an agent.
Most tools follow a consistent 'meteoswiss' prefix with descriptive suffixes (e.g., ClimateData, CurrentWeather, LocalForecast), but 'fetch' and 'search' deviate from this pattern. The naming is still readable and logical, with only minor inconsistencies that do not hinder usability.
With 7 tools, the server is well-scoped for providing weather and climate data from MeteoSwiss. Each tool serves a specific, essential function (e.g., historical data, current conditions, forecasts, pollen info, station search), and none feel redundant or missing, making the count appropriate for the domain.
The tool set offers comprehensive coverage for accessing MeteoSwiss data: it includes historical climate data, real-time weather, forecasts, pollen information, station metadata, and web content retrieval. There are no obvious gaps; agents can handle typical queries like temperature trends, current conditions, or allergy alerts without dead ends.
Available Tools
7 tools

fetch
Fetch full content from a MeteoSwiss webpage and convert to markdown or plain text. Use the search tool first to discover valid page URLs, then pass the full URL as the id parameter.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Identifier of a MeteoSwiss page to fetch. For this server the id is a full URL returned by the search tool. Example: https://www.meteoschweiz.admin.ch/klima/klimawandel/steigende-temperaturen.html | |
| format | No | The output format for the content | markdown |
| includeMetadata | No | Whether to include metadata in the response | |
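As a sketch of how an agent might invoke this tool (the JSON-RPC envelope below follows generic MCP `tools/call` conventions and is not taken from this server's own documentation), a minimal fetch request could be built as:

```python
import json

# Hypothetical MCP tools/call request for the fetch tool. The URL is the
# example from the parameter table; "markdown" is the documented default
# for format.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "fetch",
        "arguments": {
            "id": "https://www.meteoschweiz.admin.ch/klima/klimawandel/steigende-temperaturen.html",
            "format": "markdown",
            "includeMetadata": False,
        },
    },
}

# Serialize for transport (the server speaks Streamable HTTP).
body = json.dumps(request)
```

Per the description, the `id` value should come from a prior `search` call rather than be guessed.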
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the content conversion behavior ('convert to markdown or plain text') but omits other critical behavioral traits such as error handling for invalid URLs, rate limits, content size limits, or whether the operation is idempotent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: the first establishes purpose and output formats, the second provides the critical usage workflow. Information is front-loaded and every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but the description indicates the return format (markdown/text) and the input schema is fully documented. It lacks an explicit description of the response structure (e.g., whether it returns a string or an object), yet adequately covers the tool's functionality given its straightforward purpose.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description reinforces that 'id' should be a 'full URL' and references the search-first workflow, but this largely duplicates the schema's own description which already includes the example URL and search guidance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Fetch', 'convert') and identifies the exact resource ('MeteoSwiss webpage') and output formats ('markdown or plain text'). It clearly distinguishes this generic webpage fetcher from weather-data siblings like meteoswissCurrentWeather and meteoswissLocalForecast.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly prescribes the workflow: 'Use the search tool first to discover valid page URLs, then pass the full URL...'. This directly references the sibling 'search' tool and establishes clear prerequisites, preventing errors from guessing URLs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
meteoswissClimateData
Get homogeneous climate measurement series from Switzerland's National Basic Climatic Network (NBCN). Returns temperature, precipitation, sunshine, radiation, wind, pressure, and climate indicators (frost days, summer days, heat days) going back decades.
29 climate stations + 46 precipitation stations with daily, monthly, and yearly resolution.
Use cases: "What are typical January temperatures in Zurich?", "How has precipitation changed in Basel over 50 years?", "How many heat days did Lugano have last year?"
Accepts station names ("Zurich", "Basel"), abbreviations ("SMA", "BAS"), or WGS84 coordinates.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of data rows to return (default 30) | |
| station | No | Climate station name or abbreviation (e.g., "Zurich", "BAS", "Davos"). Part of the National Basic Climatic Network (29 climate + 46 precipitation stations). | |
| end_date | No | End date filter (YYYY-MM-DD). Only rows on or before this date are returned. | |
| resolution | No | Data resolution: daily (temp min/max/mean), monthly (full climate summary), yearly (annual summary) | monthly |
| start_date | No | Start date filter (YYYY-MM-DD). Only rows on or after this date are returned. | |
| coordinates | No | WGS84 coordinates (alternative to station name) | |
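A sketch of the arguments for the "50 years of precipitation in Basel" use case above (argument names come from the table; the client-side date check is an assumption, not server behavior):

```python
import re

# Hypothetical arguments for a meteoswissClimateData call: yearly series
# for Basel spanning roughly 50 years.
arguments = {
    "station": "Basel",          # name or abbreviation ("BAS") both accepted
    "resolution": "yearly",      # daily | monthly | yearly (monthly is the default)
    "start_date": "1974-01-01",  # YYYY-MM-DD, rows on or after this date
    "end_date": "2023-12-31",    # YYYY-MM-DD, rows on or before this date
    "limit": 50,                 # default is 30 rows
}

# Client-side sanity check on the documented date format.
DATE = re.compile(r"^\d{4}-\d{2}-\d{2}$")
assert DATE.match(arguments["start_date"]) and DATE.match(arguments["end_date"])
```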
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It does well by specifying the data sources (29 climate stations + 46 precipitation stations), temporal scope (data going back decades), and available variables (temperature, precipitation, etc.). However, it doesn't mention rate limits, authentication requirements, or potential data-availability constraints for specific stations or time periods.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with zero wasted sentences. It opens with the core purpose, lists available data types, specifies station counts and resolutions, provides concrete use cases, and explains parameter options - all in a compact format where every sentence adds essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 6 parameters, 100% schema coverage, but no output schema, the description provides strong context about what data is returned (temperature, precipitation, etc.), station network details, and temporal scope. The main gap is lack of output format details (structure of returned data), but the description compensates well with rich content about data characteristics and use cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds meaningful context by explaining station input options (names, abbreviations, coordinates) and providing concrete examples ('Zurich', 'BAS', 'Davos'), which helps users understand how to properly identify stations beyond what the schema's technical descriptions provide.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'gets homogeneous climate measurement series' from Switzerland's NBCN, specifying the exact resource (climate data) and action (retrieve). It distinguishes from siblings like meteoswissCurrentWeather (current conditions) and meteoswissPollenData (different data type) by focusing on historical climate series with multiple variables and resolutions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage examples ('What are typical January temperatures in Zurich?', 'How has precipitation changed in Basel over 50 years?') that clearly indicate when to use this tool for historical climate analysis versus alternatives like meteoswissCurrentWeather for current conditions. The examples demonstrate the tool's purpose for trend analysis and historical queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
meteoswissCurrentWeather
Get real-time weather measurements from ~300 Swiss automatic weather stations (~160 full weather + ~140 precipitation-only). Returns temperature, precipitation, wind, humidity, pressure, sunshine, and more. Data updates every 10 minutes. Precipitation-only stations return only rainfall data.
For 8 stations (Zurich, Basel, Chur, Sion, Altdorf, Säntis, Jungfraujoch, Grand St-Bernard), also includes daily visual observations: cloud cover, fog, rain, snowfall, hail, and snow coverage.
Accepts station names ("Zurich"), abbreviations ("SMA"), addresses ("Bahnhofplatz 1 Bern"), or WGS84 coordinates. Automatically finds the nearest station.
| Name | Required | Description | Default |
|---|---|---|---|
| station | No | Swiss weather station or location: name (e.g., "Zurich"), abbreviation (e.g., "SMA"), or address (e.g., "Bahnhofplatz 1 Bern") | |
| coordinates | No | WGS84 coordinates (alternative to station name) | |
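Since `station` and `coordinates` are alternatives, a call supplies one or the other. A minimal sketch (the exact coordinate string format is an assumption; the schema only says WGS84):

```python
# Hypothetical argument shapes for meteoswissCurrentWeather.
by_station = {"station": "Bahnhofplatz 1 Bern"}  # name, abbreviation, or address
by_coords = {"coordinates": "46.948, 7.447"}     # WGS84 lat/lon (format assumed)

# A well-formed call uses exactly one of the two selectors; the server then
# resolves the nearest station automatically.
assert len(by_station) == 1 and len(by_coords) == 1
```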
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and discloses critical behavioral traits: data freshness ('updates every 10 minutes') and geolocation logic ('Automatically finds the nearest station'). It misses error handling or rate limits, but covers the essential operational characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four tightly constructed sentences with zero waste: sentence 1 establishes purpose, sentence 2 details return values, sentence 3 states update cadence, and sentence 4 covers input formats and resolution behavior. Information is front-loaded effectively.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description adequately compensates by listing specific return data types (temperature, precipitation, wind, etc.) and scope (~300 stations). It lacks explicit error-handling documentation but is otherwise complete for tool invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage (baseline 3), the description adds concrete example values ('Zurich', 'SMA', 'Bahnhofplatz 1 Bern') and clarifies the relationship between parameters (coordinates are an 'alternative to station name'), enhancing the raw schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Get') and resource ('real-time weather measurements from ~300 Swiss automatic weather stations'), clearly distinguishing it from siblings like 'meteoswissLocalForecast' (forecast vs. current) and 'meteoswissPollenData' (pollen vs. weather metrics).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through domain terminology ('real-time' vs. siblings with 'forecast' in names) and explains input flexibility, but lacks explicit when-to-use guidance or named alternatives (e.g., it doesn't state 'use meteoswissLocalForecast for future predictions').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
meteoswissLocalForecast
Get a multi-day weather forecast for any Swiss location. Returns daily summaries with temperature, precipitation, and weather icons.
This uses official MeteoSwiss Open Data — the same forecasts powering the MeteoSwiss app and website.
Accepts:
- Postal codes: "8001" (Zurich), "3000" (Bern), "1200" (Geneva)
- Station abbreviations: "ZUE" (Zurich Fluntern), "BER" (Bern)
- Place names: "Zurich", "Basel", "Lugano"
Coverage: ~6000 Swiss locations (all postal codes + weather stations + mountain points). Forecast horizon: up to 9 days. Updated hourly.
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | Number of forecast days (1-9, default 5) | |
| location | Yes | Swiss location: postal code (e.g., "8001"), station abbreviation (e.g., "ZUE"), or place name (e.g., "Zurich") | |
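A hypothetical helper for building arguments, clamping `days` to the documented 1-9 range client-side (the clamping behavior is an assumption; the server may instead reject out-of-range values):

```python
# Build meteoswissLocalForecast arguments; days defaults to 5 per the schema.
def forecast_args(location: str, days: int = 5) -> dict:
    return {"location": location, "days": max(1, min(9, days))}

args = forecast_args("8001", days=14)  # out-of-range request is clamped to 9
```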
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and succeeds well: it discloses data source ('official MeteoSwiss Open Data'), update frequency ('Updated hourly'), coverage limits ('~6000 Swiss locations'), and return format ('daily summaries with temperature, precipitation, and weather icons'). Lacks rate limits or error behavior details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structure: purpose front-loaded in sentence one, provenance in sentence two, then a bulleted list clarifying input formats, followed by coverage metadata. No wasted words; every sentence provides unique information not redundant with the schema or annotations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description compensates thoroughly by describing return values ('daily summaries with temperature, precipitation, and weather icons'), input validation constraints (postal code vs. station formats), and operational bounds (9-day limit, hourly updates). Complete for a 2-parameter read-only tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds significant value by providing richer examples for the location parameter than the schema alone: parenthetical city names for postal codes ('8001' (Zurich)), full station names ('ZUE' (Zurich Fluntern)), and additional place examples ('Basel', 'Lugano'). It also reinforces the days constraint via 'Forecast horizon: up to 9 days'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'Get a multi-day weather forecast for any Swiss location'—a specific verb, resource, and geographic scope. It clearly distinguishes from sibling tools like meteoswissCurrentWeather (multi-day vs. current) and meteoswissPollenData (weather vs. pollen data) through explicit temporal scope ('multi-day', 'up to 9 days').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context for when to use (multi-day forecasts, up to 9 days) and implicitly contrasts with current weather via the name and 'multi-day' phrasing. However, it does not explicitly name sibling alternatives or state when NOT to use it (e.g., 'for current conditions, use meteoswissCurrentWeather instead').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
meteoswissPollenData
Get current pollen concentration data from MeteoSwiss monitoring stations (~15 stations across Switzerland). Shows pollen levels by type (birch, grass, etc.). Useful for allergy sufferers.
| Name | Required | Description | Default |
|---|---|---|---|
| station | No | Pollen monitoring station name or abbreviation. Omit for an overview of all stations. | |
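Two hypothetical argument shapes, following the schema note that omitting `station` returns an overview ("Zurich" is assumed here to be a valid station name):

```python
# meteoswissPollenData argument shapes: an empty arguments object requests
# the all-stations overview; naming a station narrows the result.
overview_args = {}                    # overview of all ~15 stations
station_args = {"station": "Zurich"}  # single-station query (name assumed valid)
```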
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations exist, the description carries the full burden. It adds valuable behavioral context: temporal scope ('current' data), coverage limitations ('~15 stations'), and data granularity ('pollen levels by type'). However, it omits operational details like rate limits, authentication requirements, or error behaviors that annotations would typically cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each earning its place: purpose/scope, data types, and use case. No redundant or filler text. Front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with 100% schema coverage, the description adequately covers data content (pollen types) and scope. It lacks explicit return-value structure (no output schema exists), but 'shows pollen levels' gives partial insight into the response.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds minimal parameter-specific guidance beyond the schema, though '~15 stations across Switzerland' provides useful context for what valid station values might look like. No syntax or format details are added beyond the schema description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: verb 'Get' + resource 'pollen concentration data' + source 'MeteoSwiss monitoring stations'. Explicitly distinguishes from weather-focused siblings via content (birch, grass pollen) and scope (~15 stations across Switzerland).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage context ('Useful for allergy sufferers') but lacks explicit when-to-use/when-not-to-use guidance regarding siblings like meteoswissCurrentWeather or meteoswissLocalForecast. No alternatives mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
meteoswissStations
List and search MeteoSwiss automatic weather stations. Filter by name, canton, or browse the full network of ~160 stations across Switzerland.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results (1-200, default 20) | |
| canton | No | Filter by canton abbreviation (e.g., "ZH", "BE", "GR") | |
| search | No | Search by station name or abbreviation | |
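A sketch of a filtered station query (the client-side clamp to the schema's 1-200 range is an assumption, added for illustration):

```python
# Hypothetical meteoswissStations query: stations in canton Grisons,
# with the limit kept inside the documented 1-200 range (default 20).
requested = 50
arguments = {
    "canton": "GR",                        # canton abbreviation
    "limit": max(1, min(200, requested)),  # schema allows 1-200
}
```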
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable context about the data scope (~160 stations across Switzerland) but does not explicitly confirm read-only safety, rate limits, or return structure that annotations would typically cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence with zero waste. It front-loads the core action ('List and search') and follows with filtering options and network scope. Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple listing tool with 3 optional parameters and 100% schema coverage, the description is adequate. It explains the conceptual return (stations) and data volume. It could be improved by noting that results typically feed into sibling weather tools, but this is not strictly necessary for basic completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all three parameters (limit, canton, search). The description mentions filtering by name and canton, which aligns with the schema, but adds no additional semantic context beyond what the schema already provides. Baseline 3 is appropriate given complete schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'List and search MeteoSwiss automatic weather stations,' providing a specific verb and resource. It distinguishes itself from siblings like meteoswissCurrentWeather and meteoswissLocalForecast by focusing on station metadata rather than weather data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains filtering capabilities (by name, canton) and mentions browsing the full network, implying exploratory usage. However, it lacks explicit guidance on when to use this versus the generic 'search' tool or prerequisites for using weather-related siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search
Search MeteoSwiss website content in multiple languages (DE, FR, IT, EN). Returns relevant pages with URLs that can be passed to the fetch tool. Note: pagination may return duplicate results across pages (upstream API limitation).
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number for pagination (1-based) | |
| sort | No | Sort order for results. Note: date-asc severely degrades relevance — results are dominated by page age rather than query match. | relevance |
| query | Yes | The search query string | |
| language | No | The language for search results | de |
| pageSize | No | Number of results per page (max 100) | |
| contentType | No | Filter by content type. Defaults to "content" to exclude application pages. Use "publication" for official reports. | content |
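The search-then-fetch workflow described above can be sketched as follows. Parsing of the search response is omitted because no output schema is published; the helper name is illustrative, not part of the server.

```python
# Step 1: hypothetical search arguments (German-language query, small page).
search_args = {"query": "Klimawandel Temperaturen", "language": "de", "pageSize": 5}

# Step 2: a URL taken from a search result becomes the fetch tool's id.
def fetch_args_from(result_url: str) -> dict:
    return {"id": result_url, "format": "markdown"}

example = fetch_args_from(
    "https://www.meteoschweiz.admin.ch/klima/klimawandel/steigende-temperaturen.html"
)
```

When paginating, remember the documented caveat that pages may contain duplicate results.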
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It adds valuable behavioral context about pagination returning duplicate results (an upstream API limitation) and clarifies the return format (pages with URLs). Missing: error handling, rate limits, and authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, zero waste. Front-loaded with specific purpose (Search MeteoSwiss...), followed by output description and pagination warning. Every sentence provides unique value not duplicated in structured fields.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a 6-parameter search tool with no output schema. Description compensates by stating what gets returned (pages with URLs) and warns about pagination quirks. Would benefit from mention of error states or empty result handling, but sufficient given rich schema coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema comprehensively documents all 6 parameters including detailed warnings (e.g., date-asc degrades relevance). Description mentions multi-language capability but doesn't add significant parameter-specific semantics beyond what the schema provides. Baseline 3 appropriate when schema does heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool searches MeteoSwiss website content, specifies the multi-language capability (DE, FR, IT, EN), and distinguishes itself from sibling weather-data tools (meteoswissCurrentWeather, etc.) by focusing on website content retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly mentions that returned URLs 'can be passed to the fetch tool', establishing the intended workflow with sibling tool 'fetch'. However, it doesn't explicitly state when to use search versus directly using fetch if a URL is already known.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.