climate
Server Details
Climate MCP — wraps Open-Meteo Climate API (free, no auth)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-climate
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.6/5, with 2 of 2 tools scored.
The two tools have clearly distinct purposes: compare_models focuses on comparing daily mean temperature across three specific models for a date range, while get_climate_projection provides long-term temperature and precipitation data from a single model. There is no overlap in functionality, making it easy for an agent to select the right tool.
Both tools follow a consistent verb_noun pattern (compare_models and get_climate_projection) using snake_case. The naming is predictable and readable, with no deviations in style or convention.
With only two tools, the server feels under-scoped for a climate domain that could include more operations like historical data retrieval, model-specific projections, or additional climate variables. Two tools is too few to adequately cover the apparent scope of climate data analysis.
The tool set has significant gaps: it lacks broader retrieval operations, such as historical data lookup, access to other climate models individually, or variables beyond temperature and precipitation. This incompleteness will likely cause agent failures on common climate analysis tasks.
Available Tools
2 tools

compare_models (quality grade: B)
Compare daily mean temperature projections across three climate models (EC_Earth3P_HR, MPI_ESM1_2_XR, FGOALS_f3_H) for a location and date range.
| Name | Required | Description | Default |
|---|---|---|---|
| end_date | Yes | End date in YYYY-MM-DD format (must be between 1950 and 2050). | |
| latitude | Yes | Latitude of the location in decimal degrees. | |
| longitude | Yes | Longitude of the location in decimal degrees. | |
| start_date | Yes | Start date in YYYY-MM-DD format (must be between 1950 and 2050). | |
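Agents invoke tools like this one through MCP's JSON-RPC `tools/call` method. The sketch below builds such a request for `compare_models` and enforces the documented date constraints locally before sending; the helper name and the Berlin coordinates are illustrative, not part of the server.

```python
import json
from datetime import date

def build_compare_models_call(latitude, longitude, start_date, end_date, request_id=1):
    """Build a JSON-RPC 2.0 `tools/call` request for compare_models.

    Validates the documented constraints locally: dates must be
    YYYY-MM-DD and fall between 1950 and 2050.
    """
    for d in (start_date, end_date):
        parsed = date.fromisoformat(d)  # raises ValueError on a bad format
        if not 1950 <= parsed.year <= 2050:
            raise ValueError(f"{d} is outside the supported 1950-2050 range")
    if start_date > end_date:  # ISO dates compare correctly as strings
        raise ValueError("start_date must not be after end_date")
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "compare_models",
            "arguments": {
                "latitude": latitude,
                "longitude": longitude,
                "start_date": start_date,
                "end_date": end_date,
            },
        },
    }

# Example: compare model projections for Berlin over one decade.
payload = build_compare_models_call(52.52, 13.41, "2030-01-01", "2039-12-31")
print(json.dumps(payload, indent=2))
```

Validating the 1950-2050 window client-side avoids a round trip for requests the server would reject anyway.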
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While it states that the tool compares temperature projections, it doesn't reveal important behavioral traits: what format the comparison output takes (e.g., table, chart, summary statistics), whether there are rate limits or authentication requirements, or how missing data is handled. The description is insufficient for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently communicates the tool's purpose without any wasted words. It's appropriately sized and front-loaded with the core functionality, making it easy for an AI agent to quickly understand what the tool does.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of comparing climate models and the absence of both annotations and an output schema, the description is incomplete. It doesn't explain what the comparison output looks like, how differences are presented, or any behavioral constraints. For a tool with no structured metadata about its behavior or outputs, the description should provide more contextual information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with all four parameters clearly documented in the input schema (latitude, longitude, start_date, end_date with format and range constraints). The description doesn't add any parameter-specific information beyond what's already in the schema, so it meets the baseline score of 3 for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('compare daily mean temperature projections') and identifies the exact resources involved (three named climate models: EC_Earth3P_HR, MPI_ESM1_2_XR, FGOALS_f3_H). It distinguishes this tool from its sibling 'get_climate_projection' by specifying it compares across multiple models rather than retrieving a single projection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying what the tool does (compare three specific models), but it doesn't explicitly state when to use this tool versus its sibling 'get_climate_projection' or provide any exclusion criteria. The context is clear but lacks explicit guidance on alternatives or when-not scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_climate_projection (quality grade: A)
Get long-term climate projection data (temperature and precipitation) for a location using the EC_Earth3P_HR high-resolution climate model via the Open-Meteo Climate API. Date range must be between 1950 and 2050.
| Name | Required | Description | Default |
|---|---|---|---|
| end_date | Yes | End date in YYYY-MM-DD format (must be between 1950 and 2050). | |
| latitude | Yes | Latitude of the location in decimal degrees. | |
| longitude | Yes | Longitude of the location in decimal degrees. | |
| start_date | Yes | Start date in YYYY-MM-DD format (must be between 1950 and 2050). | |
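Under the hood this tool presumably queries the Open-Meteo Climate API directly. A sketch of the upstream URL it would build follows; the endpoint and daily variable names (`temperature_2m_mean`, `precipitation_sum`) are taken from Open-Meteo's public documentation, not from this server's source, so verify them before relying on this.

```python
from urllib.parse import urlencode

# Public Open-Meteo Climate API endpoint (free, no auth); parameter
# names should be checked against the current Open-Meteo docs.
CLIMATE_API = "https://climate-api.open-meteo.com/v1/climate"

def climate_projection_url(latitude, longitude, start_date, end_date):
    """Build the upstream request that get_climate_projection likely issues."""
    query = urlencode({
        "latitude": latitude,
        "longitude": longitude,
        "start_date": start_date,
        "end_date": end_date,
        "models": "EC_Earth3P_HR",
        "daily": "temperature_2m_mean,precipitation_sum",
    })
    return f"{CLIMATE_API}?{query}"

# Example: a decade of projections for Paris.
print(climate_projection_url(48.85, 2.35, "2040-01-01", "2049-12-31"))
```

Pinning `models` to `EC_Earth3P_HR` matches the tool description; the underlying API accepts other model identifiers as well.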
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the date range constraint but lacks details on permissions, rate limits, error handling, or response format. For a data-fetching tool with no annotation coverage, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose, data source, model, and constraints without any wasted words. It is appropriately sized and front-loaded with essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of climate data retrieval and the absence of both annotations and an output schema, the description is moderately complete. It covers the core functionality and constraints but lacks details on behavioral aspects like response format, error conditions, or performance characteristics, which are important for effective tool use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all parameters clearly documented in the input schema (latitude, longitude, start_date, end_date). The description adds value by specifying the date range constraint (1950-2050) and mentioning the climate model, but does not provide additional semantic details beyond what the schema already covers.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get'), resource ('long-term climate projection data'), and scope ('temperature and precipitation for a location'), distinguishing it from the sibling tool 'compare_models' by specifying a single model (EC_Earth3P_HR) and data source (Open-Meteo Climate API).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context by specifying the date range constraint (1950-2050) and the high-resolution model used, which helps determine when to use this tool. However, it does not explicitly mention when not to use it or compare it to the sibling 'compare_models' tool for alternative scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
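Before publishing, the manifest can be sanity-checked locally. The helper below is purely illustrative (Glama's actual verification logic is not public, and the function name and sample email are made up for the example):

```python
import json

def validate_glama_manifest(raw, account_email):
    """Check a /.well-known/glama.json document against the expected shape.

    Illustrative only: mirrors the structure shown above, not Glama's
    real verification code.
    """
    doc = json.loads(raw)
    maintainers = doc.get("maintainers", [])
    if not maintainers:
        raise ValueError("maintainers list is empty")
    emails = [m.get("email") for m in maintainers]
    if account_email not in emails:
        raise ValueError("no maintainer email matches the Glama account")
    return True

manifest = (
    '{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
    ' "maintainers": [{"email": "owner@example.com"}]}'
)
print(validate_glama_manifest(manifest, "owner@example.com"))  # True
```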
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!