climate
Server Details
Climate MCP — wraps Open-Meteo Climate API (free, no auth)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-climate
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 7 of 7 tools scored. Lowest: 2.9/5.
The tools have some clear distinctions, such as compare_models vs get_climate_projection for climate data, and the memory tools (remember, recall, forget) form a coherent set. However, ask_pipeworx and discover_tools overlap significantly in purpose—both help find or access tools/data without specifying schemas, which could cause confusion for an agent trying to choose between them. This overlap reduces clarity in the tool set.
The naming is inconsistent with mixed conventions: ask_pipeworx and discover_tools use verb_noun patterns, but get_climate_projection and compare_models use verb_adjective_noun, while remember, recall, and forget are single verbs. There's no uniform style, making the set less predictable and harder for an agent to parse systematically.
With 7 tools, the count is reasonable and well-scoped for a climate-focused server. It includes core climate data tools, memory management, and utility functions, avoiding bloat. The overlap between ask_pipeworx and discover_tools slightly reduces efficiency, but overall the number is appropriate for the domain.
The server covers key areas like climate projections and memory management, but there are notable gaps. For a climate domain, tools for historical data, emissions, or policy information are missing, limiting comprehensive coverage. The memory tools provide good session support, but the climate-specific surface feels incomplete for broader agent tasks.
Available Tools
7 tools
ask_pipeworx (Grade A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
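For illustration, a minimal `tools/call` payload for this tool might look like the sketch below. It assumes the standard MCP `tools/call` request shape, and the question is taken from the examples in the description above.

```python
# Hypothetical MCP tools/call payload for ask_pipeworx (illustration only).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {
            # Free-form natural language question, as in the examples above.
            "question": "What is the US trade deficit with China?"
        },
    },
}
```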
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: it interprets natural language questions, selects appropriate data sources, executes queries, and returns results. However, it doesn't mention limitations like response time, error handling, or data source constraints, leaving some behavioral aspects unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured: the first sentence states the core functionality, the second explains the mechanism, the third provides usage guidance, and examples follow. Every sentence adds value without redundancy, making it easy to understand the tool's purpose and use case quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (natural language processing and tool selection) and the absence of both annotations and an output schema, the description does a good job explaining what the tool does and how to use it. However, it doesn't describe the format or structure of returned answers, which would be helpful since there's no output schema to provide that information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the single parameter 'question' well-documented in the schema. The description adds value by emphasizing the natural language aspect ('plain English', 'describe what you need') and providing concrete examples that illustrate the expected format and scope of questions, going beyond the schema's basic documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('answer'), and mechanism ('Pipeworx picks the right tool, fills the arguments'). The examples further illustrate the scope and distinguish it from sibling tools like 'compare_models' or 'discover_tools'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'No need to browse tools or learn schemas — just describe what you need.' This provides clear guidance to use this tool for natural language queries instead of manually selecting or configuring other tools. The examples reinforce this by showing typical use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_models (Grade B)
Compare temperature projections across climate models for the same location and date range. Returns side-by-side daily mean temperatures to assess model agreement and uncertainty.
| Name | Required | Description | Default |
|---|---|---|---|
| end_date | Yes | End date in YYYY-MM-DD format (must be between 1950 and 2050). | |
| latitude | Yes | Latitude of the location in decimal degrees. | |
| longitude | Yes | Longitude of the location in decimal degrees. | |
| start_date | Yes | Start date in YYYY-MM-DD format (must be between 1950 and 2050). | |
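A sketch of a possible call, assuming the standard MCP `tools/call` request shape; the coordinates and date range are made-up example values within the documented 1950–2050 window.

```python
# Hypothetical comparison request (illustration only; values are made up).
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "compare_models",
        "arguments": {
            "latitude": 52.52,            # decimal degrees (example value)
            "longitude": 13.41,           # decimal degrees (example value)
            "start_date": "2020-01-01",   # YYYY-MM-DD, within 1950-2050
            "end_date": "2020-12-31",     # YYYY-MM-DD, within 1950-2050
        },
    },
}
```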
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While it states the tool compares temperature projections, it doesn't reveal important behavioral traits such as what format the comparison output takes (e.g., table, chart, summary statistics), whether there are rate limits, authentication requirements, or how missing data is handled. The description is insufficient for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently communicates the tool's purpose without any wasted words. It's appropriately sized and front-loaded with the core functionality, making it easy for an AI agent to quickly understand what the tool does.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of comparing climate models and the absence of both annotations and an output schema, the description is incomplete. It doesn't explain what the comparison output looks like, how differences are presented, or any behavioral constraints. For a tool with no structured metadata about its behavior or outputs, the description should provide more contextual information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with all four parameters clearly documented in the input schema (latitude, longitude, start_date, end_date with format and range constraints). The description doesn't add any parameter-specific information beyond what's already in the schema, so it meets the baseline score of 3 for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('compare daily mean temperature projections') and identifies the exact resources involved (three named climate models: EC_Earth3P_HR, MPI_ESM1_2_XR, FGOALS_f3_H). It distinguishes this tool from its sibling 'get_climate_projection' by specifying it compares across multiple models rather than retrieving a single projection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying what the tool does (compare three specific models), but it doesn't explicitly state when to use this tool versus its sibling 'get_climate_projection' or provide any exclusion criteria. The context is clear but lacks explicit guidance on alternatives or when-not scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
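A possible catalog search, sketched as a standard MCP `tools/call` payload; the query string is one of the schema's own examples, and the limit is an arbitrary illustration value.

```python
# Hypothetical catalog search (illustration only).
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "discover_tools",
        "arguments": {
            "query": "find trade data between countries",  # example from the schema
            "limit": 10,  # optional; defaults to 20, capped at 50
        },
    },
}
```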
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It explains the search functionality and returns 'the most relevant tools with names and descriptions', but lacks details on error handling, performance characteristics (e.g., response time), or any limitations beyond the limit parameter mentioned in the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by usage guidance in the second. Both sentences earn their place by providing essential information without redundancy, making it highly efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search functionality with 2 parameters) and no annotations or output schema, the description is mostly complete. It covers purpose, usage context, and high-level behavior, but could improve by addressing potential search limitations or result format details to fully compensate for the lack of structured output information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the baseline is 3. The description adds minimal parameter semantics beyond the schema, mentioning 'describing what you need' which aligns with the query parameter but doesn't provide additional context or examples not already in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Search the Pipeworx tool catalog') and resource ('tool catalog'), and explicitly distinguishes it from siblings by emphasizing its role in finding tools among 500+ options, unlike compare_models or get_climate_projection which serve different functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Call this FIRST when you have 500+ tools available and need to find the right ones for your task'), including a specific condition (500+ tools) and a clear alternative scenario (using it as an initial step rather than directly invoking other tools).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade C)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
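A minimal deletion request, assuming the standard MCP `tools/call` shape; the key shown is a made-up example borrowed from the remember tool's schema.

```python
# Hypothetical deletion request (illustration only; the key is a made-up example).
request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {
        "name": "forget",
        "arguments": {
            "key": "target_ticker",  # key of a previously stored memory
        },
    },
}
```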
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Delete' implies a destructive mutation, it doesn't specify whether deletion is permanent, reversible, requires specific permissions, or has side effects. It also doesn't describe the response format or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero wasted words. It's front-loaded with the core action and resource, making it immediately understandable without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no annotations and no output schema, the description is incomplete. It doesn't address critical behavioral aspects like permanence, error handling, or response format. Given the complexity of a delete operation and lack of structured coverage, more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'key' parameter fully documented. The description adds no additional meaning beyond what the schema provides (e.g., no context about valid key formats or examples). With high schema coverage, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and the target resource ('a stored memory by key'), providing a specific verb+resource combination. However, it doesn't differentiate from sibling tools like 'recall' or 'remember', which likely handle memory retrieval/creation rather than deletion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing memory key), exclusions, or relationships to sibling tools like 'recall' (for retrieval) or 'remember' (for creation).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_climate_projection (Grade A)
Get historical and future temperature and precipitation projections for a location (1950–2050). Returns daily forecasts with temperature, precipitation, and weather conditions.
| Name | Required | Description | Default |
|---|---|---|---|
| end_date | Yes | End date in YYYY-MM-DD format (must be between 1950 and 2050). | |
| latitude | Yes | Latitude of the location in decimal degrees. | |
| longitude | Yes | Longitude of the location in decimal degrees. | |
| start_date | Yes | Start date in YYYY-MM-DD format (must be between 1950 and 2050). | |
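A sketch of a projection request, again assuming the standard MCP `tools/call` shape; the location and dates are arbitrary illustration values within the documented range.

```python
# Hypothetical projection request (illustration only; values are made up).
request = {
    "jsonrpc": "2.0",
    "id": 5,
    "method": "tools/call",
    "params": {
        "name": "get_climate_projection",
        "arguments": {
            "latitude": 40.71,            # decimal degrees (example value)
            "longitude": -74.01,          # decimal degrees (example value)
            "start_date": "2030-01-01",   # YYYY-MM-DD, within 1950-2050
            "end_date": "2030-12-31",     # YYYY-MM-DD, within 1950-2050
        },
    },
}
```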
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the date range constraint but lacks details on permissions, rate limits, error handling, or response format. For a data-fetching tool with no annotation coverage, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose, data source, model, and constraints without any wasted words. It is appropriately sized and front-loaded with essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of climate data retrieval and the absence of both annotations and an output schema, the description is moderately complete. It covers the core functionality and constraints but lacks details on behavioral aspects like response format, error conditions, or performance characteristics, which are important for effective tool use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all parameters clearly documented in the input schema (latitude, longitude, start_date, end_date). The description adds value by specifying the date range constraint (1950-2050) and mentioning the climate model, but does not provide additional semantic details beyond what the schema already covers.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get'), resource ('long-term climate projection data'), and scope ('temperature and precipitation for a location'), distinguishing it from the sibling tool 'compare_models' by specifying a single model (EC_Earth3P_HR) and data source (Open-Meteo Climate API).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context by specifying the date range constraint (1950-2050) and the high-resolution model used, which helps determine when to use this tool. However, it does not explicitly mention when not to use it or compare it to the sibling 'compare_models' tool for alternative scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
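A retrieval request sketched under the same assumptions as the calls above; the key is a made-up example taken from the remember tool's schema, and omitting it would list all stored keys.

```python
# Hypothetical retrieval request (illustration only; the key is a made-up example).
request = {
    "jsonrpc": "2.0",
    "id": 6,
    "method": "tools/call",
    "params": {
        "name": "recall",
        "arguments": {
            "key": "subject_property",  # omit this field entirely to list all stored keys
        },
    },
}
```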
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's dual behavior (retrieve by key vs. list all) and persistence scope ('session or previous sessions'), though it doesn't mention potential limitations like memory size constraints or retrieval failures.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each earn their place. The first sentence explains the core functionality, and the second provides usage context. No wasted words, and information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no annotations and no output schema, the description provides good coverage of purpose, usage, and parameter semantics. However, it doesn't describe the return format (what a 'memory' contains or the structure of the list), which would be helpful given the lack of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds meaningful context by explaining the semantic effect of omitting the key parameter ('omit to list all keys'), which clarifies the tool's dual functionality beyond what the schema alone provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory', 'all stored memories'). It distinguishes from sibling tools like 'remember' (store) and 'forget' (delete) by focusing on retrieval operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'to retrieve context you saved earlier in the session or in previous sessions.' It also specifies when to omit the key parameter ('omit key to list all keys'), giving clear operational instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
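A store request sketched as a standard MCP `tools/call` payload; the key comes from the schema's examples, and the value text is a hypothetical illustration.

```python
# Hypothetical store request (illustration only; the value is a made-up example).
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "remember",
        "arguments": {
            "key": "user_preference",                    # example key from the schema
            "value": "Prefers temperatures in Celsius",  # any text value
        },
    },
}
```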
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the persistence differences between authenticated users ('persistent memory') and anonymous sessions ('last 24 hours'), and the tool's purpose for cross-call context. However, it lacks details on potential limitations like storage capacity or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with two sentences that efficiently convey purpose and behavioral context. Every sentence earns its place: the first states the core function, and the second adds critical persistence details, with no wasted words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is largely complete. It covers purpose, usage context, and key behavioral traits like persistence. However, it lacks information on return values or error handling, which would be beneficial since there's no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with both parameters ('key' and 'value') well-documented in the schema. The description does not add significant meaning beyond the schema, as it only generically references 'key-value pair' without providing additional syntax, format, or constraints. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('store a key-value pair') and resource ('in your session memory'), distinguishing it from sibling tools like 'recall' (retrieve) and 'forget' (remove). It explicitly mentions what can be stored ('intermediate findings, user preferences, or context across tool calls'), making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('to save intermediate findings, user preferences, or context across tool calls'), but does not explicitly mention when not to use it or name alternatives. It implies usage scenarios without specifying exclusions or comparing to siblings like 'recall' for retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.