flood
Server Details
Flood MCP — wraps Open-Meteo Flood API (free, no auth)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-flood
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 6 of 6 tools scored. Lowest: 2.9/5.
Most tools have distinct purposes, but get_flood_forecast and get_river_discharge overlap significantly as both provide flood-related discharge data, which could cause confusion. The other tools (discover_tools, forget, recall, remember) are clearly differentiated for tool discovery and memory management.
The naming follows a consistent verb_noun pattern (e.g., discover_tools, get_flood_forecast, recall), with all tools using snake_case. However, the verb 'get' is used for two flood tools while others use different verbs, which is a minor deviation from perfect consistency.
With 6 tools, the count is well-scoped for a server that combines flood forecasting with memory management and tool discovery. Each tool appears to serve a specific function without being overly sparse or bloated, fitting typical server scopes of 3-15 tools.
The server covers flood forecasting and memory operations, but there are notable gaps. For flood forecasting, it lacks tools for historical data, alerts, or multi-location queries, which are common in such domains. The memory tools are complete for basic CRUD, but the flood side feels incomplete.
Available Tools
7 tools
ask_pipeworx (A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
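As a rough illustration, a call to this tool over MCP might look like the hypothetical tools/call request below. The JSON-RPC envelope follows the standard MCP shape, and the question text is taken from the description's own examples.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
```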
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses key behavioral traits: the tool automatically selects the right underlying tool, fills in its arguments, and returns the result. However, it lacks details on limitations (e.g., data source availability, error handling, rate limits) or output format, which matter for a tool with no output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by operational details and examples. Every sentence adds value: the first defines the tool, the second explains its automation, and the third gives practical examples. It is efficiently structured without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (a natural language query tool with automated backend processing), no annotations, and no output schema, the description is incomplete. It explains the input well but lacks information on what the output looks like (e.g., structured data, text summary) or potential constraints, which are critical for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the baseline is 3. The description adds value by emphasizing 'plain English' and 'natural language' for the question parameter, and provides concrete examples that illustrate the expected format and scope, enhancing understanding beyond the schema's generic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('answer'), and distinguishes from siblings by emphasizing natural language input and automated tool selection, unlike sibling tools like 'get_flood_forecast' which are specific to certain data types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'No need to browse tools or learn schemas — just describe what you need.' It contrasts with alternatives by indicating this tool handles tool selection automatically, and includes examples ('What is the US trade deficit with China?', etc.) to illustrate appropriate use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (max 50) | 20 |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
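A hypothetical catalog-search request might look like this; the query string is borrowed from the schema's own examples, and limit is optional (the schema's default is 20, capped at 50).

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "discover_tools",
    "arguments": {
      "query": "analyze housing market trends",
      "limit": 10
    }
  }
}
```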
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes what the tool does (searching a tool catalog), what it returns (most relevant tools with names and descriptions), and when to use it. However, it doesn't mention potential limitations like rate limits, authentication needs, or error conditions, leaving some behavioral aspects uncovered.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core purpose, the second explains the return value, and the third provides crucial usage guidance. Every sentence earns its place with no wasted words, making it highly efficient for an AI agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search functionality with 2 parameters), no annotations, and no output schema, the description does a good job of explaining purpose, usage, and returns. However, it doesn't describe the format of returned results (e.g., structured list vs. raw text) or potential errors, leaving some gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters (query and limit). The description doesn't add any parameter-specific information beyond what's in the schema, such as explaining how the query is processed or providing additional examples. This meets the baseline of 3 when schema coverage is high.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Search the Pipeworx tool catalog') and resource ('tool catalog'), and explicitly distinguishes it from siblings by emphasizing it should be called 'FIRST when you have 500+ tools available'. That sets it apart from data retrieval siblings such as get_flood_forecast and get_river_discharge, which fetch specific data rather than search the catalog.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This gives clear context about when to use this tool (large catalog scenarios) versus alternatives; although it doesn't name specific alternatives, the guidance is sufficiently directive for the agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (C)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
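A minimal sketch of a deletion request; the key is borrowed from the example keys in the remember tool's schema and is purely illustrative.

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "forget",
    "arguments": {
      "key": "subject_property"
    }
  }
}
```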
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but only states the action ('Delete') without disclosing behavioral traits such as whether deletion is permanent, requires specific permissions, or has side effects. It adds minimal value beyond the basic action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste, front-loading the core action. It is appropriately sized for a simple tool with one parameter.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a deletion tool with no annotations and no output schema, the description is incomplete. It lacks details on behavioral implications (e.g., permanence, errors) and doesn't compensate for the missing structured data, leaving gaps in understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the 'key' parameter fully. The description adds no additional meaning beyond what the schema provides, such as examples or constraints, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Delete') and resource ('a stored memory by key'), making the purpose unambiguous. It doesn't explicitly differentiate from sibling tools like 'recall' or 'remember', but the action is specific enough to imply distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'recall' (which likely retrieves memories) or 'remember' (which likely stores them). The description lacks context about prerequisites or exclusions, leaving usage unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_flood_forecast (C)
Get multi-day flood forecast for a location with current, mean, and peak river discharge projections. Returns discharge statistics and severity assessment. Use to evaluate flood risk and timeline.
| Name | Required | Description | Default |
|---|---|---|---|
| latitude | Yes | Latitude of the location in decimal degrees. | |
| longitude | Yes | Longitude of the location in decimal degrees. | |
| forecast_days | No | Number of forecast days to retrieve (1–92). | 16 |
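A sketch of a forecast request with illustrative coordinates (roughly central London); if forecast_days were omitted, the server would fall back to the 16-day default noted in the schema.

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "get_flood_forecast",
    "arguments": {
      "latitude": 51.51,
      "longitude": -0.13,
      "forecast_days": 10
    }
  }
}
```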
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. While it mentions what data is returned, it doesn't describe important behavioral aspects like whether this is a read-only operation, potential rate limits, authentication requirements, error conditions, or data freshness. For a forecasting tool with no annotation coverage, this leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three short, efficient sentences that state the tool's purpose, what it returns, and when to use it. It's appropriately sized for a simple data retrieval tool and front-loads the key information, with no wasted verbiage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a forecasting tool with 3 parameters and no output schema, the description provides basic purpose but lacks important context. It doesn't explain what format the forecast data returns, temporal resolution, confidence intervals, or how to interpret the discharge values. With no annotations and no output schema, the description should do more to help the agent understand what to expect.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, providing clear documentation for all three parameters. The description adds no parameter-specific information beyond what's in the schema. With complete schema coverage, the baseline score of 3 is appropriate as the description doesn't enhance parameter understanding but doesn't need to compensate for gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get multi-day flood forecast for a location with current, mean, and peak river discharge projections.' It specifies the verb ('Get'), resource ('flood forecast'), and key data elements. However, it doesn't explicitly differentiate from the sibling tool 'get_river_discharge' beyond framing the result as a multi-day forecast with a severity assessment.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description offers only a brief hint ('Use to evaluate flood risk and timeline') and no guidance on when to use this tool versus the sibling 'get_river_discharge'. It doesn't mention any prerequisites, alternatives, or exclusions. The agent must infer the distinction from the tool names and descriptions alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_river_discharge (A)
Get daily river discharge forecast in cubic meters per second for a location. Returns discharge values with timestamps. Use to monitor water flow rates for flood risk assessment.
| Name | Required | Description | Default |
|---|---|---|---|
| latitude | Yes | Latitude of the location in decimal degrees. | |
| longitude | Yes | Longitude of the location in decimal degrees. | |
| forecast_days | No | Number of forecast days to retrieve (1–92). | 7 |
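A comparable sketch for the discharge tool, again with illustrative coordinates; omitting forecast_days here uses the 7-day default.

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "get_river_discharge",
    "arguments": {
      "latitude": 51.51,
      "longitude": -0.13
    }
  }
}
```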
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It specifies the units (cubic meters per second) and the intended use (monitoring water flow rates for flood risk assessment), which adds useful context beyond what the input schema provides. However, it doesn't describe important behavioral aspects like rate limits, authentication requirements, data freshness, or what the response format looks like.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three short, efficient sentences that communicate the essential information without wasted words. It's appropriately sized for this type of data retrieval tool and front-loads the key information (what it gets, in what units, and what it's for).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a data retrieval tool with 3 parameters, 100% schema coverage, but no output schema and no annotations, the description is adequate but has clear gaps. It explains what data is retrieved but doesn't describe the response format, which is important since there's no output schema. The mention of specific units and timestamps adds useful context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so all parameters are well-documented in the schema. The description doesn't add any additional parameter semantics beyond what's already in the schema descriptions. It mentions 'a location', which aligns with the latitude/longitude parameters, but provides no new information about parameter usage or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get daily river discharge forecast'), resource ('river discharge forecast'), units ('cubic meters per second'), and intended use ('monitor water flow rates for flood risk assessment'). It distinguishes from the sibling tool 'get_flood_forecast' by specifying it's for raw discharge data rather than a general flood forecast with severity assessment.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool (for river discharge forecasts at geographic locations), but doesn't explicitly state when NOT to use it or provide specific alternatives. The existence of a sibling tool 'get_flood_forecast' suggests there are related alternatives, but the description doesn't explain the distinction between them.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
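A hypothetical retrieval request; passing a key fetches one memory, while sending an empty arguments object (omitting key) would list all stored keys instead.

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "recall",
    "arguments": { "key": "subject_property" }
  }
}
```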
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It explains the dual behavior (retrieve by key or list all) and mentions persistence across sessions, which is valuable. However, it doesn't disclose error handling, response format, or whether listing all memories might be resource-intensive for large datasets.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first explains the dual functionality, and the second provides usage context. Every word contributes to understanding without redundancy, making it appropriately concise and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description covers basic functionality well but lacks details on return values, error cases, or performance considerations. It's adequate for simple use but could be more complete given the absence of structured metadata.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaningful context beyond the 100% schema coverage. While the schema documents that 'key' is optional for listing, the description clarifies the semantic choice: 'omit key to list all keys' and ties it to retrieving 'context you saved earlier.' This enhances understanding of parameter usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory', 'all stored memories'). It distinguishes from siblings like 'remember' (store) and 'forget' (delete) by focusing on retrieval operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'to retrieve context you saved earlier in the session or in previous sessions.' It also specifies when to omit the key parameter to list all memories, giving clear usage instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
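A sketch of a store request using one of the schema's example keys; the value is a made-up note, since any text is accepted.

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "remember",
    "arguments": {
      "key": "subject_property",
      "value": "3-bed property at 12 Example Road; flood forecast check pending"
    }
  }
}
```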
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and adds valuable behavioral context beyond basic functionality. It discloses persistence traits ('Authenticated users get persistent memory; anonymous sessions last 24 hours'), which are crucial for understanding data retention and session behavior, though it does not cover aspects like error handling or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two sentences that are front-loaded and efficient. The first sentence states the core purpose, and the second adds essential behavioral context without redundancy, making every sentence earn its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (storage with persistence rules), no annotations, and no output schema, the description is mostly complete. It covers purpose, usage, and key behavioral traits, but lacks details on return values or error cases, which would be beneficial for full contextual understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters ('key' and 'value') with examples. The description does not add significant meaning beyond what the schema provides, such as explaining parameter interactions or constraints, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Store a key-value pair') and resource ('in your session memory'), distinguishing it from siblings like 'forget' (delete) and 'recall' (retrieve). It explicitly mentions what can be stored ('intermediate findings, user preferences, or context across tool calls'), making the purpose unambiguous and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('to save intermediate findings, user preferences, or context across tool calls'), but does not explicitly state when not to use it or name alternatives. It implies usage for persistence needs but lacks explicit exclusions or comparisons to sibling tools like 'recall' for retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.