Server Details

Marine MCP — wraps marine-api.open-meteo.com (free, no auth)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-marine
GitHub Stars: 0
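
The server wraps the public Open-Meteo Marine API. Below is a minimal sketch of the kind of upstream request it presumably issues; the endpoint and variable names follow Open-Meteo's published docs, but exactly which variables mcp-marine requests is an assumption.

```python
# Sketch of the upstream Open-Meteo Marine API call this server wraps.
# Endpoint and variable names follow the public Open-Meteo docs; the exact
# variables mcp-marine requests are an assumption.
import requests

resp = requests.get(
    "https://marine-api.open-meteo.com/v1/marine",
    params={
        "latitude": 21.28,    # example coordinates: Waikiki, HI
        "longitude": -157.83,
        "current": "wave_height,wave_direction,wave_period",
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["current"])  # e.g. {"wave_height": 1.2, ...}
```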

Tool Descriptions: A

Average 4/5 across 6 of 6 tools scored. Lowest: 3.2/5.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes, with clear separation between weather-related tools (get_current_waves, get_wave_forecast) and memory tools (remember, recall, forget). However, discover_tools is somewhat ambiguous as it could be confused with a general search function rather than specifically for the Pipeworx tool catalog, potentially causing misselection in contexts where tool discovery is needed.

Naming Consistency: 4/5

The naming follows a consistent verb_noun pattern, such as get_current_waves, get_wave_forecast, and discover_tools. There are minor deviations with recall and forget, which are bare verbs, but they still fit the action-oriented style, and nothing mixes camelCase into the prevailing snake_case.

Tool Count: 4/5

With 6 tools, the count is reasonable for the server's purpose, which appears to combine marine weather and memory management. It falls comfortably within the typical 3-15 range and is well-scoped, with each tool serving a clear function without unnecessary redundancy or excessive complexity.

Completeness: 3/5

The tool surface covers basic marine weather (current and forecast) and memory operations (store, retrieve, delete), but there are notable gaps. For example, there is no tool for historical wave data, location-based searches beyond the catalog, or more advanced memory management like bulk operations, which could limit agent workflows in comprehensive marine or data-handling tasks.

Available Tools (7 tools)
ask_pipeworx: A

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
question (required): Your question or request in natural language
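
Since the transport is streamable HTTP, invoking this tool is an ordinary MCP tools/call request. A sketch of the payload, using one of the example questions quoted above (the request id is arbitrary):

```python
# MCP tools/call payload for ask_pipeworx; the envelope follows the MCP
# JSON-RPC spec, and the question is one of the examples from the description.
import json

call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}
print(json.dumps(call, indent=2))
```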
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that Pipeworx 'picks the right tool, fills the arguments, and returns the result,' which adds useful behavioral context about automation. However, it lacks details on error handling, rate limits, or authentication needs, leaving gaps for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by explanatory details and concrete examples. Every sentence adds value: the first explains the tool's function, the second describes its automation, and the third provides usage guidance with examples. No wasted words, making it highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (natural language processing to select tools) and lack of annotations/output schema, the description does well by explaining the automation process and providing examples. However, it could better address potential limitations or error cases. For a tool with no structured behavioral data, it's largely complete but not exhaustive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'question' parameter documented as 'Your question or request in natural language.' The description reinforces this by stating 'Ask a question in plain English' and providing examples, adding semantic context beyond the schema. With only one parameter, this is sufficient for a high baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('answer'), and mechanism ('Pipeworx picks the right tool, fills the arguments'), distinguishing it from siblings like discover_tools or recall. The examples further clarify its scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'No need to browse tools or learn schemas — just describe what you need.' This contrasts with alternatives that might require manual tool selection or schema knowledge, providing clear guidance on its use case versus other tools on the server.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools: A

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
limit (optional): Maximum number of tools to return (default 20, max 50)
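
For illustration, a plausible arguments object for this tool; the query string reuses one of the schema's own examples, and the limit stays under the documented cap:

```python
# Example discover_tools arguments; limit defaults to 20 and caps at 50
# per the schema above.
arguments = {
    "query": "find trade data between countries",
    "limit": 10,
}
```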
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the key behavioral trait that this is a search operation returning 'the most relevant tools with names and descriptions'; the default and maximum result counts appear only in the schema's limit parameter, which the description doesn't repeat. However, it doesn't mention error conditions, rate limits, or authentication needs, leaving some gaps for a tool with no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, the second adds critical usage context, and every sentence earns its place by providing essential guidance without redundancy. It's concise and well-structured for effective agent understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search function with 2 parameters), no annotations, and no output schema, the description is mostly complete: it covers purpose, usage, and behavioral output (returns tools with names/descriptions). However, it lacks details on output format or error handling, which could be helpful since there's no output schema, leaving a minor gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents the two parameters (query and limit). The description adds no additional parameter semantics beyond what's in the schema, such as query examples or limit usage context. This meets the baseline of 3 for high schema coverage without extra value from the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Search the Pipeworx tool catalog') and resources ('tool catalog'), and explicitly distinguishes it from sibling tools by mentioning it's for when 'you have 500+ tools available' and should be called 'FIRST' to find the right ones, making it distinct from the sibling tools get_current_waves and get_wave_forecast.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines: it specifies when to use ('when you have 500+ tools available and need to find the right ones for your task'), when to call it ('Call this FIRST'), and implies alternatives by suggesting it's for initial discovery in a large catalog, contrasting with the sibling tools which appear to be specific data retrieval functions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget: B

Delete a stored memory by key.

Parameters (JSON Schema)
key (required): Memory key to delete
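
A minimal arguments sketch. The key is hypothetical (borrowed from remember's schema examples), and since the description never says whether deletion is reversible, it's safest to treat it as permanent:

```python
# Example forget arguments. The key is hypothetical; assume deletion is
# permanent, since the description doesn't say otherwise.
arguments = {"key": "subject_property"}
```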
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden but only states the action ('Delete') without disclosing behavioral traits. It doesn't mention if deletion is permanent, requires specific permissions, has side effects, or what happens on success/failure, which is critical for a destructive operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence with zero waste—it directly states the tool's purpose without fluff. Every word earns its place, making it highly efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this is a destructive tool with no annotations and no output schema, the description is incomplete. It lacks crucial context like what 'delete' entails (e.g., irreversible?), error handling, or return values, leaving significant gaps for safe agent operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the 'key' parameter fully. The description adds no additional meaning beyond implying the key identifies a memory to delete, aligning with the schema but not compensating for gaps (none exist here). Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Delete') and resource ('a stored memory by key'), distinguishing it from sibling tools like 'recall' (likely retrieves) and 'remember' (likely stores). It's precise and avoids tautology with the tool name 'forget'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., that a memory must exist to delete), exclusions, or how it relates to siblings like 'recall' or 'remember', leaving the agent to infer usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_current_waves: A

Check real-time wave conditions at a coastal location. Returns current wave height, period, and direction. Use for immediate surfing, boating, or maritime planning decisions.

Parameters (JSON Schema)
latitude (required): Latitude of the location.
longitude (required): Longitude of the location.
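
An end-to-end sketch of calling this tool over streamable HTTP, assuming the official mcp Python SDK; SERVER_URL is a placeholder, since the listing above elides the actual server URL:

```python
# Calling get_current_waves through the MCP Python SDK's streamable-HTTP
# client. SERVER_URL is a placeholder; the listing omits the real URL.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client


async def main() -> None:
    async with streamablehttp_client("https://SERVER_URL/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "get_current_waves",
                {"latitude": 21.28, "longitude": -157.83},  # example: Waikiki
            )
            print(result.content)


asyncio.run(main())
```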
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the return values (wave height, period, direction) and the real-time temporal scope, which is useful. However, it doesn't mention potential limitations like data availability, accuracy, rate limits, or error conditions that would help the agent understand behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two concise sentences that are front-loaded with the core purpose and efficiently convey key information without any wasted words. Every sentence adds value by specifying what it does and what it returns.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 required parameters, no output schema, no annotations), the description is reasonably complete. It covers the purpose, return values, and temporal scope. However, without annotations or an output schema, it could benefit from more detail on behavioral aspects like error handling or data sources to be fully comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters (latitude and longitude) adequately. The description adds no additional parameter information beyond what the schema provides, such as coordinate format or valid ranges, so it meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Check real-time wave conditions'), the resource ('at a coastal location'), and the real-time temporal scope. It distinguishes itself from the sibling tool 'get_wave_forecast' by specifying current conditions versus forecast data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying 'real-time wave conditions' and 'immediate' planning decisions, suggesting this tool is for current data. However, it doesn't explicitly state when to use this versus the sibling 'get_wave_forecast' or provide any exclusions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_wave_forecast: A

Get multi-day wave forecasts for coastal locations (e.g., "Hawaii", "California Coast"). Returns max wave height, period, and dominant direction per day. Use when planning water activities or monitoring upcoming swell.

Parameters (JSON Schema)
latitude (required): Latitude of the location.
longitude (required): Longitude of the location.
days (optional): Number of forecast days (1-7, default 7).
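
Example arguments; the coordinates are illustrative, and days must fall within the schema's 1-7 range:

```python
# Example get_wave_forecast arguments. days outside 1-7 is out of range
# per the schema; omit it to get the 7-day default.
arguments = {"latitude": 36.95, "longitude": -122.02, "days": 3}
```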
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes what the tool returns but doesn't mention rate limits, authentication requirements, error conditions, or whether the data is cached/live. It adequately describes the core behavior but lacks operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three tight, well-structured sentences that efficiently communicate purpose, output, and usage. Every word earns its place with no redundancy or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only forecasting tool with no output schema, the description provides sufficient context about what data is returned. However, it could benefit from mentioning typical response format or data sources. The combination of clear purpose and parameter documentation makes it mostly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents all three parameters. The description doesn't add any parameter-specific information beyond what's in the schema, such as explaining coordinate systems or day range implications. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get multi-day wave forecasts'), the resource ('for coastal locations'), and the output format ('max wave height, period, and dominant direction per day'). It distinguishes itself from the sibling tool 'get_current_waves' by specifying it's a forecast rather than current conditions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description states when to use this tool ('when planning water activities or monitoring upcoming swell'), which differentiates it from 'get_current_waves'. However, it doesn't explicitly state when NOT to use it or mention any prerequisites or alternatives beyond the sibling tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall: A

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
key (optional): Memory key to retrieve (omit to list all keys)
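
The optional key gives this tool two modes, sketched below; the key name is hypothetical, echoing remember's schema examples:

```python
# recall has two modes: pass a key to fetch one memory, or omit it to
# list all stored keys. "subject_property" is a hypothetical key.
fetch_one = {"key": "subject_property"}
list_all: dict = {}  # no key: lists all stored memory keys
```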
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes the tool's behavior (retrieval/listing of memories) and context (session-based or previous sessions), but lacks details on error handling, performance characteristics, or data format. It adequately covers basic functionality but misses deeper behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: two concise sentences that directly state the tool's purpose and usage. Every sentence earns its place by providing essential information without redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (retrieval/listing with one optional parameter) and no annotations or output schema, the description is somewhat complete but lacks details on return values, error cases, or memory persistence. It covers basic usage but leaves gaps in full operational context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the 'key' parameter. The description adds value by explaining the semantics: 'omit key' to list all keys, and that the key retrieves 'previously stored memory.' This clarifies usage beyond the schema's technical description, though it doesn't add syntax or format details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory', 'all stored memories'). It distinguishes from siblings by specifying it's for retrieving context saved earlier, unlike 'remember' (likely for saving) or 'forget' (likely for deleting).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool vs alternatives: 'Retrieve a previously stored memory by key, or list all stored memories (omit key).' It also specifies context: 'Use this to retrieve context you saved earlier in the session or in previous sessions,' clearly indicating its role in memory retrieval rather than other operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember: A

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text — findings, addresses, preferences, notes)
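
Example arguments, using a key name the schema itself suggests; the value is hypothetical, and note the persistence rules quoted in the description:

```python
# Example remember arguments; the key comes from the schema's own examples.
# Per the description, anonymous sessions persist 24 hours; authenticated
# users get persistent memory.
arguments = {
    "key": "target_ticker",
    "value": "AAPL",  # hypothetical value
}
```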
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the tool stores data in session memory, distinguishes between authenticated (persistent) and anonymous (24-hour) sessions, and implies it's a write operation. It adds context beyond the schema, such as session duration and authentication effects, though it could mention limitations like storage capacity or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with two sentences that efficiently convey purpose, usage, and behavioral context without redundancy. Each sentence adds value: the first defines the action and use cases, and the second clarifies persistence rules, making it concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 required parameters, no output schema, no annotations), the description is largely complete. It covers purpose, usage, and key behavioral aspects like session persistence. However, it lacks details on return values (since no output schema exists) and potential errors or limitations, leaving minor gaps in full context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('key' and 'value') with examples. The description adds minimal semantic value beyond the schema by implying usage contexts ('findings, addresses, preferences, notes'), but it does not provide additional syntax, constraints, or format details. This meets the baseline of 3 when the schema handles most documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('store a key-value pair') and resource ('in your session memory'), distinguishing it from siblings like 'recall' (retrieval) and 'forget' (deletion). It explicitly mentions what can be saved ('intermediate findings, user preferences, or context across tool calls'), making the purpose unambiguous and well-defined.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use this tool ('to save intermediate findings, user preferences, or context across tool calls') and hints at alternatives by mentioning persistence differences for authenticated vs. anonymous users. However, it does not explicitly name sibling tools like 'recall' or 'forget' as alternatives, nor does it specify when not to use it (e.g., for temporary vs. long-term storage).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
