Server Details

mercury-number MCP — wraps StupidAPIs (requires X-API-Key)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-mercury-number
GitHub Stars: 0
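
The listing names a Streamable HTTP transport and an X-API-Key requirement but leaves the URL field blank, so the endpoint in the sketch below is a placeholder. A minimal Python sketch of the JSON-RPC 2.0 initialize message that opens a session over this transport, assuming the server answers with plain JSON rather than an SSE stream:

```python
import requests

MCP_URL = "https://example.com/mcp"  # placeholder; the listing's URL field is blank
HEADERS = {
    "Accept": "application/json, text/event-stream",  # streamable HTTP clients accept both
    "X-API-Key": "YOUR_KEY",  # assumed from the server title ("requires X-API-Key")
}

# Streamable HTTP carries JSON-RPC 2.0 messages over POST; "initialize" opens the session.
init = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

resp = requests.post(MCP_URL, json=init, headers=HEADERS, timeout=30)
resp.raise_for_status()
print(resp.json())  # server capabilities; echo any Mcp-Session-Id header on later requests
```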

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Descriptions: A

Average 3.9/5 across 5 of 5 tools scored. Lowest: 3.2/5.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes: ask_pipeworx for queries, discover_tools for tool search, and the three memory tools for storage. ask_pipeworx could in principle overlap with individual data tools if any exist, but as a deliberately universal interface it remains distinct enough. The memory tools are clearly separated by operation.

Naming Consistency: 4/5

Tool names follow a verb_noun pattern (ask_pipeworx, discover_tools, forget, recall, remember) with consistent lowercase and underscores. 'ask_pipeworx' includes the server name, which slightly breaks the pattern, but the set is otherwise very consistent.

Tool Count: 5/5

With 5 tools, the set is compact and focused. The tools cover two clear domains: querying/discovery and memory management. This is an appropriate size for a server that seems to act as a front-end to a larger tool ecosystem (Pipeworx) and adds memory persistence.

Completeness: 4/5

The memory tools provide full CRUD: remember (create/update), recall (read), forget (delete). The query tools cover the main use case of asking questions and discovering tools. However, there is no explicit tool for listing or managing history or for updating memories in place; that is a minor gap.

Available Tools

5 tools
ask_pipeworx: A

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
  question (required): Your question or request in natural language
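
A minimal sketch of what a tools/call request for ask_pipeworx looks like on the wire, reusing one of the description's own example questions. The endpoint is a placeholder and an already-initialized session is assumed:

```python
import requests

MCP_URL = "https://example.com/mcp"  # placeholder; the listing's URL field is blank
HEADERS = {"Accept": "application/json, text/event-stream", "X-API-Key": "YOUR_KEY"}

# JSON-RPC 2.0 "tools/call" for ask_pipeworx; "question" is the only (required) argument.
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}

resp = requests.post(MCP_URL, json=call, headers=HEADERS, timeout=60)
resp.raise_for_status()
print(resp.json())  # assumes a plain JSON reply and an already-initialized session
```
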
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description must cover behavioral traits. It explains that the tool automatically picks the right tool and fills arguments, which is useful. But it doesn't disclose side effects, latency, or whether the tool may fail for unsupported questions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise at three sentences plus examples. It is front-loaded with the core purpose. However, the examples add length and could be considered slightly excessive for such a simple tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one param, no output schema, no nested objects) and no annotations, the description adequately covers its behavior and usage. It explains what the tool does and how to use it, leaving little ambiguity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a single parameter 'question'. The description adds meaning by clarifying the parameter is natural language and showing examples, but doesn't add much beyond what the schema's description already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool accepts a natural language question and returns an answer from the best data source. It distinguishes itself from sibling tools by its general question-answering role rather than specific operations like discover_tools or remember.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance to just describe what you need without browsing tools or learning schemas, and includes three diverse examples. However, it doesn't specify when NOT to use it or mention alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools: A

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
  limit (optional): Maximum number of tools to return (default 20, max 50)
  query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
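
A sketch of the equivalent tools/call request for discover_tools, using a query string taken from the schema's own examples; endpoint and session assumptions are the same as above:

```python
import requests

MCP_URL = "https://example.com/mcp"  # placeholder; the listing's URL field is blank
HEADERS = {"Accept": "application/json, text/event-stream", "X-API-Key": "YOUR_KEY"}

# JSON-RPC 2.0 "tools/call" for discover_tools; "query" is required, "limit" optional.
call = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "discover_tools",
        "arguments": {
            "query": "find trade data between countries",  # example drawn from the schema
            "limit": 5,  # within the documented max of 50
        },
    },
}

resp = requests.post(MCP_URL, json=call, headers=HEADERS, timeout=30)
resp.raise_for_status()
print(resp.json())  # expect a list of relevant tool names and descriptions
```
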
Behavior: 4/5

No annotations are provided, so the description carries the full burden. It explains that the tool returns a list of tools with names and descriptions, which is a core behavioral trait. It doesn't detail sorting or ranking behavior, but the purpose is clear, and nothing contradicts the absent annotations.

Conciseness: 5/5

Three sentences, all meaningful: explains what it does, what it returns, and when to use it. No wasted words.

Completeness: 5/5

For a search/discovery tool with two well-documented parameters and no output schema, the description is complete. It tells the agent what to expect (list of tools with names and descriptions) and when to use it (first, when many tools are available).

Parameters: 3/5

Schema description coverage is 100%, so the baseline is 3. The description adds no additional parameter information beyond what the schema provides, which is acceptable since the schema already describes both parameters with good detail.

Purpose: 5/5

The description uses a specific verb 'search' and resource 'tool catalog', clearly stating it returns relevant tools with names and descriptions. It distinguishes from siblings like 'ask_pipeworx' by being a tool discovery mechanism rather than a direct Q&A or memory tool.

Usage Guidelines: 5/5

Explicitly states 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task', giving clear when-to-use guidance. The name 'discover_tools' and sibling context imply it's the primary search tool, and the description reinforces that.

forget: B

Delete a stored memory by key.

Parameters (JSON Schema)
  key (required): Memory key to delete
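
Under the same endpoint and session assumptions, a sketch of a forget call; the key is hypothetical, borrowed from remember's schema examples:

```python
import requests

MCP_URL = "https://example.com/mcp"  # placeholder; the listing's URL field is blank
HEADERS = {"Accept": "application/json, text/event-stream", "X-API-Key": "YOUR_KEY"}

# JSON-RPC 2.0 "tools/call" for forget; "subject_property" is a hypothetical key
# borrowed from remember's schema examples.
call = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {"name": "forget", "arguments": {"key": "subject_property"}},
}

resp = requests.post(MCP_URL, json=call, headers=HEADERS, timeout=30)
resp.raise_for_status()
print(resp.json())  # the description doesn't say what a missing key returns; inspect the reply
```
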
Behavior: 2/5

No annotations are provided, so the description carries the full burden. It indicates deletion (a destructive action) but does not disclose behavioral traits such as whether the deletion is permanent, whether it requires specific permissions, or whether it returns a confirmation.

Conciseness: 5/5

The description is a single, concise sentence that conveys the essential information without any wasted words.

Completeness: 3/5

For a simple delete operation with one parameter and no output schema, the description is minimally complete. However, it could mention whether the operation is idempotent or what happens if the key does not exist.

Parameters: 4/5

Schema coverage is 100% (only one parameter), and the description adds clarity by using 'by key' to specify the parameter's role beyond the schema's 'Memory key to delete'.

Purpose: 4/5

The description clearly states 'Delete a stored memory by key', which is a specific verb ('Delete') and resource ('stored memory'), and it distinguishes from siblings like 'remember' (store) and 'recall' (retrieve).

Usage Guidelines: 2/5

The description does not provide guidance on when to use this tool versus alternatives, nor does it mention prerequisites or side effects. It only states the action, leaving the agent to infer usage context.

recall: A

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
  key (optional): Memory key to retrieve (omit to list all keys)
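
A sketch showing both modes the description names: passing a key for a specific memory, and omitting it to list all keys. Endpoint, session, and the key itself are assumptions:

```python
import requests

MCP_URL = "https://example.com/mcp"  # placeholder; the listing's URL field is blank
HEADERS = {"Accept": "application/json, text/event-stream", "X-API-Key": "YOUR_KEY"}

def call_recall(key=None, req_id=5):
    """Build a JSON-RPC 2.0 tools/call for recall; omit key to list all stored keys."""
    args = {} if key is None else {"key": key}
    call = {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": "recall", "arguments": args},
    }
    resp = requests.post(MCP_URL, json=call, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()

print(call_recall("subject_property"))  # hypothetical key from remember's schema examples
print(call_recall(req_id=6))            # no key: lists every stored memory key
```
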
Behavior: 3/5

No annotations are provided, so the description must fully disclose behavior. It explains the two modes and the purpose (retrieve context), but does not mention any side effects, performance implications, or limits. However, the tool is a simple retrieval, so this is adequate.

Conciseness: 5/5

The description is two sentences with no wasted words. It front-loads the purpose and follows with usage guidance. Every sentence is essential.

Completeness: 4/5

Given the tool's simplicity (single optional parameter, no output schema, no nested objects), the description is nearly complete. It could optionally mention that keys are case-sensitive or what happens if a key doesn't exist, but these are minor gaps.

Parameters: 4/5

Schema description coverage is 100%, so the schema already defines the parameter well. The description adds value by explaining the semantic difference between providing a key (specific retrieval) vs omitting it (list all). This goes beyond the schema's description.

Purpose: 5/5

The description clearly states the verb 'retrieve' and resource 'memory by key', and distinguishes the two modes of operation: retrieving a specific key or listing all keys. This differentiates it from siblings like 'remember' and 'forget'.

Usage Guidelines: 4/5

The description explicitly tells when to use it: to retrieve context saved earlier in the session or previous sessions. It does not explicitly mention when not to use it or alternatives, but the context is clear enough.

remember: A

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
  key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
  value (required): Value to store (any text — findings, addresses, preferences, notes)
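
A sketch of a remember call under the same assumptions; the key is one of the schema's examples and the value is hypothetical:

```python
import requests

MCP_URL = "https://example.com/mcp"  # placeholder; the listing's URL field is blank
HEADERS = {"Accept": "application/json, text/event-stream", "X-API-Key": "YOUR_KEY"}

# JSON-RPC 2.0 "tools/call" for remember; the key comes from the schema's own
# examples, the value "AAPL" is hypothetical.
call = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "remember",
        "arguments": {"key": "target_ticker", "value": "AAPL"},
    },
}

resp = requests.post(MCP_URL, json=call, headers=HEADERS, timeout=30)
resp.raise_for_status()
print(resp.json())  # anonymous sessions keep this for 24 hours, per the description
```
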
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It correctly indicates this is a write operation and mentions persistence duration, but does not disclose potential side effects (e.g., overwriting existing keys) or whether there are limits on memory size or number of entries.

Conciseness: 5/5

Three short, informative sentences. Front-loaded with the core purpose, then usage context, then persistence details. No wasted words.

Completeness: 4/5

Given the simple schema (two string parameters) and no output schema, the description covers the tool's behavior adequately. The only gap is the lack of detail on overwrite behavior or capacity limits, but overall it's complete enough for typical use.

Parameters: 3/5

Schema coverage is 100%, so the schema already describes both parameters well. The description adds minimal extra meaning beyond examples of key usage, so the baseline of 3 is appropriate.

Purpose: 5/5

Clearly states the tool stores a key-value pair in session memory, with specific examples of use cases. Differentiates from sibling tools like 'forget' and 'recall' by describing its write operation.

Usage Guidelines: 5/5

Provides explicit guidance on when to use: saving intermediate findings, user preferences, or context across tool calls. Also distinguishes between authenticated (persistent) and anonymous (24-hour) sessions, which helps the agent decide based on user context.
