Server Details

emoji-oracle MCP — wraps StupidAPIs (requires X-API-Key)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-emoji-oracle
GitHub Stars: 0
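
Since the transport is Streamable HTTP and the title notes that StupidAPIs requires an X-API-Key, a minimal sketch of attaching that header is shown below. The endpoint URL is a hypothetical placeholder (the real one is not listed above), and a full MCP client would also perform the initialize handshake and session handling that this sketch omits.

    # Minimal sketch (Python + requests): attach the X-API-Key header mentioned in the
    # server title when POSTing a JSON-RPC message over Streamable HTTP.
    # SERVER_URL is a hypothetical placeholder; the page does not show the real endpoint.
    import requests

    SERVER_URL = "https://example.invalid/mcp"  # placeholder, not the real URL
    API_KEY = "your-stupidapis-key"             # assumption: key issued for StupidAPIs

    response = requests.post(
        SERVER_URL,
        headers={
            "X-API-Key": API_KEY,
            "Accept": "application/json, text/event-stream",
        },
        json={"jsonrpc": "2.0", "id": 1, "method": "tools/list"},
        timeout=30,
    )
    print(response.status_code)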

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4/5 across 5 of 5 tools scored. Lowest: 2.9/5.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: ask_pipeworx handles general queries, discover_tools finds tools, and the memory trio (remember, recall, forget) manages stored data. No ambiguity between them.

Naming Consistency: 4/5

Most tools use a verb_noun pattern (ask_pipeworx, discover_tools, forget, recall, remember) with uniform snake_case. ask_pipeworx is slightly unusual but consistent with the pattern.

Tool Count: 5/5

5 tools is well-suited for the server's purpose: one general Q&A, one tool discovery, and three memory operations. No bloat or deficiency.

Completeness: 5/5

The memory operations cover store, retrieve, and delete (CRUD-style coverage), plus a general query tool and tool discovery. No obvious gaps for the stated domain of an oracle with memory.

Available Tools

5 tools
ask_pipeworx: A

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters
  question (required): Your question or request in natural language
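
For concreteness, a minimal sketch of what calling this tool could look like as an MCP tools/call request, written as a plain Python dict. The request envelope follows the standard MCP convention; the server's response format is not shown on this page and is assumed to be free-form text.

    # Sketch of an MCP "tools/call" request for ask_pipeworx, built as a plain dict.
    # The "question" argument is the tool's only input; the answer format is assumed.
    ask_request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "ask_pipeworx",
            "arguments": {"question": "What is the US trade deficit with China?"},
        },
    }
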
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses that the tool picks the best data source and fills arguments automatically, which is helpful. However, it doesn't mention any potential rate limits, query complexity limits, or whether answers are cached. Still, for a general-purpose query tool, this is sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is concise at about 60 words, with key information front-loaded. Each sentence adds value: purpose, mechanism, and examples. Could be slightly more compact but overall well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema, no annotations), the description is quite complete. It explains the tool's autonomous behavior and provides examples. Could mention what happens if the question is ambiguous or unsupported, but not necessary for basic understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one parameter 'question' described as 'Your question or request in natural language'. The description adds context by stating the input is 'plain English' and giving examples, which enhances understanding beyond the schema's minimal description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool accepts natural language questions and returns answers from the best data source, with specific examples that illustrate its purpose. Distinguishes itself from siblings like discover_tools (which likely lists tools) and recall/remember (which deal with memory).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly tells when to use: when you need an answer from data without browsing tools or learning schemas. Provides examples and implies when not to use (e.g., if you need to directly invoke a specific tool). Contrasts with sibling tools like discover_tools (for tool discovery) and emoji_oracle_consult (for emoji-specific queries).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools: A

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters
  limit (optional): Maximum number of tools to return (default 20, max 50)
  query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
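
As a rough sketch of how an agent might drive this tool, the snippet below builds the arguments and clamps limit to the documented maximum of 50. The call_tool helper is hypothetical, standing in for whatever MCP client is in use; it is not provided by this server.

    # Hypothetical stand-in for an MCP client's tools/call; replace with a real client.
    def call_tool(name: str, arguments: dict) -> list:
        return []  # placeholder result

    requested = 100
    search_args = {
        "query": "find trade data between countries",  # example query from the schema
        "limit": min(requested, 50),                    # schema: default 20, max 50
    }
    matching_tools = call_tool("discover_tools", search_args)
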
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description notes the tool returns 'the most relevant tools with names and descriptions,' which is helpful but not detailed beyond that. With no annotations provided, the description carries the full burden, yet it lacks specifics on search behavior (e.g., ranking criteria, whether it is read-only, or if it modifies state). A score of 3 is appropriate: it adds some transparency but is not exhaustive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently conveys the purpose, usage, and context. Every word serves a purpose, and there is no extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 params, no output schema, no nested objects), the description is reasonably complete. It covers what the tool does, when to use it, and what it returns. It could briefly note that the operation is read-only and does not modify state, but that is not critical for a search tool. A score of 4 reflects good completeness for the context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the input schema already describes both parameters (query and limit). The description adds minimal extra meaning beyond what the schema provides; it explains the query parameter's purpose but doesn't add new constraints or examples beyond what's in the schema description. Baseline 3 is correct.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Search...catalog') and clearly states the resource ('Pipeworx tool catalog'). It distinguishes this tool from siblings by explicitly calling it the first step when many tools are available, contrasting with tools like 'ask_pipeworx' or 'emoji_oracle_consult'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This provides clear context and excludes alternative uses, guiding the agent effectively.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget: C

Delete a stored memory by key.

Parameters
  key (required): Memory key to delete
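
Because the description does not say whether deletion is permanent, a cautious agent may want to confirm before calling. The sketch below shows the assumed call parameters, reusing a key name from the remember tool's schema examples.

    # Sketch: assumed tools/call params for "forget". Permanence of deletion is not
    # documented above, so treat the call as destructive and confirm before issuing it.
    forget_params = {
        "name": "forget",
        "arguments": {"key": "subject_property"},  # key name borrowed from remember's examples
    }
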
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must convey behavioral traits. It only states 'Delete', which implies mutation, but does not clarify whether deletion is permanent, whether it requires confirmation, or what happens to associated data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, zero waste. Front-loaded with action and object.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple delete tool with one parameter, the description is minimal but sufficient for basic understanding. However, the lack of behavioral details (e.g., idempotency, error behavior) and the absence of an output schema leave gaps for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one parameter 'key' described as 'Memory key to delete'. The description adds no further meaning beyond the schema, so baseline 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Delete' and the resource 'stored memory by key', making the purpose obvious. It distinguishes from siblings like 'remember' (store) and 'recall' (retrieve), though not explicitly.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like 'recall' or 'remember'. The description implies deletion, but does not state prerequisites or consequences.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall: A

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters
  key (optional): Memory key to retrieve (omit to list all keys)
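
The optional key gives the tool two modes, sketched below as the two possible argument payloads. What comes back (memory content versus a key listing) is inferred from the description, since no output schema is shown.

    # Sketch: the two modes implied by the optional "key" parameter.
    recall_one = {"name": "recall", "arguments": {"key": "target_ticker"}}  # fetch one stored memory
    recall_all = {"name": "recall", "arguments": {}}                        # omit key to list all keys
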
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses that omitting key lists all memories and retrieving by key returns a specific memory. Could be improved by noting if retrieval is read-only or has side effects, but overall clear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the key verb 'Retrieve'. Concise and to the point. Could add a note about the operation's effects (e.g., that it has no side effects), but it is efficient as is.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description implies the return value (memory content or a list of keys). Sibling tools are clearly distinguished, and the context is simple (one optional parameter). Completeness is high for a retrieval tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (key parameter described in schema). Description adds value by explaining that omitting key lists all memories, which is not in schema. However, no additional constraints or formatting details are added.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a stored memory by key or lists all memories when key is omitted. It is specific and distinct from siblings like 'remember' (store) and 'forget' (delete).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use this tool (to retrieve context saved earlier) and implies when to omit the key (to list all). However, it does not explicitly mention when not to use it or alternatives beyond the implicit sibling distinction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember: A

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters
  key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
  value (required): Value to store (any text — findings, addresses, preferences, notes)
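
A minimal sketch of storing a preference with the documented key/value pair; the 24-hour expiry for anonymous sessions comes from the description, while the surrounding call machinery is assumed.

    # Sketch: assumed tools/call params for "remember". Keys and values are free-form
    # strings per the schema; anonymous-session memories expire after 24 hours per the description.
    remember_params = {
        "name": "remember",
        "arguments": {
            "key": "user_preference",
            "value": "prefers metric units and concise answers",
        },
    }
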
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses behavioral traits: 'Authenticated users get persistent memory; anonymous sessions last 24 hours.' Since annotations are absent, this description adds crucial context about data persistence without contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, no fluff. First sentence states core action, second provides usage guidance, third adds behavioral transparency. Front-loaded with essential purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple tool (two required string params, no output schema, no nested objects), the description is complete. It covers what, when, and persistence behavior. No missing aspects for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers 100% of parameters with descriptions, so baseline is 3. The description reinforces purpose but does not add additional meaning beyond schema examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the verb ('Store'), resource ('key-value pair in session memory'), and scope ('intermediate findings, user preferences, or context across tool calls'). Differentiates from sibling 'recall' (retrieve) and 'forget' (delete).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says when to use it ('save intermediate findings, user preferences, or context across tool calls'). It does not explicitly mention when not to use it or name alternative tools, but the siblings 'recall' and 'forget' clearly cover retrieval and deletion, providing implied guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

Discussions

No comments yet.
