
Server Details

startup-oracle MCP — wraps StupidAPIs (requires X-API-Key)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-startup-oracle
GitHub Stars: 0
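
Given the Streamable HTTP transport above and the X-API-Key called out in the server title, a minimal connection sketch using the official TypeScript MCP SDK might look as follows. The endpoint URL and environment variable are hypothetical (the page does not show the server URL), and sending the key via requestInit headers is an assumption about this particular server:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Hypothetical endpoint: the page does not list the actual server URL.
const SERVER_URL = "https://example.com/mcp";

const transport = new StreamableHTTPClientTransport(new URL(SERVER_URL), {
  // Assumption: the X-API-Key named in the server title is attached to
  // every HTTP request through fetch's standard RequestInit headers.
  requestInit: { headers: { "X-API-Key": process.env.STUPIDAPIS_KEY ?? "" } },
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Should list the five tools documented below.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```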

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama MCP Gateway → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.1/5 across 5 of 5 tools scored. Lowest: 3.3/5.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes: ask_pipeworx answers questions, discover_tools finds tools, and the memory trio manages stored context. However, ask_pipeworx and discover_tools both involve tool selection, which could cause minor confusion—though their descriptions clarify the difference.

Naming Consistency: 5/5

All tool names lead with a verb: ask_pipeworx and discover_tools follow a verb_noun pattern, and the memory trio uses bare verbs (forget, recall, remember). Naming is predictable and follows a clear convention.

Tool Count: 5/5

5 tools is well-scoped for the server's purpose: one core query tool, one discovery tool, and three memory management tools. Each tool earns its place without redundancy.

Completeness: 4/5

The tool set covers query, discovery, and most memory operations (create, read, delete) but lacks an explicit update operation. This is a minor gap, since a key can be deleted and recreated to the same effect.

Available Tools

5 tools
ask_pipeworx: A

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
  question (required): Your question or request in natural language
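
A sketch of invoking this tool with the hypothetical SDK client from the connection example above; the content-block result shape is the MCP protocol default, not something this page documents:

```typescript
// One free-text parameter; the server picks the data source itself.
const result = await client.callTool({
  name: "ask_pipeworx",
  arguments: { question: "What is the US trade deficit with China?" },
});

// Assumption: the answer arrives as standard MCP content blocks.
console.log(JSON.stringify(result.content, null, 2));
```
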
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description fully discloses that the tool automatically selects the right sub-tool and fills arguments, which is critical behavioral context beyond the input schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (3 sentences plus examples) with front-loaded purpose, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple input schema (one free-text parameter) and no output schema, the description adequately explains the tool's behavior and expected output, though it could mention limitations or potential data sources.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (the single parameter 'question' is described in the schema), and the description adds useful examples and context, but the parameter meaning is straightforward.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool takes a natural language question and returns an answer from the best available data source, distinguishing it from sibling tools that handle memory, discovery, or startup evaluation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit examples of what to ask and indicates no need to browse other tools, but does not explicitly state when not to use this tool or name alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools: A

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
  limit (optional): Maximum number of tools to return (default 20, max 50)
  query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
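
A sketch of a typical call, reusing the hypothetical client from earlier; the returned payload shape (tool names plus descriptions) is taken from the description above, since no output schema is published:

```typescript
// Narrow a large catalog down before picking a tool to call.
const found = await client.callTool({
  name: "discover_tools",
  arguments: {
    query: "find trade data between countries",
    limit: 5, // optional; the schema says default 20, max 50
  },
});
console.log(JSON.stringify(found.content, null, 2));
```
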
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states that the tool returns the 'most relevant tools with names and descriptions' but does not specify ordering, scoring, or whether the search is fuzzy or exact. The purpose is nonetheless clear enough for an agent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each with a distinct purpose: what it does, what it returns, when to call. Zero wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately covers purpose and usage. It could mention whether results include metadata or scoring, but the key information is present.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds value by explaining that the query should be a 'natural language description' and by providing examples; it also surfaces the default and maximum for limit.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('search'), the resource ('Pipeworx tool catalog'), and the purpose ('find the right tools for your task'). It is distinct from sibling tools like ask_pipeworx (which answers questions) and recall/remember (which manage memory).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says to call this tool FIRST when many tools are available, providing clear when-to-use guidance. The example queries further illustrate usage. There is no explicit when-not-to-use guidance, but the 'FIRST' directive implies precedence.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget: B

Delete a stored memory by key.

Parameters (JSON Schema)
  key (required): Memory key to delete
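
A sketch with the same hypothetical client; the key is illustrative, and since the description leaves missing-key behavior unspecified (error vs. no-op, as the Behavior note below observes), the call is wrapped defensively:

```typescript
// Delete one stored memory by key.
try {
  await client.callTool({ name: "forget", arguments: { key: "target_ticker" } });
} catch (err) {
  // The description does not say whether forgetting a missing key
  // errors or is a no-op, so handle failure explicitly.
  console.warn("forget failed:", err);
}
```
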
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the burden of behavioral disclosure. It does not specify whether the operation is reversible, destructive (deletion is implied but not detailed), or what happens if the key doesn't exist (e.g., error vs. no-op). This lack of transparency could lead to misuse.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at six words, conveying the essential action and resource. It is front-loaded with the verb 'Delete', making it efficient. Slightly more context could be added without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one required parameter, no output schema, no annotations), the description is minimally sufficient. However, it lacks information about error handling or idempotency, which would be useful for a deletion tool. The sibling tools are distinct in purpose, so no confusion arises.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description adds no extra parameter info beyond the schema, which already describes the key parameter. However, with only one parameter and no enums or nested objects, the description is adequate; a score of 4 reflects that the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Delete' and the resource 'a stored memory by key', making the purpose immediately understandable. It distinguishes from siblings like 'remember' (create) and 'recall' (retrieve), though not explicitly.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies that the tool is for deleting a specific memory, which is clear from the action and required key. However, it offers no guidance on when not to use it, no alternatives, and no word on edge cases such as forgetting a key that doesn't exist, leaving some ambiguity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall: A

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
  key (optional): Memory key to retrieve (omit to list all keys)
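
A sketch of both modes with the hypothetical client, following the optional-key behavior described above; the key is illustrative:

```typescript
// Mode 1: fetch a single memory by key.
const one = await client.callTool({
  name: "recall",
  arguments: { key: "target_ticker" },
});

// Mode 2: omit the key to list all stored keys.
const all = await client.callTool({ name: "recall", arguments: {} });

console.log(one.content, all.content);
```
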
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description reveals that omitting the key lists all memories, which is not apparent from the schema alone. It also notes that memories persist across sessions. Without annotations, it provides useful behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words. Front-loaded with verb and resource, immediately followed by the key usage pattern. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is simple (one optional parameter, no output schema). The description covers the main use cases: retrieval by key and listing all. It does not mention return format, but given no output schema, a 4 is appropriate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds meaning by explaining the effect of omitting the key (listing all memories), which the schema only hints at with 'omit to list all keys'. This slight improvement raises the score to 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a memory by key or lists all memories if key is omitted. It specifies the resource (previously stored memory) and the action (retrieve/list), distinguishing it from 'remember' and 'forget'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use the tool ('retrieve context you saved earlier') and covers the list-all mode (omit the key). It does not explicitly name alternative tools but contextually distinguishes itself from 'remember' and 'forget'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember: A

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
  key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
  value (required): Value to store (any text — findings, addresses, preferences, notes)
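
A sketch of storing a value with the hypothetical client; the key and value follow the schema's own examples, and overwrite behavior is undocumented (see the Behavior note below):

```typescript
// Save an intermediate finding for later calls in this session.
// Whether an existing key is silently overwritten is not documented.
await client.callTool({
  name: "remember",
  arguments: { key: "target_ticker", value: "AAPL" },
});
```
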
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden. It discloses that authenticated users get persistent memory and anonymous sessions last 24 hours, which is helpful. However, it doesn't mention limits on number of keys, overwrite behavior, or whether duplicate keys are allowed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each with a clear purpose: what it does, when to use, and persistence behavior. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 string params, no output schema, no annotations), the description is fairly complete. It covers purpose, usage, and persistence. Minor gaps: no mention of max key/value length or overwrite behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds value by giving example keys ('subject_property', 'target_ticker') and explaining what values can be, but doesn't add new meaning beyond the schema's descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('store'), the resource ('key-value pair'), and the domain ('session memory'). It distinguishes from siblings like 'forget' and 'recall' by specifying it's for saving data, not retrieving or deleting.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explains when to use (save findings, preferences, context) and provides context about persistence based on authentication. It doesn't explicitly mention when not to use or alternatives, but the examples and persistence info guide usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
