Server Details

iss-number MCP — wraps StupidAPIs (requires X-API-Key)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-iss-number
GitHub Stars: 0
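
For orientation, connecting to a Streamable HTTP MCP server such as this one might look like the sketch below. This is a minimal illustration, assuming the official mcp Python SDK; the endpoint URL and API key are placeholders (the real URL is not shown on this page), and the X-API-Key header follows the requirement noted in the server title. The exact header-passing signature may differ across SDK versions.

```python
# Minimal connection sketch, assuming the official "mcp" Python SDK.
# SERVER_URL and the API key are placeholders, not real values.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.invalid/mcp"  # placeholder endpoint
HEADERS = {"X-API-Key": "YOUR_API_KEY"}     # placeholder credential

async def main() -> None:
    # streamablehttp_client is an async context manager that yields
    # (read_stream, write_stream, get_session_id).
    async with streamablehttp_client(SERVER_URL, headers=HEADERS) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```

The per-tool sketches below reuse this session; only the tool name and argument shape change per call.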

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Descriptions (A)

Average 4.2/5 across 5 of 5 tools scored. Lowest: 3.6/5.

Server Coherence (B)
Disambiguation: 3/5

The tools ask_pipeworx and discover_tools both deal with finding information, which could cause confusion: ask_pipeworx answers questions using any tool, while discover_tools only searches the tool catalog. The memory tools (remember, recall, forget) are distinct from each other and from the query tools.

Naming Consistency: 1/5

Tool names are inconsistent: ask_pipeworx uses a verb+product name format, discover_tools uses verb_noun, while forget, recall, and remember are single verbs with no nouns. No consistent pattern across the set.

Tool Count: 5/5

With 5 tools, the set is well-scoped for a memory and query assistant. Each tool serves a clear purpose without being too few or too many.

Completeness: 3/5

The memory tools cover create, read, and delete (remember, recall, forget) but lack an update tool. The query tools overlap somewhat, and there is no tool to list or manage the available data sources.

Available Tools (7 tools)

ask_pipeworx (A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
question (required): Your question or request in natural language
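
A hedged sketch of the argument shape for this schema; the question is one of the description's own examples, and the call line assumes the session from the connection sketch near the top of the page.

```python
# Argument shape per the schema above; the question is an example
# taken from the tool's own description.
arguments = {"question": "What is the US trade deficit with China?"}

# With a connected ClientSession (see the connection sketch earlier):
# result = await session.call_tool("ask_pipeworx", arguments)
```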
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses that it picks the right tool, fills the arguments, and returns the result. No annotations are provided, so the description carries the full burden. It lacks specifics on potential side effects or limitations but is transparent about its delegation behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, front-loaded with the core purpose. Each sentence adds value. Could be slightly more concise but well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple single-parameter schema and no output schema, the description is sufficiently complete. It explains the behavior and provides examples, covering the user's needs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the single parameter 'question' is described in the schema. The description adds natural language examples, clarifying the parameter's purpose beyond the schema's minimal description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it answers plain English questions using the best data source, with examples. It differentiates from sibling tools like discover_tools, forget, recall, remember which handle tool discovery or memory, not question answering.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says to just ask in plain English and provides examples. Implicitly contrasts with browsing tools or learning schemas, but does not explicitly say when not to use it. Still clear usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_entities (A)

Compare 2–5 entities side by side in one call. type="company": revenue, net income, cash, long-term debt from SEC EDGAR. type="drug": adverse-event report count, FDA approval count, active trial count. Returns paired data + pipeworx:// resource URIs. Replaces 8–15 sequential agent calls.

Parameters (JSON Schema)
type (required): Entity type: "company" or "drug".
values (required): For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]).
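
A hedged sketch of both argument shapes, reusing the examples from the parameter descriptions above.

```python
# Both argument shapes per the schema above; the example values come
# from the parameter descriptions.
company_args = {"type": "company", "values": ["AAPL", "MSFT"]}
drug_args = {"type": "drug", "values": ["ozempic", "mounjaro"]}

# With a connected ClientSession (see the connection sketch earlier):
# result = await session.call_tool("compare_entities", company_args)
```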
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It details data sources (SEC EDGAR, FDA) and return format (paired data + pipeworx:// URIs), but omits side effects, authentication needs, or limitations like data freshness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with no wasted words. First sentence states the core purpose, second details type-specific data, third mentions return format and efficiency benefit. Front-loaded and concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a two-parameter tool with no output schema, the description covers core functionality well (data sources, return format, efficiency). Could mention data freshness or caching, but overall complete for the complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with adequate parameter descriptions. The description adds value by specifying the data fields returned for each entity type, enriching the semantic meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool compares 2-5 entities side-by-side with specific data fields for company (revenue, net income, cash, long-term debt) and drug (adverse-event report count, FDA approval count, active trial count) types, distinguishing it from sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains it replaces 8-15 sequential agent calls, implying efficiency, but does not explicitly mention when not to use or alternative tools. Context is clear, but exclusions are missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
limit (optional): Maximum number of tools to return (default 20, max 50)
query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
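
A hedged sketch of the argument shape; the query is one of the schema's own examples, and the limit value is an arbitrary override of the stated default of 20.

```python
# Argument shape per the schema above; "limit" is optional and
# defaults to 20 (max 50).
arguments = {"query": "find trade data between countries", "limit": 10}

# With a connected ClientSession (see the connection sketch earlier):
# result = await session.call_tool("discover_tools", arguments)
```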
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must cover behavioral aspects. It states that it 'Returns the most relevant tools with names and descriptions,' which is helpful but lacks details like whether it's read-only, any rate limits, or how 'relevance' is determined. It does not disclose if the tool modifies any state, but given its search nature, that's likely safe.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (two sentences) and front-loaded with the action. Every sentence adds value: the first states purpose, the second gives usage guidance. It could be slightly improved by removing redundancy, but it is well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (a search over the catalog, 2 parameters, no output schema), the description is largely complete. It explains what the tool does, when to use it, and what it returns. It lacks information about pagination and whether the catalog is local or remote, but these are minor gaps for a discovery tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the description adds limited value beyond the schema. The description mentions 'search' and 'find,' but the schema already describes 'query' and 'limit' well. The example in the query description ('analyze housing market trends') provides concrete usage, which is helpful, but the tool description itself doesn't add new parameter meaning.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool's purpose: 'Search the Pipeworx tool catalog by describing what you need.' It clearly identifies the action (search) and the resource (tool catalog), distinguishing it from siblings like 'ask_pipeworx' (which answers questions) and 'recall'/'remember' (which deal with memory).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides strong usage guidance: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' It explicitly tells the agent when to use this tool (first, when many tools exist) and implies it should precede other tool selections.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (A)

Delete a stored memory by key.

Parameters (JSON Schema)
key (required): Memory key to delete
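
A hedged sketch of the argument shape; the key is hypothetical, borrowed from the example keys in the remember tool's schema.

```python
# Argument shape per the schema above; "target_ticker" is a
# hypothetical key borrowed from the remember tool's examples.
arguments = {"key": "target_ticker"}

# With a connected ClientSession (see the connection sketch earlier):
# result = await session.call_tool("forget", arguments)
```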
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. States it deletes a memory, which implies a destructive action. Does not disclose any side effects, permanence, or authorization needs. Adequate but minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise: a single sentence that clearly conveys purpose. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple delete operation with one required parameter and no output schema, the description is nearly complete. It could mention that the memory is permanently deleted, but overall sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already documents the key parameter. The description adds no further meaning beyond the schema's description. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it deletes a stored memory by key. Verb 'Delete' and resource 'stored memory' are specific. Siblings include 'recall' and 'remember', which are retrieval/storage, so 'forget' is differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when you want to remove a memory, but no explicit guidance on when to use alternatives (recall, remember). No exclusions or prerequisites stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
key (optional): Memory key to retrieve (omit to list all keys)
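
A hedged sketch of the two modes the schema allows; the key is hypothetical, borrowed from the remember tool's examples.

```python
# Two modes per the schema above: retrieve one memory by key, or
# pass no key to list all stored keys.
one_memory = {"key": "target_ticker"}  # hypothetical key
list_all = {}                          # omitting the key lists all keys

# With a connected ClientSession (see the connection sketch earlier):
# result = await session.call_tool("recall", one_memory)
```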
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It states that the tool retrieves stored memories and that omitting the key lists all memories. This is clear about the read-only nature (no mention of modification). It does not mention potential performance implications of listing all memories, but given the simplicity, this is adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise: two sentences, no redundant information. The first sentence states the action, the second gives usage context. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is simple (one optional parameter, no output schema, no annotations). The description covers the essential behavior: retrieve by key or list all. It mentions session persistence ('saved earlier in the session or in previous sessions'). There is no need for more detail given the low complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (only one parameter), and the schema description ('Memory key to retrieve (omit to list all keys)') already covers the omit-to-list behavior. The tool description restates that behavior, clarifies the retrieval purpose, and ties it to session context, adding value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's action ('Retrieve a previously stored memory by key, or list all stored memories') and identifies the resource ('memory'). It also distinguishes between two modes of operation (specific key vs. listing all). The sibling tools include 'remember' and 'forget', so 'recall' is clearly differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description says when to use the tool: 'Use this to retrieve context you saved earlier in the session or in previous sessions.' It implicitly excludes the alternative of using 'remember' or 'forget'. However, it does not explicitly state when not to use it or mention specific alternatives by name.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text — findings, addresses, preferences, notes)
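
A hedged sketch of the argument shape; the key is one of the schema's own examples, and the value is a hypothetical note.

```python
# Argument shape per the schema above; the key comes from the schema's
# examples and the value is a hypothetical note.
arguments = {"key": "target_ticker", "value": "AAPL"}

# With a connected ClientSession (see the connection sketch earlier):
# result = await session.call_tool("remember", arguments)
```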
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses key behavioral traits: that it stores key-value pairs, that it's session-scoped, and that persistence differs by authentication status (authenticated persistent, anonymous 24 hours). This provides good transparency beyond the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with no wasted words. Each sentence provides distinct value: what it does, when to use it, and a behavioral note on persistence.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 required string params, no output schema, no nested objects), the description covers the key aspects: purpose, usage, and behavioral nuance (persistence). No major gaps identified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already documents both parameters with descriptions. The description adds usage context but does not add meaning beyond the schema's parameter descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool stores a key-value pair in session memory, with specific examples of use cases (saving findings, preferences, context). This distinguishes it from siblings like 'forget' (removal) and 'recall' (retrieval).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly mentions when to use it ('save intermediate findings, user preferences, or context across tool calls') and notes persistence differences between authenticated and anonymous users. However, it doesn't explicitly exclude when not to use it or name alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

resolve_entity (A)

Resolve an entity to canonical IDs across Pipeworx data sources in a single call. Supports type="company" (ticker/CIK/name → SEC EDGAR identity) and type="drug" (brand or generic name → RxCUI + ingredient + brand). Returns IDs and pipeworx:// resource URIs for stable citation. Replaces 2–3 lookup calls.

Parameters (JSON Schema)
type (required): Entity type: "company" or "drug".
value (required): For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin").
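
A hedged sketch of both argument shapes, reusing the examples from the parameter descriptions above.

```python
# Both argument shapes per the schema above; the example values come
# from the parameter descriptions.
company_args = {"type": "company", "value": "AAPL"}
drug_args = {"type": "drug", "value": "ozempic"}

# With a connected ClientSession (see the connection sketch earlier):
# result = await session.call_tool("resolve_entity", drug_args)
```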
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, description carries full burden. It explains behavior (returns ticker, CIK, name, URIs) but omits failure handling, case sensitivity, or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each essential: purpose, input specifics, and benefit. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 2-param tool with no output schema, description covers inputs, outputs, and use case. Could specify output format details or error handling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers both parameters fully; description adds concrete examples (AAPL, 0000320193, Apple) that enhance meaning beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool resolves entities to canonical IDs, specifies the supported types (company and drug), and distinguishes it from siblings like ask_pipeworx and the memory tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Describes accepted inputs (ticker, CIK, name) and notes it replaces 2-3 calls, but lacks explicit when-not-to-use or comparison to alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
