httpcat

Server Details

HTTP Cat MCP — wraps http.cat (free, no auth)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-httpcat
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4.2/5 across 9 of 9 tools scored. Lowest: 2.9/5.

Server Coherence (Grade: A)
Disambiguation: 4/5

Tools are grouped into three clear areas: HTTP status cats, Pipeworx data queries, and session memory. Within each group, purposes are distinct, though ask_pipeworx and compare_entities could overlap for comparison queries.

Naming Consistency: 4/5

All tool names use snake_case and most follow a verb_noun pattern. A few tools (forget, recall, remember) use only a verb, but this is acceptable for simple actions.

Tool Count: 4/5

9 tools is a reasonable number, but the server name 'httpcat' implies a narrower focus, while the actual tool set includes a broader data platform, making the count slightly higher than expected for the stated purpose.

Completeness: 3/5

The HTTP cat tools are minimal (only get and list). The Pipeworx query tools cover basic needs (ask, compare, discover, resolve) but lack any CRUD operations. Memory tools are complete. The overall surface feels incomplete for a unified server.

Available Tools

13 tools
ask_pipeworx (Grade: A)

Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".

Parameters
- question (required): Your question or request in natural language
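Since the server speaks MCP over Streamable HTTP, calling this tool is a plain JSON-RPC 2.0 tools/call request. A minimal sketch of the payload in Python; the question text is illustrative:

```python
import json

# JSON-RPC 2.0 envelope for an MCP tools/call request to ask_pipeworx.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {
            # Free-form natural language; Pipeworx routes it to a data source.
            "question": "What is the US trade deficit with China?",
        },
    },
}
print(json.dumps(payload, indent=2))
```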
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it accepts natural language questions, automatically selects and invokes appropriate tools, and returns results. However, it doesn't mention limitations like rate limits, authentication needs, or potential errors. The description is helpful but could be more comprehensive about operational constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured: the first sentence states the core functionality, the second explains the mechanism, the third provides usage guidance, and the fourth gives concrete examples. Every sentence adds value, with no redundancy or fluff. It's appropriately sized for a single-parameter tool with a straightforward purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (natural language processing with automatic tool selection) and lack of annotations/output schema, the description does well to explain the core functionality and usage. However, it doesn't address potential limitations, error conditions, or result formats. For a tool that abstracts multiple operations, more detail about what happens behind the scenes would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'question' well-documented in the schema as 'Your question or request in natural language'. The description adds minimal value beyond this, only reinforcing that questions should be 'in plain English'. It doesn't provide additional syntax, format, or constraint details, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('answer'), and mechanism ('Pipeworx picks the right tool, fills the arguments'). It distinguishes from siblings by emphasizing natural language input versus structured tool selection.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'No need to browse tools or learn schemas — just describe what you need.' It provides clear alternatives (implicitly suggesting other tools require browsing/learning schemas) and includes three concrete examples ('What is the US trade deficit with China?', 'Look up adverse events for ozempic', 'Get Apple's latest 10-K filing') that illustrate appropriate use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_entities (Grade: A)

Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.

Parameters
- type (required): Entity type: "company" or "drug".
- values (required): For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]).
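The type value decides how values is parsed, so the two modes take different argument shapes. A sketch of both (tickers and drug names reused from the examples above), wrapped in the same tools/call envelope shown under ask_pipeworx:

```python
# compare_entities arguments, company mode: 2-5 tickers or zero-padded CIKs.
company_args = {"type": "company", "values": ["AAPL", "MSFT", "GOOGL"]}

# Drug mode: 2-5 brand or generic names.
drug_args = {"type": "drug", "values": ["ozempic", "mounjaro"]}
```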
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses data sources (SEC EDGAR, FDA), return format (paired data + URIs), and specific fields per type. It does not explicitly state read-only nature but implies it by describing a lookup/compare operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences efficiently cover purpose, constraints, return data, and efficiency. No redundant information. Front-loaded with primary action and entity count.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 2 params and no output schema, the description fully explains input formats, expected outputs per type, and return elements (paired data, URIs). Complete and self-contained.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but description adds significant context: explains how type determines returned fields, gives examples for values parameter, and clarifies format per entity type. This enhances understanding beyond schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool compares 2-5 entities side by side and lists specific return data for company and drug types. It distinguishes itself from sibling tools by noting it replaces sequential calls.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use (comparing multiple entities efficiently) but does not explicitly state when not to use or mention alternative tools. The efficiency claim implies use over sequential calls.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (Grade: A)

Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).

Parameters
- limit (optional): Maximum number of tools to return (default 20, max 50)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
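A sketch of the arguments; limit is optional and, per the schema, defaults to 20 with a cap of 50:

```python
# discover_tools arguments: describe the task, optionally cap the result count.
args = {
    "query": "analyze housing market trends",  # natural-language task description
    "limit": 10,                               # optional; default 20, max 50
}
```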
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the tool's behavior well ('Returns the most relevant tools with names and descriptions'), indicating a read-only search operation. However, it lacks details on error handling, performance, or authentication needs, leaving some behavioral aspects unspecified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by usage guidance, with zero wasted words. Both sentences earn their place by providing essential information efficiently, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search operation with 2 parameters), no annotations, and no output schema, the description is largely complete for its purpose. It covers what the tool does and when to use it, but could improve by hinting at output format or limitations, though not strictly required without an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds minimal value beyond the schema by implying the query parameter's purpose ('by describing what you need'), but does not provide additional syntax or format details. Baseline 3 is appropriate as the schema handles most parameter semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search the Pipeworx tool catalog'), the resource ('tool catalog'), and the method ('by describing what you need'), with explicit differentiation from siblings by emphasizing its search functionality versus list_codes or get_status_cat. It uses precise verbs like 'search' and 'returns' to define purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('Call this FIRST when you have 500+ tools available and need to find the right ones for your task'), including a clear condition (large catalog) and alternative context (finding tools for a task). It effectively distinguishes usage from potential siblings by positioning it as an initial discovery step.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

entity_profile (Grade: A)

Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".

Parameters
- type (required): Entity type. Only "company" supported today; person/place coming soon.
- value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name.
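Because value must already be a ticker or zero-padded CIK, a bare company name has to go through resolve_entity first. A sketch of the two-step flow (argument dicts only; the resolved ticker is illustrative):

```python
# Step 1: resolve a bare name into an official identifier.
resolve_args = {"type": "company", "value": "Apple"}
# Expect identifiers such as ticker "AAPL" or CIK "0000320193" in the result.

# Step 2: fetch the bundled profile using the resolved identifier.
profile_args = {"type": "company", "value": "AAPL"}
```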
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It explains the tool returns pipeworx:// citation URIs and bundles multiple data sources. It mentions a limitation for federal contracts but does not discuss potential errors, rate limits, or data freshness. However, it provides adequate behavioral context for a read-only profile tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (one paragraph) and front-loaded with the purpose. It packs significant detail without excess verbiage. Slightly more structure (e.g., bullet points) could improve readability, but it is efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of bundling multiple data sources, the description is fairly complete: it lists what data is returned, supported entity types, input formats, and alternatives. Output schema is missing, but the description mentions citation URIs. Minor gaps on pagination or result structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description adds value beyond schema: it clarifies that type only supports 'company' and that value must be a ticker or CIK, with explicit 'use resolve_entity first' for names. This directly helps agents avoid errors.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns a 'full profile of an entity across every relevant Pipeworx pack in one call' and lists specific data sources (SEC filings, XBRL data, patents, news, LEI). It also distinguishes itself by noting it replaces 10-15 sequential calls.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly tells when to use this tool (when needing a comprehensive profile) and when to use alternative: 'For federal contracts call usa_recipient_profile directly (too slow to bundle).' It also advises to use resolve_entity if only a name is available.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (Grade: C)

Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.

Parameters
- key (required): Memory key to delete
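forget closes the memory lifecycle started by remember and read back by recall. A sketch of the three paired argument dicts (the key and value are illustrative):

```python
# Memory lifecycle: store, read back, then clean up.
remember_args = {"key": "target_ticker", "value": "AAPL"}  # save
recall_args = {"key": "target_ticker"}                     # retrieve
forget_args = {"key": "target_ticker"}                     # delete when stale
```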
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool deletes a stored memory, which implies a destructive mutation, but doesn't clarify if the deletion is permanent, reversible, requires specific permissions, or has side effects (e.g., error handling for invalid keys). For a mutation tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with zero wasted words. It is front-loaded with the core action ('Delete'), making it immediately understandable. Every part of the sentence earns its place by specifying the resource and method ('a stored memory by key').

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a destructive tool with no annotations and no output schema, the description is incomplete. It lacks details on behavioral traits (e.g., error handling, permanence), usage context (e.g., when to use vs. siblings), and output expectations. While the purpose is clear, the overall context for safe and effective use is insufficient given the tool's complexity as a mutation operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the parameter 'key' fully documented as 'Memory key to delete'. The description adds minimal value beyond this, only reiterating 'by key' without providing additional context like key format, examples, or constraints. Given the high schema coverage, a baseline score of 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Delete') and the resource ('a stored memory by key'), making the purpose immediately understandable. It distinguishes from sibling tools like 'recall' (likely retrieves) and 'remember' (likely stores), though it doesn't explicitly mention these alternatives. The specificity of 'by key' adds clarity about how the deletion is targeted.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. While the description implies deletion of stored memories, it doesn't specify prerequisites (e.g., the memory must exist), exclusions (e.g., cannot delete non-existent keys), or direct comparisons to siblings like 'recall' or 'remember'. The agent must infer usage from the purpose alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_status_cat (Grade: A)

Get a cat image representing an HTTP status code. Provide the code (e.g., 200, 404, 500). Returns the image URL.

Parameters
- status_code (required): HTTP status code (e.g., 200, 404, 500)

Output Schema

- image_url (required): URL to the http.cat image for this status code
- description (required): Human-readable description of the status code
- status_code (required): The HTTP status code requested
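Putting input and output together: a sketch of a call and the result shape implied by the output schema above (the exact image_url format is whatever http.cat serves; the URL shown is an assumption):

```python
# get_status_cat input, plus the result fields promised by the output schema.
args = {"status_code": 404}

illustrative_result = {
    "image_url": "https://http.cat/404",  # assumed URL shape
    "description": "Not Found",
    "status_code": 404,
}
```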
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the output behavior ('Returns a direct URL to a cat photo'), which is helpful, but doesn't mention error handling (e.g., for invalid codes), rate limits, or authentication needs. It adds some value but lacks comprehensive behavioral context for a tool with no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with zero waste: the first states the purpose, and the second clarifies the return value. It's front-loaded and appropriately sized for a simple tool, making it easy to scan and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one parameter, no annotations), the description is mostly complete. It covers purpose and output, but lacks error handling details or explicit sibling differentiation. For a simple lookup tool, this is sufficient but could be slightly enhanced for full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the 'status_code' parameter with examples. The description adds no additional meaning beyond what's in the schema (e.g., it doesn't specify valid code ranges or format details). Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get the http.cat image URL'), identifies the resource ('for a given HTTP status code'), and distinguishes from the sibling tool 'list_codes' by focusing on retrieving a single image URL rather than listing codes. It provides a complete picture of what the tool does.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by mentioning 'a given HTTP status code,' suggesting it's for when you have a specific code to look up. However, it doesn't explicitly state when to use this tool versus the sibling 'list_codes' (e.g., for browsing codes vs. getting an image for a known code) or provide any exclusions or prerequisites, leaving some guidance gaps.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_codes (Grade: A)

Browse all available HTTP status codes with descriptions and cat image URLs. Use to find the right code for your status or explore available options.

Parameters

No parameters

Output Schema

- codes (required): List of common HTTP status codes with cat images
- count (required): Total number of common HTTP status codes
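With no parameters, the request reduces to the tool name; the result carries the two fields above. A minimal sketch of the envelope:

```python
import json

# list_codes takes no arguments; an empty arguments object is fine.
payload = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "list_codes", "arguments": {}},
}
print(json.dumps(payload, indent=2))
```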
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool lists codes with descriptions and image URLs, which is useful behavioral context. However, it does not mention potential limitations like rate limits, data freshness, or response format details, leaving some behavioral aspects unspecified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the key information ('List common HTTP status codes') and adds necessary details ('with their descriptions and corresponding http.cat image URLs') without any wasted words. Every part of the sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (0 parameters, no annotations), the description is complete enough for its purpose. It clearly states what the tool does and what information it provides. More detail on the return format in the description itself would help, but this is a minor gap for such a simple tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter information is needed. The description does not add parameter semantics, but this is acceptable as there are no parameters. A baseline of 4 is appropriate for tools with zero parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('List') and resources ('common HTTP status codes'), and distinguishes it from the sibling tool 'get_status_cat' by specifying it lists multiple codes with descriptions and image URLs, whereas the sibling likely retrieves a single code.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving a list of HTTP status codes with descriptions and image URLs, but does not explicitly state when to use this tool versus the sibling 'get_status_cat' or provide exclusions. The context is clear but lacks explicit alternatives or when-not guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pipeworx_feedback (Grade: A)

Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.

Parameters
- type (required): bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else.
- context (optional): Structured context: which tool, pack, or vertical this relates to.
- message (required): Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max.
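A sketch of a bug report. The context shape is an assumption; the schema only says it is structured context naming a tool, pack, or vertical:

```python
# pipeworx_feedback arguments for a bug report; field values are illustrative.
args = {
    "type": "bug",
    "message": "compare_entities returned stale FY2023 revenue for MSFT.",
    "context": {"tool": "compare_entities"},  # assumed shape for the context
}
```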
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses a rate limit ('5 messages per identifier per day') and states it's free. While it doesn't detail side effects or persistence, the feedback nature implies non-destructive behavior, and the rate limit is a key behavioral constraint.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise at three sentences, front-loading the primary purpose. Each sentence adds value: purpose, use cases, content guidelines, and rate limit. It could be slightly more structured, but it avoids unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a feedback tool with no output schema, the description covers purpose, usage, content guidance, and a behavioral constraint (rate limit). It lacks details on acknowledgment or what happens after submission, but the tool's simplicity means this is sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema covers 100% of parameters with detailed descriptions, including enum options for 'type' and a nested object for 'context'. The description adds modest value by advising not to include the end-user's prompt verbatim in 'message', but this is supplementary rather than essential.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Send feedback to the Pipeworx team'. It lists specific use cases (bug reports, feature requests, missing data, praise) and distinguishes itself from sibling tools like ask_pipeworx and discover_tools by focusing on feedback rather than general queries or tool discovery.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear when-to-use scenarios ('Use for bug reports, feature requests, missing data, or praise') and includes content guidelines ('Describe what you tried in terms of Pipeworx tools/data — do not include the end-user's prompt verbatim'). It also mentions rate limits but does not explicitly compare to sibling tools for when not to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (Grade: A)

Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.

Parameters
- key (optional): Memory key to retrieve (omit to list all keys)
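Omitting key flips the tool from a single lookup to listing every saved key. Both argument shapes:

```python
# recall with a key returns that value; with no key it lists all saved keys.
lookup_args = {"key": "target_ticker"}
list_args = {}  # omit key entirely to enumerate stored keys
```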
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It explains the tool's core behavior (retrieving or listing memories) and persistence across sessions, which is valuable. However, it lacks details on error handling (e.g., what happens if a key doesn't exist), performance characteristics, or data format of retrieved memories, leaving some behavioral aspects unclear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded and efficiently structured in two sentences: the first states the purpose and key usage, the second provides contextual guidance. Every sentence earns its place by adding critical information without redundancy, making it appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (one optional parameter, no output schema, no annotations), the description is largely complete. It covers purpose, usage, and parameter semantics effectively. However, without annotations or an output schema, it could benefit from more detail on return values (e.g., format of retrieved memories) to fully compensate for the lack of structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents the single parameter 'key' with its type and description. The description adds value by explaining the semantic effect of omitting the key ('omit to list all keys'), which clarifies the tool's dual functionality beyond what the schema alone provides, though it doesn't add syntax or format details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory', 'all stored memories'), and distinguishes it from siblings like 'remember' (which presumably stores memories) and 'forget' (which presumably deletes them). It explicitly mentions retrieving context saved earlier in current or previous sessions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Use this to retrieve context you saved earlier in the session or in previous sessions.' It also specifies when to omit the key parameter ('omit key to list all keys'), giving clear operational instructions. This directly addresses when to use this tool versus alternatives like 'remember' for storage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recent_changes (Grade: A)

What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.

Parameters
- type (required): Entity type. Only "company" supported today.
- since (required): Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring.
- value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193").
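The since parameter accepts either form described above. A sketch of both (dates and identifiers are illustrative):

```python
# Relative window: typical monitoring cadence per the schema's guidance.
args_relative = {"type": "company", "value": "AAPL", "since": "30d"}

# Absolute window start as an ISO date.
args_iso = {"type": "company", "value": "0000320193", "since": "2026-04-01"}
```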
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description fully bears the burden. Reveals parallel fan-out to SEC, GDELT, USPTO, explains return format (structured changes, total_changes count, URIs), and describes the 'since' parameter format. Does not mention any destructive side effects, which is consistent with a read-only operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is a single compact paragraph of four sentences, front-loaded with purpose. No unnecessary words; every sentence adds substantive detail. Slightly dense but efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Without an output schema, the description compensates by detailing return fields (structured changes, total_changes, pipeworx URIs). Covers all parameters with usage examples. Provides sufficient context for an agent to invoke the tool correctly for typical monitoring or briefing tasks.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%. Description adds valuable context beyond schema: clarifies that 'type' currently only supports 'company', provides examples for 'since' (ISO and relative), and explains 'value' as ticker or CIK. This enriches the agent's understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: reporting what's new about an entity since a given time, with a specific example type 'company'. It distinguishes from siblings by focusing on change detection and data source fan-out, though it doesn't explicitly name alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit use cases: 'brief me on what happened with X' or change-monitoring workflows. Gives examples of acceptable relative time formats (7d, 30d, 3m, 1y) and suggests '30d' or '1m' for typical monitoring. Does not state when not to use or name alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (Grade: A)

Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.

Parameters
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)
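A sketch of a save using one of the key names suggested in the schema; the value text is illustrative:

```python
# remember arguments: a descriptive key plus free-text value.
args = {
    "key": "subject_property",
    "value": "123 Main St, Springfield; target for property lookups",
}
```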
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it's a storage operation (implying mutation), specifies persistence differences ('authenticated users get persistent memory; anonymous sessions last 24 hours'), and hints at session scope. However, it doesn't cover potential errors, rate limits, or exact response behavior, leaving some gaps for a tool with no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core action, followed by usage context and behavioral details. Every sentence adds value without redundancy, making it efficient and easy to parse for an AI agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 required parameters, no output schema, no annotations), the description is largely complete. It covers purpose, usage, and key behavioral aspects like persistence. However, without annotations or an output schema, it doesn't detail return values or error handling, which could be beneficial for full transparency. It compensates well but has minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('key' and 'value') with examples. The description adds minimal value beyond the schema by implying the parameters are used for storage but doesn't provide additional syntax, constraints, or usage details. This meets the baseline of 3 when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('store a key-value pair') and resource ('in your session memory'), distinguishing it from siblings like 'recall' (retrieval) and 'forget' (deletion). It explicitly mentions use cases ('save intermediate findings, user preferences, or context across tool calls'), making the purpose unambiguous and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use this tool ('to save intermediate findings, user preferences, or context across tool calls'), but it does not explicitly state when not to use it or name alternatives. For example, it doesn't clarify if 'recall' should be used for retrieval instead, though this is implied. The guidance is helpful but lacks explicit exclusions or sibling comparisons.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

resolve_entity (Grade: A)

Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.

Parameters
- type (required): Entity type: "company" or "drug".
- value (required): For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin").
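A sketch of the two entity types; company mode also accepts a plain name, which is the usual reason to call this before other tools:

```python
# resolve_entity arguments: company by name, drug by brand name.
company_args = {"type": "company", "value": "Apple"}  # expect ticker/CIK/LEI
drug_args = {"type": "drug", "value": "ozempic"}      # expect RxCUI + names
```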
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the output (ticker, CIK, company name, pipeworx:// URIs) and accepted input formats. It does not mention error handling, rate limits, or idempotency but adequately describes the core behavior for a read-only lookup.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: first establishes purpose and scope, second details version, input formats, output, and benefit. Every sentence provides essential information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema or annotations, the description fully explains input constraints, output fields, and the tool's value proposition. For a simple 2-parameter tool with clear behavior, no additional details are necessary.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for both parameters. The description adds value by providing concrete examples (AAPL, Apple), explaining the version constraint (v1 only supports 'company'), and clarifying the value parameter accepts multiple formats.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Resolve' and the resource 'entity', specifies it works across Pipeworx data sources, and provides concrete examples (AAPL, CIK, Apple). It distinguishes from sibling tools like ask_pipeworx or list_codes by its specific focus on entity resolution.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by noting it replaces 2–3 lookup calls and giving input format examples. However, it does not explicitly state when not to use it or compare with alternatives. The context is clear but lacks exclusion guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

validate_claim (Grade: A)

Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).

Parameters
- claim (required): Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year".
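A sketch using the example claim from the schema; the verdict comes back as one of the five values listed in the description:

```python
# validate_claim: one natural-language claim in, a structured verdict out.
args = {"claim": "Apple's FY2024 revenue was $400 billion"}
# Possible verdicts per the description: confirmed, approximately_correct,
# refuted, inconclusive, unsupported.
```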
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full transparency burden. It discloses the authoritative sources (SEC EDGAR + XBRL), possible verdicts, and output format (citation, delta). It does not mention side effects or destructive actions (none expected), but could explicitly state non-destructive nature or rate limits. Overall adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, front-loaded with purpose, scope, and outputs. Every sentence adds value without redundancy, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given one simple parameter and no output schema, the description covers the tool's core functionality, supported domains, and output structure. It lacks details on error handling or edge cases, but is sufficient for most intents. Slight room for improvement.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with the 'claim' parameter already described as a natural-language factual claim with examples. The main description does not add additional parameter semantics beyond what the schema provides, meeting the baseline expectation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it fact-checks natural-language claims against authoritative sources, details supported claim types (company-financial), and lists return values (verdict, structured form, value with citation, delta). It distinguishes itself from siblings by specifying its focused scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly limits support to company-financial claims ('v1 supports company-financial claims'), implying when not to use. It also notes that it replaces 4–6 sequential agent calls, providing efficiency context. However, it does not explicitly mention alternatives or when to avoid this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
