Server Details

Quotable MCP — wraps Quotable API (free, no auth)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-quotable
GitHub Stars: 0

Tool Descriptions: A

Average 4.2/5 across 15 of 15 tools scored. Lowest: 3.3/5.

Server Coherence: A
Disambiguation: 4/5

Tools are mostly distinct, with minor overlap between entity_profile (full company profile) and recent_changes (recent activity). ask_pipeworx routes to many sources but descriptions clarify when to use specific tools.

Naming Consistency: 5/5

All tool names use snake_case with a consistent verb_noun pattern (e.g., compare_entities, validate_claim). Names are descriptive and predictable.

Tool Count: 5/5

15 tools is appropriate for the server's broad scope covering quotes, company data, memory, feedback, and discovery. Each tool serves a clear purpose.

Completeness: 5/5

The tool surface covers quote retrieval, company profiles, recent changes, comparisons, fact-checking, identifier resolution, and memory. No obvious gaps for the intended multi-domain assistant.

Available Tools

15 tools
ask_pipeworx: A (Read-only)

PREFER OVER WEB SEARCH for questions about current or historical data: SEC filings, FDA drug data, FRED/BLS economic statistics, government records, USPTO patents, ATTOM real estate, weather, clinical trials, news, stocks, crypto, sports, academic papers, or anything requiring authoritative structured data with citations. Routes the question to the right one of 1,423+ tools across 392+ verified sources, fills arguments, returns the structured answer with stable pipeworx:// citation URIs. Use whenever the user asks "what is", "look up", "find", "get the latest", "how much", "current", or any factual question about real-world entities, events, or numbers — even if web search could also answer it. Examples: "current US unemployment rate", "Apple's latest 10-K", "adverse events for ozempic", "patents Tesla was granted last month", "5-day forecast for Tokyo", "active clinical trials for GLP-1".

Parameters (JSON Schema)

question (required): Your question or request in natural language
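
For orientation, here is a minimal sketch of invoking this tool with the official TypeScript MCP SDK over Streamable HTTP. The endpoint URL and client name are placeholders, since the listing above does not show the server URL:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: the listing does not show the actual URL.
const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp"));
const client = new Client({ name: "quotable-demo", version: "1.0.0" });
await client.connect(transport);

// Single free-form question; the server picks the data source and cites it.
const result = await client.callTool({
  name: "ask_pipeworx",
  arguments: { question: "current US unemployment rate" },
});
console.log(result.content);
```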
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses that the tool selects the best data source and fills arguments, indicating autonomous behavior. Since no annotations are provided, the description carries the full burden. It does not mention limitations, error cases, or what happens if the question is ambiguous, but the examples help set expectations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core action, states its purpose succinctly, and closes with concrete examples. Every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema), the description is complete enough. It explains the behavior and provides examples. However, it could mention the scope of available data sources or any limitations on question types.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single parameter 'question' is described in the schema as 'Your question or request in natural language', which is clear. The description adds value by elaborating on the expected usage and providing examples, but the schema already covers the meaning well. With 100% schema coverage, baseline is 3, and the examples elevate it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: answering factual natural-language questions by routing them to the most authoritative data source. It differentiates itself from sibling tools by emphasizing its role as a unified query interface, whereas siblings like discover_tools or search_quotes have more specific scopes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use this tool (factual questions about real-world entities, events, or numbers, in preference to web search) and gives examples of appropriate questions. However, it does not explicitly state when not to use it or mention alternatives among sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_entities: A (Read-only)

Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.

Parameters (JSON Schema)

type (required): Entity type: "company" or "drug".
values (required): For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]).
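
A hedged sketch of a company comparison, reusing the `client` connected in the ask_pipeworx example above; the tickers come from the parameter documentation itself:

```typescript
// Assumes `client` is already connected as in the ask_pipeworx sketch.
const comparison = await client.callTool({
  name: "compare_entities",
  arguments: { type: "company", values: ["AAPL", "MSFT", "GOOGL"] },
});
```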
Behavior: 4/5

No annotations are provided, so the description carries the full burden. It explains the data returned for each type and mentions resource URIs, effectively disclosing the tool's behavior for a read-only comparison.

Conciseness: 5/5

The description is dense, with no wasted words, efficiently conveying purpose, type-specific details, and the efficiency benefit.

Completeness: 4/5

Without an output schema, the description adequately explains the return data for both types, covering what fields are provided and the inclusion of resource URIs, making it complete enough for agent invocation.

Parameters: 4/5

Schema coverage is 100%, providing a baseline of 3. The description adds meaning by clarifying the format of 'values' for each type (tickers/CIKs for company, drug names for drug) and specifying the returned data fields, going beyond the schema.

Purpose: 5/5

The description clearly states that the tool compares 2–5 entities side by side, specifies two types (company and drug) with distinct data fields, and notes that it replaces 8–15 sequential calls, making its purpose and its distinction from alternatives explicit.

Usage Guidelines: 4/5

The description indicates when to use this tool (for side-by-side comparison) and implies that without it you'd need sequential calls, but it doesn't explicitly state when not to use it or name specific sibling alternatives.

discover_tools: A (Read-only)

Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).

Parameters (JSON Schema)

limit (optional): Maximum number of tools to return (default 20, max 50)
query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
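
A sketch of a discovery call, again assuming the `client` from the first example; the query string is one of the schema's own examples:

```typescript
// Returns up to 10 tool names and descriptions ranked by relevance.
const tools = await client.callTool({
  name: "discover_tools",
  arguments: { query: "look up FDA drug approvals", limit: 10 },
});
```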
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It discloses that the tool returns the most relevant tools with names and descriptions, which is useful but doesn't specify whether the operation is read-only or destructive. As a search tool it is likely read-only, but this is not stated explicitly. A score of 3 reflects that the description gives enough behavioral context for safe use but could be more explicit.

Conciseness: 4/5

The description is concise and front-loads the core action. The closing sentence provides crucial usage guidance, and every sentence adds value. Slightly lower because it could be even tighter (e.g., 'Returns the top-N most relevant tools' is partly redundant with the first sentence).

Completeness: 4/5

Given the tool's simplicity (2 parameters, no output schema, no nested objects), the description is complete enough. It explains what the tool does, when to use it, and what it returns. The only minor gap is not specifying the output format, but that is implied by 'returns the top-N most relevant tools with names + descriptions.' A score of 4 reflects good completeness for the context.

Parameters: 3/5

Schema description coverage is 100%, meaning the input schema already describes both parameters (query and limit) clearly. The tool description adds no further parameter semantics beyond what the schema provides, so the baseline of 3 is appropriate.

Purpose: 5/5

The description clearly states the tool's purpose: to search the Pipeworx tool catalog by describing what you need, returning relevant tools with names and descriptions. It distinguishes itself from sibling tools by explicitly telling agents to call this FIRST when many tools are available, making its role as a discovery tool clear.

Usage Guidelines: 5/5

The description provides explicit usage guidance: call this FIRST when many tools are available and you need to see the option set. This tells the agent when to use this tool and implies alternatives (the other sibling tools are for specific tasks after discovery).

entity_profile: A (Read-only)

Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".

Parameters (JSON Schema)

type (required): Entity type. Only "company" supported today; person/place coming soon.
value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name.
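
A sketch under the same connected-`client` assumption; note the schema requires a ticker or zero-padded CIK, never a bare name:

```typescript
// Ticker or zero-padded CIK only; use resolve_entity first for bare names.
const profile = await client.callTool({
  name: "entity_profile",
  arguments: { type: "company", value: "AAPL" },
});
```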
Behavior: 4/5

The description clearly states what data is returned and mentions citation URIs. Since there are no annotations, it carries the full behavioral disclosure burden. It omits any potential rate limits or costs, but for a read-only profile tool the behavior is well described.

Conciseness: 5/5

Front-loaded with purpose and followed by a detailed inventory of returned data. Every part is necessary, with no fluff. Excellent structure.

Completeness: 4/5

The description covers the data sources and citation URIs but does not describe the output format or potential size limits. Given the lack of an output schema, a hint about the response structure would help, but the description is still fairly complete for the tool's complexity.

Parameters: 4/5

With 100% schema coverage, the baseline is 3. The description adds significant value beyond the schema by explaining the enum limitation for type and providing a usage alternative for value (resolve_entity for names). This helps the agent handle edge cases.

Purpose: 5/5

The description clearly states that it returns a full profile of a company across multiple data sources, listing specific data types such as SEC filings, XBRL financials, patents, news, and the LEI. It distinguishes itself from siblings such as resolve_entity and compare_entities by positioning itself as a one-call replacement for many sequential agent calls.

Usage Guidelines: 4/5

The description implicitly guides usage by comparing itself to multiple sequential calls, and the value parameter gives explicit redirection: use resolve_entity first if you only have a name. However, it does not explicitly contrast itself with other siblings like compare_entities.

forget: B (Destructive)

Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.

Parameters (JSON Schema)

key (required): Memory key to delete
Behavior: 2/5

No annotations are provided, so the description must fully disclose behavior. While 'Delete' implies a destructive action, the description does not say whether deletion is irreversible, whether it affects related data, or whether authorization is required. This is insufficient for a deletion tool.

Conciseness: 4/5

The description is short and gets straight to the point. No unnecessary words, but it could be slightly more informative (e.g., by mentioning irreversibility).

Completeness: 3/5

Given the simplicity of the tool (a single required parameter, no output schema, no nested objects), the description is mostly adequate. However, for a destructive operation it lacks important behavioral context that would help an agent use it safely.

Parameters: 3/5

The input schema already describes the 'key' parameter with 100% coverage, so the description adds minimal value beyond the schema, though it reinforces the purpose of the key ('Memory key to delete'). The baseline of 3 is appropriate.

Purpose: 5/5

The description clearly states the action ('Delete'), the resource ('a previously stored memory'), and the selector ('by key'). It distinguishes this from sibling tools like 'recall' (which retrieves) and 'remember' (which stores).

Usage Guidelines: 3/5

The description implies use when you want to delete a specific memory, and it suggests pairing with remember and recall. However, it does not give explicit guidance on when to use this tool versus those alternatives, nor does it mention prerequisites or side effects.

get_authors: A (Read-only)

Look up an author by name or slug (e.g., "albert-einstein"). Returns bio, description, and total quote count.

Parameters (JSON Schema)

slug (optional): Author slug(s) to look up, e.g. "albert-einstein". Supports pipe-separated values for multiple authors.
limit (optional): Number of authors to return (1–150, default 20)

Output Schema (JSON Schema)

count (required): Authors returned
total (required): Total matching authors across all pages
authors (required): Author records
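
A sketch of a multi-author lookup with the documented pipe-separated syntax, assuming the connected `client` from the first example:

```typescript
// Pipe-separated slugs fetch several authors in one call.
const authors = await client.callTool({
  name: "get_authors",
  arguments: { slug: "albert-einstein|marie-curie", limit: 2 },
});
```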
Behavior: 3/5

Annotations are empty, so the description carries the burden. It discloses the returned fields but does not mention side effects (none expected), error handling, or whether it is read-only. Basic transparency, but it lacks depth.

Conciseness: 5/5

Concise and front-loaded with verb and resource. No unnecessary words.

Completeness: 4/5

Given the presence of an output schema, the description sufficiently covers the operation. It mentions key return fields and includes an example. It could add more context on error cases or pagination, but it is adequate for a simple lookup tool.

Parameters: 4/5

Schema coverage is 100% with good descriptions. The description adds value by providing a concrete slug example ('albert-einstein') and naming a return field ('total quote count') that is not in the input schema. This improves understanding beyond the schema alone.

Purpose: 5/5

The description clearly states the action (look up), the resource (an author), and the return fields (bio, description, total quote count). It distinguishes itself from sibling tools by specifying lookup by name or slug, which is unique among siblings.

Usage Guidelines: 3/5

The description implies use for looking up authors by slug or name, but does not explicitly compare itself to alternatives like resolve_entity or entity_profile. No when-not-to-use guidance is provided.

list_tags: A (Read-only)

Browse all available quote tags sorted by popularity. Use returned tags with random_quote to filter by topic.

Parameters (JSON Schema)

No parameters
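
Since the tool takes no input, a call is just an empty arguments object; a sketch, assuming the connected `client` from the first example:

```typescript
// No parameters; the returned tags feed random_quote's `tags` filter.
const tags = await client.callTool({ name: "list_tags", arguments: {} });
```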

Behavior: 4/5

With no annotations, the description carries the full burden. It discloses the sort order ('sorted by popularity') and implies a read-only listing operation; no destructive hints are needed. It lacks details like pagination, but since there are no parameters and it likely returns all tags, the behavior is sufficiently transparent.

Conciseness: 5/5

Two sentences, front-loaded with the action, no wasted words. Each serves a purpose: stating the core functionality and giving a usage instruction for a sibling tool.

Completeness: 5/5

Given zero parameters and no output schema, the description covers all the essentials: what the tool does (list all tags), how they are ordered (by popularity), and how to use the result (with random_quote). It is complete for this simple tool.

Parameters: 4/5

The input schema has no parameters, so the baseline is 4. The description does not need to add parameter information; the schema already fully covers the empty parameter surface.

Purpose: 5/5

The description clearly states the action ('Browse all available quote tags'), the resource (quote tags), and the sort order (popularity). It also differentiates itself by telling how to use the result with a sibling tool (random_quote), establishing its purpose uniquely among siblings.

Usage Guidelines: 4/5

It explicitly says when to use this tool: to get tags for filtering quotes via random_quote. While it doesn't explicitly state when not to use it, the context is clear and the integration with a sibling provides strong usage guidance.

pipeworx_feedback: A

Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.

Parameters (JSON Schema)

type (required): bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else.
context (optional): Optional structured context: which tool, pack, or vertical this relates to.
message (required): Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max.
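
A sketch of a data-gap report under the same `client` assumption; the shape of the optional `context` object is not specified in the schema, so the field shown is a guess:

```typescript
// Rate-limited to 5 calls per identifier per day.
await client.callTool({
  name: "pipeworx_feedback",
  arguments: {
    type: "data_gap",
    message: "compare_entities lacks gross margin for company comparisons.",
    context: { tool: "compare_entities" }, // hypothetical shape; the schema leaves it open
  },
});
```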
Behavior: 4/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It reveals a rate limit of 5 messages per identifier per day and a constraint against pasting the end-user's prompt verbatim. It does not say what happens after sending (e.g., confirmation), but the core behaviors are well communicated.

Conciseness: 5/5

The description is concise, covering purpose and usage plus constraints (the rate limit and quota note). Every sentence earns its place, with no redundancy.

Completeness: 4/5

Given the tool's moderate complexity (3 parameters, including a nested object) and no output schema, the description adequately covers purpose, usage, parameter semantics, and constraints. It could optionally mention confirmation behavior, but it is sufficiently complete for an agent to invoke correctly.

Parameters: 4/5

The input schema descriptions already cover 100% of the parameters in detail (e.g., the type enum values, the context fields, the message length cap). The description adds value by reinforcing how to phrase the message (in terms of Pipeworx tools and data) and by adding the rate-limit context.

Purpose: 5/5

The description clearly states that the tool sends feedback to the Pipeworx team and lists specific use cases (bug reports, feature requests, missing data, praise). It effectively distinguishes itself from sibling tools like ask_pipeworx by focusing on feedback rather than queries.

Usage Guidelines: 4/5

The description explicitly states when to use the tool (for bug reports, feature requests, etc.) and gives actionable instructions (describe the issue in terms of Pipeworx tools, avoid verbatim prompts, respect the rate limit). It does not explicitly state when not to use it, but the sibling tool names imply alternatives.

random_quote: A (Read-only)

Get a random quote, optionally filtered by tag (e.g., "wisdom", "humor") or author name. Returns quote text, author, and tags.

Parameters (JSON Schema)

tags (optional): Filter by tag(s). Use comma for AND, pipe for OR, e.g. "wisdom" or "humor|science"
limit (optional): Number of quotes to return (1–50, default 1)
author (optional): Filter by author slug, e.g. "albert-einstein"
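
A sketch exercising the documented filter syntax (comma for AND, pipe for OR), assuming the connected `client` from the first example:

```typescript
// "humor|science" means humor OR science; "humor,science" would mean AND.
const quote = await client.callTool({
  name: "random_quote",
  arguments: { tags: "humor|science", limit: 1 },
});
```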
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It mentions the return fields (text, author, tags) but does not disclose whether the call is read-only, any rate limits, or the behavior on empty results.

Conciseness: 5/5

Front-loaded, with no extraneous information. Efficiently conveys the core functionality.

Completeness: 4/5

For a simple tool with no output schema and good parameter descriptions, the description adequately covers purpose, filters, and return type. Missing usage guidance limits completeness slightly.

Parameters: 3/5

The schema has 100% description coverage; the description briefly reiterates the filter options with examples but adds minimal new semantic value beyond the schema.

Purpose: 5/5

Clearly states that the tool returns a random quote with optional filters by tag or author. Distinguishes itself from the sibling 'search_quotes' by focusing on randomness.

Usage Guidelines: 3/5

Implies usage for random quotes but does not explicitly explain when to use this over 'search_quotes' or provide when-not-to-use conditions.

recall: A (Read-only)

Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.

Parameters (JSON Schema)

key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 3/5

No annotations are provided, so the description must carry the burden. It describes the basic behavior (retrieve or list) but doesn't detail the data format, persistence guarantees, or error cases. Adequate for a simple read operation.

Conciseness: 5/5

Clear, with a front-loaded purpose and immediate usage guidance. No wasted words.

Completeness: 4/5

The tool has simple behavior (retrieve or list) and a single optional parameter, so no output schema is needed. The description covers the core functionality sufficiently for an agent to decide and invoke correctly.

Parameters: 3/5

The schema already provides a clear description of the single optional parameter (key). The description adds value by explaining the behavior when the key is omitted versus provided, but schema coverage is 100%, so the baseline of 3 applies.

Purpose: 5/5

Clearly states that the tool retrieves a memory by key or lists all saved keys, distinguishing it from 'remember' (store) and 'forget' (delete).

Usage Guidelines: 5/5

Explicitly says when to use it (to retrieve saved context) and when to omit the key (to list everything). No alternatives are mentioned, but the siblings cover the opposite operations.

recent_changes: A (Read-only)

What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.

Parameters (JSON Schema)

type (required): Entity type. Only "company" supported today.
since (required): Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring.
value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193").
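
A monitoring-style sketch using the relative shorthand the schema documents, under the same connected-`client` assumption:

```typescript
// "30d" is the relative shorthand; an ISO date like "2026-04-01" also works.
const changes = await client.callTool({
  name: "recent_changes",
  arguments: { type: "company", value: "AAPL", since: "30d" },
});
```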
Behavior: 4/5

With no annotations, the description details the parallel fan-out to three sources, the return format (structured changes plus a total_changes count and citation URIs), and the accepted date formats. It is missing information on rate limits, latency, and pagination, but it covers the key behavioral aspects.

Conciseness: 5/5

Dense sentences front-load the purpose and actions, then detail the parameters and output. Every phrase adds value; no redundancy or filler.

Completeness: 4/5

Given no annotations and no output schema, the description covers purpose, parameters, behavior, and output. It lacks any mention of result limits or pagination, but is otherwise complete for a monitoring-workflow tool.

Parameters: 4/5

Schema coverage is 100%, but the description adds meaning: it restricts 'type' to 'company', suggests typical 'since' values ("30d", "1m"), and explains that 'value' can be a ticker or CIK. This goes beyond the schema's minimal descriptions.

Purpose: 5/5

The description clearly states that the tool answers "what's new with a company in a given window", restricts the entity type to 'company', and lists the data sources (SEC EDGAR, GDELT, USPTO). It clearly differs from siblings like 'entity_profile' (full profile) and 'compare_entities'.

Usage Guidelines: 4/5

Explicit usage guidance: use it for "brief me on what happened with X" requests or change-monitoring workflows. It provides examples of 'since' formats but lacks explicit when-not-to-use guidance or comparisons with alternatives.

remember: A

Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.

Parameters (JSON Schema)

key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text — findings, addresses, preferences, notes)
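
Since remember, recall, and forget form one key-value lifecycle, a combined sketch (same connected-`client` assumption) may be clearer than three separate ones:

```typescript
// Save a fact, read it back, then clean it up.
await client.callTool({
  name: "remember",
  arguments: { key: "target_ticker", value: "AAPL" },
});
const saved = await client.callTool({
  name: "recall",
  arguments: { key: "target_ticker" },
});
await client.callTool({ name: "forget", arguments: { key: "target_ticker" } });
```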
Behavior: 4/5

With no annotations, the description carries the full burden. It explains persistence behavior based on auth state, which is critical for agent understanding. There is no mention of overwrite behavior or size limits, but the key behavioral aspects are covered.

Conciseness: 5/5

Every sentence is meaningful: what it does, when to use it, and the behavioral nuance (persistence). No wasted words.

Completeness: 4/5

Given the simple schema (two string parameters, no output) and no annotations, the description is nearly complete. It could mention overwrite behavior, but that omission is minor.

Parameters: 3/5

The schema already has 100% coverage, with good descriptions for both key and value. The description adds value by providing example keys and clarifying that the value can be any text, but this is marginal.

Purpose: 5/5

The description clearly states the verb ('save'), the resource (a key-value pair scoped by identifier), and specific use cases (a resolved ticker, a target address, user preferences, a research subject). Distinct from the siblings 'recall' (retrieve) and 'forget' (delete).

Usage Guidelines: 4/5

Describes when to use it (to save context for reuse) and differentiates authenticated sessions (persistent) from anonymous ones (24 hours). However, it names no explicit alternatives and gives no when-not-to-use guidance.

resolve_entity: A (Read-only)

Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.

Parameters (JSON Schema)

type (required): Entity type: "company" or "drug".
value (required): For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin").
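
A sketch of the name-to-identifier step the description says to run BEFORE ID-based tools, under the same connected-`client` assumption:

```typescript
// Maps "Apple" to AAPL / CIK 0000320193 per the description's own example.
const ids = await client.callTool({
  name: "resolve_entity",
  arguments: { type: "company", value: "Apple" },
});
```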
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It discloses that resolution is a single call and returns stable resource URIs, but it lacks details on error handling, ambiguity (e.g., a name matching multiple entities), and side effects.

Conciseness: 5/5

Front-loaded with purpose, and every sentence adds value: functionality, inputs, outputs, and benefits. No unnecessary words.

Completeness: 4/5

Given two parameters, no output schema, and no annotations, the description covers the inputs well and mentions the output fields. However, it lacks details on potential multiple matches, error handling, and case sensitivity, which would improve completeness.

Parameters: 5/5

The input schema has 100% coverage, but the description adds practical context: it spells out the identifier systems behind 'type' and illustrates 'value' with examples (ticker, CIK, name). This enriches understanding beyond the schema descriptions.

Purpose: 5/5

The description clearly states that it resolves a company or drug to canonical identifiers (CIK, ticker, RxCUI, LEI), specifies the accepted input formats (ticker, CIK, name), and lists the return values. It distinguishes itself from alternatives by noting that it replaces 2–3 lookup calls.

Usage Guidelines: 4/5

The description says when to use it (a single call for entity resolution, explicitly BEFORE other tools that need official identifiers). However, it does not explicitly state when not to use it or compare itself to sibling tools like ask_pipeworx.

search_quotes: B (Read-only)

Search quotes by keyword or phrase. Returns matching quotes with author names and topic tags.

Parameters (JSON Schema)

limit (optional): Number of results per page (1–150, default 20)
query (required): Keyword or phrase to search for in quote content
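
A sketch of a keyword search, assuming the connected `client` from the first example:

```typescript
// Full-text match against quote content; limit caps results per page.
const matches = await client.callTool({
  name: "search_quotes",
  arguments: { query: "imagination", limit: 5 },
});
```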
Behavior: 2/5

No annotations are provided, so the description must carry the full burden. It mentions that the return includes author names and topic tags, but it does not disclose pagination, the default limit, auth requirements, or the behavior when nothing matches.

Conciseness: 5/5

Two sentences: the first states the action, the second the output. No fluff; every word adds value.

Completeness: 3/5

For a simple tool with two parameters and no output schema, the description is fairly complete but lacks details on the result structure (e.g., quote text, id) and pagination behavior.

Parameters: 3/5

Schema description coverage is 100%, so the baseline is 3. The description does not add any parameter information beyond the schema's descriptions.

Purpose: 5/5

The description clearly states the action (search), the resource (quotes), and what is returned (matching quotes with author names and topic tags). It distinguishes itself from sibling tools like get_authors and list_tags, which focus on different aspects.

Usage Guidelines: 2/5

No explicit guidance on when to use this tool versus alternatives such as get_authors or list_tags. The description only states its function without providing context for selection.

validate_claim: A (Read-only)

Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).

Parameters (JSON Schema)

claim (required): Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year".
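
A sketch using the schema's own example claim, under the same connected-`client` assumption:

```typescript
// Expect a verdict (confirmed / refuted / ...) plus the actual value and delta.
const verdict = await client.callTool({
  name: "validate_claim",
  arguments: { claim: "Apple's FY2024 revenue was $400 billion" },
});
```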
Behavior: 4/5

With no annotations, the description adequately discloses the behavioral traits: supported claim types, sources, output format, and the fact that it is a v1 tool. It does not mention auth or rate limits, but for a read-only fact-check this is acceptable. Additional context on limitations or error handling could improve transparency.

Conciseness: 5/5

Each sentence adds essential information: purpose, scope, and benefit. The description is front-loaded and contains no redundant or filler content. Every sentence earns its place.

Completeness: 5/5

Given the tool's simplicity (one parameter, no annotations, no output schema), the description covers the input, domain, sources, output format, and value proposition. It is sufficiently complete for an agent to correctly select and invoke the tool for financial claim verification.

Parameters: 5/5

The single parameter 'claim' has a schema description. The tool description adds significant value by clarifying the domain (company-financial claims) and providing concrete examples. It also explains the output structure, which compensates for the lack of an output schema. This goes well beyond the schema's minimal description.

Purpose: 5/5

The description clearly states the tool's purpose: fact-checking natural-language claims against authoritative sources. It specifies the domain (company-financial claims for public US companies), the sources (SEC EDGAR + XBRL), and the output types. This distinguishes it from sibling tools like ask_pipeworx, which are more general.

Usage Guidelines: 4/5

The description implies usage for financial claim verification and notes that it replaces multiple agent calls, suggesting efficiency. However, it does not explicitly state when not to use it or compare itself to sibling alternatives. The guidance is clear but lacks exclusions.
