Server Details

BIS MCP — Bank for International Settlements statistics (no auth)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-bis
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

[Diagram: MCP client → Glama → MCP server]

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.4/5 across 14 of 14 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clear, distinct purpose: ask_pipeworx handles general Q&A routing, compare_entities performs side-by-side comparisons, discover_tools helps find tools, entity_profile provides comprehensive company profiles, etc. No two tools overlap in functionality.

Naming Consistency: 4/5

Most tool names follow a verb_noun pattern (e.g., compare_entities, fetch_dataset, resolve_entity). A few exceptions like entity_profile and recent_changes are noun-based, and the memory tools (forget, recall, remember) are single verbs. Overall pattern is consistent with minor deviations.

Tool Count: 5/5

With 14 tools covering business intelligence, BIS data, memory, feedback, and claim validation, the count is well-scoped. Each tool serves a necessary function without redundancy or bloat.

Completeness: 5/5

The tool set covers key operations: entity resolution, profiling, comparison, changes tracking, claim validation, BIS data flows, and a general Q&A router. There are no obvious gaps for the intended business intelligence domain.

Available Tools

14 tools
ask_pipeworx: A

Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
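
To make the call shape concrete, here is a minimal client sketch. It assumes the official MCP Python SDK (the `mcp` package) and a placeholder endpoint URL; only the tool name and its single `question` argument come from the listing itself.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder URL: substitute the server's real Streamable HTTP endpoint.
SERVER_URL = "https://example.com/mcp"

async def main() -> None:
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # The schema has a single required argument: `question`.
            result = await session.call_tool(
                "ask_pipeworx",
                {"question": "What is the US trade deficit with China?"},
            )
            print(result.content)

asyncio.run(main())
```

The sketches for the other tools below reuse this `session` and omit the connection boilerplate.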

Parameters (JSON Schema)
- question (required): Your question or request in natural language

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears the full burden. It mentions that Pipeworx picks the right tool, fills arguments, and returns the result, but lacks detail on behavior like error handling, latency, authentication needs, or read-only nature. This is adequate but not rich.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two paragraphs with clear front-loading: first sentence states purpose, then usage guidelines and examples. It is fairly concise with no wasted words, though slightly longer than necessary.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given a single parameter, no output schema, and no annotations, the description covers purpose, when to use, and example queries. It mentions the breadth of sources, which is sufficient context for a straightforward query tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with one 'question' parameter described generically. The description adds significant meaning by providing examples of question types and mentioning the range of sources, which helps the agent understand what constitutes a valid query.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool answers natural-language questions by picking the right data source, with specific verb 'Answer' and resource 'natural-language question'. It distinguishes from sibling tools by highlighting automatic routing across 300+ sources, avoiding the need to figure out which Pipeworx tool to call.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use ('Use when...') and provides example queries that trigger usage. It implies alternatives by noting you don't need to figure out which tool to call, but does not explicitly mention when not to use or list specific sibling alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_entities: A

Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
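
The two modes differ only in the `type` and `values` arguments; a hedged sketch of both shapes, using the listing's own example values:

```python
# type="company": 2-5 tickers/CIKs, compared on SEC EDGAR/XBRL fundamentals.
companies = await session.call_tool(
    "compare_entities",
    {"type": "company", "values": ["AAPL", "MSFT"]},
)

# type="drug": 2-5 names, compared on FAERS, FDA approval, and trial counts.
drugs = await session.call_tool(
    "compare_entities",
    {"type": "drug", "values": ["ozempic", "mounjaro"]},
)
```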

Parameters (JSON Schema)
- type (required): Entity type: "company" or "drug".
- values (required): For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]).

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Explains data sources (SEC EDGAR/XBRL, FAERS) and returned metrics but omits behavioral details like authorization needs, error handling, or pagination. Partially adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is front-loaded with core purpose, then usage triggers, then type-specific details. Efficient overall but could be slightly more concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists; the description partially explains the returns ('paired data + citation URIs'). It covers the two main modes but lacks detail on input format specifics (e.g., tickers vs. CIKs) and error scenarios. Adequate but not exhaustive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%. Description adds value by providing examples and clarifying data sources per parameter type beyond the schema's basic descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool compares 2–5 companies or drugs side by side, uses specific verbs ('Compare'), and distinguishes from siblings like entity_profile (single entity) and fetch_dataset (generic data).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly lists user query triggers ('compare X and Y', 'X vs Y', etc.) and differentiates between company and drug types. Does not explicitly mention alternatives among siblings but covers usage context well.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools: A

Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
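
A minimal invocation sketch, borrowing the query string from the schema's own examples:

```python
# Returns the top-N most relevant tools (default 20, max 50 per the schema).
tools = await session.call_tool(
    "discover_tools",
    {"query": "analyze housing market trends", "limit": 10},
)
```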

Parameters (JSON Schema)
- limit (optional): Maximum number of tools to return (default 20, max 50)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description bears full responsibility. It indicates read-like behavior and mentions 'top-N' results, but lacks details on authentication, result ordering, or pagination. Adequate but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficient and front-loaded, but the list of domains makes it slightly longer than necessary. Still, every sentence contributes meaning.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 parameters, no output schema), the description covers purpose, usage, parameter semantics, and strategic context thoroughly. No gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, baseline 3. Description adds value by explaining query examples ('analyze housing market trends') and hinting at limit's role ('top-N'), going beyond schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs like 'browse, search, look up, discover' and lists numerous data domains. It distinguishes itself from sibling tools by targeting tool discovery, not data operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly advises when to use the tool ('when you need to browse, search, look up, or discover what tools exist') and provides a strategic recommendation to call this first when many tools are available, indicating when not to use it for direct answers.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

entity_profile: A

Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
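
Both accepted input formats are sketched below; the ticker and zero-padded CIK are the description's own examples and identify the same company:

```python
# By ticker.
profile = await session.call_tool(
    "entity_profile", {"type": "company", "value": "AAPL"}
)

# By zero-padded CIK. Names are not accepted: resolve_entity comes first.
profile = await session.call_tool(
    "entity_profile", {"type": "company", "value": "0000320193"}
)
```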

Parameters (JSON Schema)
- type (required): Entity type. Only "company" supported today; person/place coming soon.
- value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses exactly what data is returned (SEC filings, fundamentals, patents, news, LEI) and mentions the output includes pipeworx:// citation URIs. The limitation that only the 'company' type is supported is stated. An explicit statement of read-only behavior is missing, though nothing in the description hints at destructive actions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is moderately long but well-structured: it starts with the core purpose, lists use cases, summarizes returns, and ends with input format. Every sentence adds value, though some readability could be improved by splitting into shorter sentences. Still efficient for the information density.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description thoroughly explains the return values (SEC filings, fundamentals, patents, news, LEI) and mentions citation URIs. It covers limitations (only company type, names not supported) and provides alternative for name resolution. The complexity of aggregating multiple data sources is appropriately addressed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds value by explaining the 'type' enum (only 'company' supported, others coming soon) and that 'value' must be a ticker or CIK, not a name. It also directs to resolve_entity for name inputs, which is helpful beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get everything about a company') and resource ('in one call'), clearly distinguishing it from sibling tools by noting it replaces calling 10+ pack tools across multiple domains. Examples of user queries ('tell me about X', 'research Microsoft') further clarify its purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use (user asks for a company profile) and when not to use (if you only have a name, use resolve_entity first). It provides clear input constraints (ticker or CIK only) and mentions an alternative sibling tool by name.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fetch_dataset: A

Fetch tidy rows from a BIS dataflow. flow_ref examples: "BIS,WS_CBPOL_D,1.0" (daily policy rates). The key string is a dot-separated dimension filter (e.g., "D.US" — frequency.country). Use start_period / end_period like "2020", "2020-Q1", "2020-01".
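
Tying the three conventions together (the comma-separated flow_ref, the dot-separated key, and the period strings), a sketch built from the description's own examples:

```python
# Daily US policy rates for 2020. key "D.US" filters frequency.country;
# start_period/end_period are inclusive and accept "2020", "2020-Q1", "2020-01".
rows = await session.call_tool(
    "fetch_dataset",
    {
        "flow_ref": "BIS,WS_CBPOL_D,1.0",
        "key": "D.US",
        "start_period": "2020-01",
        "end_period": "2020-12",
        "limit": 1000,
    },
)
```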

Parameters (JSON Schema)
- key (optional): Dot-separated dimension key (empty for all)
- limit (optional): Cap rows (default 5000)
- flow_ref (required): SDMX dataflow reference
- end_period (optional): Inclusive end
- start_period (optional): Inclusive start

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must fully disclose behavior. Mentions 'tidy rows' but lacks details on idempotency, limits, caching, or auth. Adequate but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose, each sentence adds value. Could be slightly more structured but efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 5 params and no output schema, description provides sufficient context to use correctly. Could add more about return format or pagination, but not strictly required.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (baseline 3). Description adds concrete examples for flow_ref, key, and period formats, which adds meaning beyond the generic schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Fetch tidy rows from a BIS dataflow' with specific verb and resource. Provides examples of flow_ref and key format, and distinguishes from sibling tools like search_dataflows which are for metadata.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Includes parameter usage examples for start_period/end_period and key format, guiding proper invocation. Does not explicitly mention when to use versus alternatives, but context with siblings implies uniqueness.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget: A

Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.

Parameters (JSON Schema)
- key (required): Memory key to delete

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Clearly indicates the destructive nature ('delete') and implies the memory must exist ('previously stored'). However, does not specify behavior if the key does not exist (e.g., error or no-op). For a simple tool, this is minor but could be improved.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences: action, usage context, sibling pairing. No extraneous information; front-loaded with verb and resource.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity (1 param, no output schema), the description covers purpose, usage, and how it fits with sibling tools. It is complete for an agent to understand when and why to use it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with the key parameter described as 'Memory key to delete'. The description does not add additional parameter semantics beyond the schema, so baseline score applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Delete' and the resource 'a previously stored memory by key'. It distinguishes itself from siblings 'remember' (store) and 'recall' (retrieve) by focusing on deletion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly provides when to use: 'when context is stale, the task is done, or you want to clear sensitive data'. Also pairs with 'remember' and 'recall' for complete workflow, giving clear context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_curated_flows: A

List BIS dataflow refs we have pre-vetted, grouped by topic (rates, fx, banking, debt, credit, property, derivatives, finance). Use the flow_ref with fetch_dataset. For everything else use search_dataflows or browse https://stats.bis.org.
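
A one-call sketch with the optional topic filter; "rates" is one of the topics the description enumerates:

```python
# Omit "topic" to list every curated flow, grouped by topic.
flows = await session.call_tool("list_curated_flows", {"topic": "rates"})
# Each returned flow_ref can be passed straight to fetch_dataset.
```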

Parameters (JSON Schema)
- topic (optional): Optional topic filter

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should detail behavioral traits like side effects, data freshness, or pagination. It only mentions 'pre-vetted' but does not explain implications. A list operation is inherently safe, but transparency could be improved.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words. The first sentence states the action and grouping, the second provides usage guidance and alternatives.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple list tool with one optional parameter and no output schema, the description covers purpose, grouping, and usage guidance. It does not detail the return format, but that may be inferred from context. Slightly lacking in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% coverage with one optional 'topic' parameter. The description adds value by listing valid topic values (rates, fx, etc.), though it does not explicitly say the parameter filters by these values. The agent can infer the connection.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it lists pre-vetted BIS dataflow refs grouped by topic, and explicitly lists the topics. It distinguishes itself from sibling tools by directing users to search_dataflows for other needs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use (for pre-vetted BIS dataflows) and when-not-to-use (everything else, recommending search_dataflows or a URL). This gives clear guidance for the agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pipeworx_feedback: A

Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
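
A hypothetical bug report is sketched below; the shape of the optional `context` object is an assumption based on the schema's "tool, pack, or vertical" wording:

```python
# Rate-limited to 5 submissions per identifier per day.
await session.call_tool(
    "pipeworx_feedback",
    {
        "type": "bug",
        "message": "fetch_dataset returned stale rows for BIS,WS_CBPOL_D,1.0.",
        "context": {"tool": "fetch_dataset"},  # assumed field name
    },
)
```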

Parameters (JSON Schema)
- type (required): bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else.
- context (optional): Optional structured context: which tool, pack, or vertical this relates to.
- message (required): Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max.

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses rate limits (5/identifier/day), free usage (no quota impact), team reading frequency (daily digests), and roadmap impact. It also warns about what not to include (end-user prompts), ensuring proper usage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with purpose, then usage, then behavior. While thorough, a few sentences could be tightened (e.g., the purpose is restated in the type descriptions). Still, it remains efficient and clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (3 params, no output schema, no annotations), the description fully covers purpose, usage, parameter semantics, limitations, and impact. Missing a confirmation note is minor; the description is sufficiently complete for an agent to invoke correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but the description adds meaning by explaining enum values in context, clarifying the optional context object's fields (pack, tool, vertical), and advising on message formatting (specific, 1-2 sentences, 2000 chars max). This goes well beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Tell the Pipeworx team something is broken, missing, or needs to exist.' It specifies the verb-resource relationship and distinguishes it from data-retrieval siblings by focusing on feedback. Examples of scenarios (bug, feature, data_gap, praise) add clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly defines when to use the tool: for bugs, feature requests, data gaps, or praise. It advises against pasting end-user prompts and provides rate-limit and quota information, giving clear contextual guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall: A

Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.

Parameters (JSON Schema)
- key (optional): Memory key to retrieve (omit to list all keys)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description covers key behavioral traits: it is a read operation ('retrieve'), scoped to user identifier, and returns either a saved value or list of keys. Does not detail error behavior for missing keys, but adequate for a simple retrieval tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, front-loaded with the core action and use case. Every sentence adds value (purpose, usage example, scoping, pairing). Could be slightly shorter but no extraneous content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description explains return behavior (value or list of keys). Covers scoping, pairing with related tools, and usage context. Complete enough for a simple memory retrieval tool with one optional parameter.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with a clear description for the single parameter 'key.' The description adds context (omit to list all keys) but does not provide additional semantic meaning beyond the schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the verb 'retrieve' and the object 'value previously saved via remember.' Distinguishes from siblings by explicitly naming 'remember' and 'forget' and describing behavior when key is omitted (list all keys).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit usage examples (ticker, address, research notes) and hints at when to use ('without re-deriving from scratch'). Mentions scoping but does not list explicit alternatives or when-not-to-use conditions beyond the context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recent_changes: A

What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
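
Both `since` formats are sketched below, using the description's own values:

```python
# Relative shorthand: the last 30 days of filings, news, and patents.
changes = await session.call_tool(
    "recent_changes",
    {"type": "company", "value": "AAPL", "since": "30d"},
)

# ISO date: everything since a fixed day, addressed by zero-padded CIK.
changes = await session.call_tool(
    "recent_changes",
    {"type": "company", "value": "0000320193", "since": "2026-04-01"},
)
```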

Parameters (JSON Schema)
- type (required): Entity type. Only "company" supported today.
- since (required): Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring.
- value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193").

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses the parallel fan-out to three data sources (SEC EDGAR, GDELT, USPTO) and the return structure (structured changes, count, URIs). Since no annotations are provided, the description carries the full burden and handles it well. No side effects or destructive actions implied.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three tightly written sentences, no redundancy. Front-loads the purpose and then provides essential detail. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description adequately explains the return values (structured changes, total_changes count, URIs). It covers the main aspects an agent needs to know. Could mention any limitations or error conditions, but overall sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds extra clarity by detailing the 'since' parameter format with examples (ISO dates, relative shorthand like '7d', '30d'), and provides concrete examples for 'value' (AAPL, CIK). This goes beyond the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool's function as retrieving recent changes for a company, with specific example queries and explicit differentiation from static profile tools like entity_profile. The description is unambiguous and context-rich.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit usage guidance with example user intents ('what's happening with X?', 'any updates on Y?') and even example queries. Does not explicitly state when not to use, but the context is clear enough. Lacks direct sibling comparison, but the examples effectively convey appropriate scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember: A

Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
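
The three memory tools (remember, recall, forget) form a single save/retrieve/delete cycle, sketched below; the key name follows the schema's examples:

```python
# Save a resolved ticker (authenticated users persist; anonymous: 24 hours).
await session.call_tool("remember", {"key": "target_ticker", "value": "AAPL"})

# Retrieve it later, or omit "key" entirely to list all saved keys.
saved = await session.call_tool("recall", {"key": "target_ticker"})

# Delete once the task is done or the context is stale.
await session.call_tool("forget", {"key": "target_ticker"})
```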

Parameters (JSON Schema)
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, description discloses key behaviors: key-value storage, scoping by identifier, persistence differences between authenticated and anonymous users. Does not state overwrite behavior on duplicate keys, but overall sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is a single paragraph with no wasted words. Each sentence adds critical information: purpose, usage cues, scoping, persistence, and sibling tools. Front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple two-parameter tool with no output schema, the description provides all necessary context: how data is stored, retention policies, scope, and relationships with siblings. Nothing essential is missing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers 100% of parameters with descriptions, but description adds value by providing concrete examples for 'key' (e.g., 'subject_property') and clarifying 'value' accepts any text. This helps the agent understand usage beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Save data the agent will need to reuse later' with specific examples like 'resolved ticker, target address'. It distinguishes from siblings by mentioning recall and forget, making the tool's purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use when you discover something worth carrying forward' and explains scoping and persistence. No explicit 'when not to use' but context is clear enough.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

resolve_entity: A

Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
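
A name-to-identifier sketch; the expected identifiers are the ones the description itself cites:

```python
# "Apple" -> AAPL / CIK 0000320193 per the description, plus citation URIs.
ids = await session.call_tool(
    "resolve_entity", {"type": "company", "value": "Apple"}
)
# The returned IDs feed tools that require them, e.g. entity_profile.
```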

Parameters (JSON Schema)
- type (required): Entity type: "company" or "drug".
- value (required): For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin").

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations; the description compensates by stating that it returns IDs and citation URIs and that it replaces multiple calls. As a lookup, it is implicitly non-destructive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four concise sentences: main purpose, details, usage instruction, efficiency note. Front-loaded with verb and resource.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema; description fully explains return values (IDs + citation URIs) and workflow placement. Complete for a lookup tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers 100% of parameters; description adds value by giving examples of valid inputs for `value` (ticker, CIK, name for company; brand or generic for drug) and clarifies the `type` enum.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool resolves entities to canonical identifiers (CIK, ticker, RxCUI, LEI) and provides examples. It distinguishes from siblings by emphasizing use before other tools needing IDs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use (when user mentions a name needing a specific ID system) and instructs to use before other tools. Lacks explicit 'when not to use', but context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_dataflows: A

Search the BIS SDMX dataflow registry by keyword. Returns flow_refs ready to pass to fetch_dataset.
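
The search-then-fetch chain the description implies might look like this:

```python
# Keyword search over the BIS SDMX dataflow registry.
found = await session.call_tool(
    "search_dataflows", {"query": "policy rates", "limit": 5}
)
# A returned flow_ref (e.g., "BIS,WS_CBPOL_D,1.0") goes to fetch_dataset.
```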

Parameters (JSON Schema)
- limit (optional): Max results (default 25, max 100)
- query (required): Keyword (matches dataflow name)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description carries burden. It correctly implies a read-only search but does not explicitly state safety or side effects. Adequate for a simple search tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with action and resource, zero unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Mentions return type (flow_refs) and next step (pass to fetch_dataset). Lacks details on pagination or ordering, but sufficient for the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the description adds minimal value beyond the schema; it largely restates the query parameter's 'keyword (matches dataflow name)' description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Explicit verb 'search' and resource 'dataflow registry' with clear output 'flow_refs ready to pass to fetch_dataset'. Distinguishes from sibling tools like fetch_dataset and list_curated_flows.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage chain (search then fetch) by describing output format. No explicit when-not-to-use or alternative guidance, but context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

validate_claim: A

Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
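
A single-claim sketch using one of the schema's example claims; the verdict values are those the description enumerates:

```python
# Verdict is one of: confirmed, approximately_correct, refuted,
# inconclusive, unsupported; plus the actual value, citation, and % delta.
verdict = await session.call_tool(
    "validate_claim",
    {"claim": "Apple's FY2024 revenue was $400 billion"},
)
```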

Parameters (JSON Schema)
- claim (required): Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year".

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses the output format (verdict, structured form, actual value with citation, percent delta) and notes it replaces 4-6 calls. It does not mention side effects, idempotency, or logging, but transparently describes the behavior well.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, front-loaded with the action, and every sentence provides necessary information without redundancy. Four sentences cover purpose, usage, scope, and output efficiently.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with no output schema or annotations, the description is remarkably complete. It explains precisely what inputs are expected, how they are processed, what outputs are returned, and the domain scope. No critical gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and already describes the 'claim' parameter. The description adds value by specifying the supported claim scope (v1 company-financial claims via SEC EDGAR + XBRL), which goes beyond the schema's general description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: fact-checking natural-language claims against authoritative sources, specifically company-financial claims. It distinguishes from siblings like 'entity_profile' and 'resolve_entity' by being a specialized validation tool that replaces multiple sequential calls.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use ('when an agent needs to check whether something a user said is true') and provides examples. It also confines usage to company-financial claims, providing clear context, but does not mention when not to use or compare to specific sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
