DBnomics

Server Details

DBnomics MCP — meta-aggregator over 80+ stats providers

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-dbnomics
GitHub Stars: 0
Server Listing: dbnomics

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade A)

Average 4.3/5 across 16 of 16 tools scored. Lowest: 3.4/5.

Server Coherence (Grade A)
Disambiguation: 4/5

While most tools have distinct purposes, some overlap exists between ask_pipeworx (general Q&A), validate_claim (fact-checking), compare_entities (comparisons), and entity_profile (company profiles). However, descriptions clarify their specific use cases, reducing ambiguity.

Naming Consistency: 4/5

Tool names follow a consistent snake_case pattern, but the verb forms vary (e.g., 'ask_pipeworx', 'compare_entities' vs. 'forget', 'search'). The naming is mostly predictable, with minor deviations from a strict verb_noun pattern.

Tool Count: 4/5

16 tools is slightly above the typical well-scoped range, but each serves a clear purpose in the domain of economic/financial data retrieval and analysis. The count feels appropriate for the server's breadth without being excessive.

Completeness: 4/5

The tool surface covers core workflows: data discovery (list_providers, search), retrieval (get_series), entity resolution (resolve_entity), profiles (entity_profile), comparisons, fact-checking, and memory. Minor gaps like bulk download are absent but not critical for its stated purpose.

Available Tools

16 tools
ask_pipeworx (Grade A)

Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".

Parameters

- question (required): Your question or request in natural language
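
A minimal sketch of calling this tool over MCP, assuming the standard JSON-RPC tools/call envelope; the question value is illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
```

The examples for the remaining tools show only the arguments object; the envelope is the same.
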
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It mentions 'routes across...' but does not disclose how routing works, what happens if no match, data freshness, error handling, or any side effects. Lack of behavioral details is a gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is front-loaded with purpose, then usage guidance and examples. It is concise but informative, with no redundant sentences. Could be slightly more terse, but it effectively communicates the tool's value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description should clarify return format. It says 'returns the result' but not the structure. It covers many sources but doesn't explain how results are presented or pagination. Examples help, but gaps remain for a complex router.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Only one parameter (question) with schema description 'Your question or request in natural language'. Description adds value beyond schema by providing specific examples and explaining the scope of queries it handles. Since schema coverage is 100%, the baseline is 3, and the description improves on it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it answers natural-language questions by routing to the correct data source. Examples and list of sources make the purpose unambiguous. It distinguishes from siblings like search or find_series by being a high-level router.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says when to use (natural-language questions like 'What is X?', 'Find Y') and that it avoids needing to manually pick a tool. Does not explicitly state when not to use or mention alternatives, but the context implies it is for broad queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_entities (Grade A)

Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.

Parameters

- type (required): Entity type: "company" or "drug".
- values (required): For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]).
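
Illustrative arguments for a company comparison (same tools/call envelope as above); the tickers are examples taken from the description:

```json
{
  "type": "company",
  "values": ["AAPL", "MSFT", "GOOGL"]
}
```
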
Behavior: 4/5

No annotations provided, so description carries full burden. It discloses data sources (SEC EDGAR/XBRL for companies, FAERS for drugs) and that the output includes citation URIs. Could be improved by mentioning data freshness or limitations, but no contradictions.

Conciseness: 5/5

Every sentence contributes value. Purpose is front-loaded, no wasted words. Length is appropriate for the tool's complexity.

Completeness: 4/5

Given the tool's complexity (two entity types, multiple data fields), the description covers the main behaviors and output. No output schema, so description adequately describes return values. Minor omission: no mention of error cases or rate limits.

Parameters: 4/5

Schema has 100% coverage, but description adds significant meaning: explains what each type retrieves and provides examples for values. It also specifies the data sources, which is beyond the schema.

Purpose: 5/5

Description clearly states it compares 2-5 companies or drugs side by side, with specific verbs and resources. It distinguishes from siblings by noting it replaces 8-15 sequential agent calls.

Usage Guidelines: 4/5

Explicitly lists trigger phrases like 'compare X and Y' and 'X vs Y', and specifies when to use company vs drug type. However, it does not mention scenarios where this tool should not be used (e.g., comparing a single entity).

discover_tools (Grade A)

Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).

Parameters

- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
- limit (optional): Maximum number of tools to return (default 20, max 50)
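
Illustrative arguments, reusing a query example from the schema and a limit under the documented max of 50:

```json
{
  "query": "find trade data between countries",
  "limit": 10
}
```
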
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It does disclose the output: 'Returns the top-N most relevant tools with names + descriptions.' However, it lacks details about potential behaviors such as rate limits, caching, authentication, or whether the search is deterministic. A 3 is appropriate as it adds some transparency but not full depth.

Conciseness: 4/5

The description is front-loaded with a clear purpose and usage guidance. It contains a useful list of domains but is slightly verbose. Every sentence adds value, though the list could be trimmed without loss of clarity. Still, it is efficient for the information it conveys.

Completeness: 4/5

Given there is no output schema, the description adequately explains what is returned. It covers both parameters with context and provides a usage scenario (call it first when many tools are available). The tool is straightforward (discovery/search) and the description fills the gaps for an agent to invoke it correctly.

Parameters: 4/5

Schema description coverage is 100%, and the description adds value beyond the schema. It explains the 'limit' parameter defaults (20, max 50) and reinforces that 'query' is natural language. The description also clarifies the nature of the return (top-N most relevant) which is not in the schema.

Purpose: 5/5

The description clearly states what the tool does: 'Find tools by describing the data or task.' It uses a specific verb+resource and distinguishes itself by listing many domains (SEC filings, financials, etc.) and indicating it returns top-N relevant tools with names and descriptions. This contrasts with sibling tools that likely have more specific purposes.

Usage Guidelines: 5/5

The description explicitly states when to use this tool: 'Call this FIRST when you have many tools available and want to see the option set (not just one answer).' It also provides a list of domains to hint at appropriate contexts, effectively guiding when to use it over other tools.

entity_profile (Grade A)

Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".

Parameters

- type (required): Entity type. Only "company" supported today; person/place coming soon.
- value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name.
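
Illustrative arguments using the zero-padded CIK form the description calls for:

```json
{
  "type": "company",
  "value": "0000320193"
}
```
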
Behavior: 4/5

No annotations provided, so description carries full burden. It discloses all returned data types and the use of pipeworx:// citation URIs, but could mention potential limits or pagination.

Conciseness: 4/5

Front-loaded with purpose, but the single dense paragraph could be broken up for readability. No wasted words.

Completeness: 4/5

No output schema, so description explains return data comprehensively. Could mention if results are paginated or limited, but covers the tool's main value well.

Parameters: 4/5

Schema covers 100% of parameters. Description adds value by clarifying that type only supports 'company' and value must be ticker or CIK (not name), and recommends resolve_entity for name lookup.

Purpose: 5/5

Starts with 'Get everything about a company in one call,' a clear verb+resource. Lists specific outputs (SEC filings, fundamentals, patents, news, LEI) and distinguishes from siblings by noting it replaces 10+ pack tool calls.

Usage Guidelines: 5/5

Explicitly provides when-to-use examples (user asks 'tell me about X', 'research Microsoft') and when-not-to-use guidance (names not supported, use resolve_entity first).

find_series (Grade A)

Browse series with structured filters. Useful when you know the provider+dataset and want to enumerate series by dimensions.

Parameters

- provider (required): Provider code
- dataset (required): Dataset code
- dimensions (optional): Optional dimension filter map, e.g. {"FREQ":"A","COUNTRY":"DE"}
- observations (optional): Include observations (default false to save bandwidth)
- limit (optional): 1-1000 (default 100)
- offset (optional): 0-based offset
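
A hedged sketch of a filtered browse; the provider and dataset codes are placeholders, and the dimension filter reuses part of the schema's example:

```json
{
  "provider": "ECB",
  "dataset": "EXR",
  "dimensions": { "FREQ": "A" },
  "observations": false
}
```
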
Behavior: 3/5

With no annotations, the description carries the burden of disclosure. It implies a read operation ('browse', 'enumerate') and mentions bandwidth saving via the observations parameter, but lacks details on side effects or limits.

Conciseness: 5/5

Two clear, concise sentences with no wasted words. They efficiently convey purpose and usage context.

Completeness: 4/5

The description covers key aspects (provider, dataset, dimensions) for a browse tool. However, with no output schema, it could mention the return format. Still, it is largely complete.

Parameters: 3/5

Input schema has 100% description coverage, so the description adds little beyond reinforcing the required parameters. Baseline of 3 is appropriate.

Purpose: 5/5

The description clearly states the tool browses series with structured filters and specifies when it is useful, distinguishing it from siblings like get_series and search.

Usage Guidelines: 4/5

It provides guidance on when to use the tool ('when you know the provider+dataset'), but does not explicitly mention when not to use it or suggest alternatives.

forget (Grade A)

Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.

Parameters

- key (required): Memory key to delete
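
Illustrative arguments; the key is a hypothetical memory key saved earlier via remember:

```json
{ "key": "target_ticker" }
```
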
Behavior: 4/5

With no annotations, the description alone must disclose behavior. It clearly indicates deletion (destructive), but could add detail about what happens if the key does not exist or whether deletion is irrevocable.

Conciseness: 5/5

The description is three short sentences, highly concise, with no wasted words. Every sentence adds value: the first defines the action, the second provides usage guidance, and the third names the companion tools.

Completeness: 4/5

For a simple one-parameter deletion tool, the description is mostly complete. However, it does not specify the return behavior (e.g., confirmation or error) or what happens if the key is missing, which is a minor gap.

Parameters: 3/5

The schema already covers 100% of parameters and includes a description for 'key.' The tool description adds no extra semantic detail beyond the schema, meeting the baseline.

Purpose: 5/5

The description clearly states 'Delete a previously stored memory by key,' specifying a precise verb and resource. It distinguishes itself from sibling tools like 'remember' and 'recall' by its deletion action.

Usage Guidelines: 5/5

The description explicitly states when to use the tool: 'when context is stale, the task is done, or you want to clear sensitive data.' It also recommends pairing with 'remember' and 'recall,' providing clear context.

get_series (Grade A)

Fetch a specific time series. Series identified by (provider, dataset, series_code). Include observations to get the actual data points.

Parameters

- provider (required): Provider code
- dataset (required): Dataset code
- series_code (required): Series code (dot-separated dimensions or named code)
- observations (optional): Include data points (default true)
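
A hedged sketch; the codes are placeholders that only illustrate the dot-separated series_code form, not verified identifiers:

```json
{
  "provider": "ECB",
  "dataset": "EXR",
  "series_code": "A.USD.EUR.SP00.A",
  "observations": true
}
```
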
Behavior: 3/5

Discloses that 'observations' includes data points, but no mention of side effects or rate limits. Without annotations, the description carries minimal behavioral context beyond the fetch action.

Conciseness: 5/5

Three sentences with no wasted words; purpose and key detail front-loaded.

Completeness: 4/5

Adequate for a simple fetch tool with 4 params and no output schema. Could mention output format but not essential.

Parameters: 3/5

Schema coverage is 100%, so description adds little beyond repeating identifier tuple. The 'observations' clarification is helpful but already in schema with default true.

Purpose: 5/5

Clearly states 'Fetch a specific time series' with explicit identification fields (provider, dataset, series_code). Distinguishes from sibling 'find_series' which is for searching.

Usage Guidelines: 3/5

Implicitly suggests using when identifiers are known, but no explicit when-to-use or when-not-to-use compared to similar tools like 'find_series'.

list_datasets (Grade B)

List all datasets available from a given provider.

Parameters

- provider (required): Provider code (e.g. "ECB", "BLS", "EUROSTAT")
- limit (optional): 1-1000 (default 100)
- offset (optional): 0-based offset
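
Illustrative arguments using a provider code from the schema example and the documented pagination defaults:

```json
{
  "provider": "BLS",
  "limit": 100,
  "offset": 0
}
```
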
Behavior: 2/5

No annotations are provided, so the description carries the full burden. It does not disclose behavioral traits such as side effects, rate limits, or error handling. It only states the basic operation.

Conciseness: 5/5

The description is a single, clear sentence that delivers the essential information without unnecessary words.

Completeness: 2/5

Without an output schema or annotations, the description is too brief. It does not explain return values, pagination behavior, or handling of invalid provider codes.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents the parameters. The description adds minimal extra meaning beyond 'from a given provider', which is already in the schema.

Purpose: 5/5

The description clearly states the action (list), the resource (datasets), and the scope (from a given provider). It distinguishes from sibling tools like list_providers and search.

Usage Guidelines: 3/5

The description implies usage requires a provider code, but does not provide explicit guidance on when to use this tool versus alternatives, such as search or find_series.

list_providers (Grade A)

List all available statistics providers in DBnomics.

Parameters

No parameters

Behavior: 4/5

With no annotations, the description carries the burden. It discloses that it lists all available providers, implying a read-only, non-destructive operation. No further behavioral details are needed for such a simple tool.

Conciseness: 5/5

A single, clear sentence with no wasted words. Perfectly concise and front-loaded.

Completeness: 5/5

For a zero-parameter, no-output-schema tool with a simple listing purpose, the description fully covers what the agent needs to know.

Parameters: 4/5

No parameters exist, so the schema coverage is trivially 100%. The description adds no parameter info, but none is needed. Baseline 4 applies.

Purpose: 5/5

The description clearly states the action ('list') and the resource ('all available statistics providers in DBnomics'), which distinguishes it from sibling tools that deal with series, datasets, entities, etc.

Usage Guidelines: 3/5

No explicit guidance on when to use this tool versus alternatives, but the purpose is straightforward and siblings are distinct, so usage is implied.

pipeworx_feedback (Grade A)

Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.

Parameters

- type (required): bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else.
- message (required): Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max.
- context (optional): Optional structured context: which tool, pack, or vertical this relates to.
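
Illustrative arguments; the message is a made-up bug report in the requested shape, and the optional context is omitted:

```json
{
  "type": "bug",
  "message": "get_series returned stale observations for a BLS dataset."
}
```
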
Behavior: 5/5

Discloses rate limits (5 per identifier per day), that it's free and doesn't count against quota, and explains how feedback is processed (digests, roadmap impact). This adds valuable context beyond the input schema.

Conciseness: 5/5

Concise paragraph front-loaded with purpose, followed by usage scenarios and constraints. Every sentence adds value without redundancy.

Completeness: 5/5

Fully covers all necessary aspects: when to use, how to format, constraints, and impact. No output schema is needed, and the description is self-sufficient.

Parameters: 5/5

While schema coverage is 100%, the description enriches parameter understanding by explaining the 'type' enum values and providing guidance on what to include in 'message' (specific tool, error, missing data).

Purpose: 5/5

The description clearly states the tool's purpose: to report bugs, missing features, data gaps, or praise. It distinguishes from sibling tools like ask_pipeworx or discover_tools by focusing on feedback to the development team.

Usage Guidelines: 5/5

Explicitly lists when to use the tool for each feedback type (bug, feature/data_gap, praise) and provides a specific instruction to not paste end-user prompts, ensuring proper usage.

recall (Grade A)

Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.

Parameters

- key (optional): Memory key to retrieve (omit to list all keys)
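
Illustrative arguments retrieving a hypothetical key; per the schema, omitting key instead lists all saved keys:

```json
{ "key": "target_ticker" }
```
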
Behavior: 4/5

No annotations provided, so description carries full burden. It discloses scoping and pairing with other tools, but does not explicitly state read-only behavior or error handling (e.g., what happens if key doesn't exist). Still, retrieval semantics are adequately communicated.

Conciseness: 5/5

Four factual sentences, no filler. Action verbs front-loaded, every sentence earns its place.

Completeness: 4/5

Given no annotations and no output schema, the description covers the tool's role in the lifecycle, scoping, and use cases. Missing details on return format or error responses, but adequate for a simple retrieval tool.

Parameters: 3/5

The parameter 'key' is fully described in the schema with 100% coverage. The description reinforces its purpose (omit to list all keys) but adds no significant new meaning beyond the schema.

Purpose: 5/5

The description clearly states the tool's function: retrieve a saved value or list all keys. It explicitly distinguishes from siblings like 'remember' and 'forget' by naming them.

Usage Guidelines: 5/5

The description specifies when to use the tool (look up stored context) and provides clear alternatives: 'pair with remember to save, forget to delete.' It also mentions scoping via identifier.

recent_changes (Grade A)

What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.

Parameters

- type (required): Entity type. Only "company" supported today.
- value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193").
- since (required): Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring.
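
Illustrative arguments using the relative-shorthand window the description recommends for monitoring; the ticker is an example:

```json
{
  "type": "company",
  "value": "MSFT",
  "since": "30d"
}
```
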
Behavior: 4/5

With no annotations provided, the description fully discloses the tool's parallel fan-out behavior to SEC EDGAR, GDELT, and USPTO, and mentions the return structure: structured changes, total_changes count, and pipeworx:// citation URIs. It does not cover rate limits or authentication, but it transparently describes the core operations.

Conciseness: 4/5

The description is front-loaded with the core question, then adds usage examples and technical details in prose rather than lists. It could be slightly more structured, but it efficiently conveys all necessary information.

Completeness: 5/5

Despite lacking an output schema, the description fully explains what the tool returns: structured changes, total_changes count, and citation URIs. It covers three data sources, parameter formats, and example queries, making it self-contained for an agent to understand its functionality.

Parameters: 4/5

Schema coverage is 100%, but the description adds meaningful context beyond the schema: it notes that type currently only supports 'company', provides example values for since (ISO and relative shorthand), and explains that value accepts ticker or CIK. This aids correct invocation.

Purpose: 5/5

The description opens with a clear statement of purpose: 'What's new with a company in the last N days/months?' and provides concrete examples of user queries that trigger this tool. It distinguishes itself from siblings by being the dedicated tool for recent changes, whereas siblings like entity_profile or compare_entities serve different functions.

Usage Guidelines: 4/5

The description explicitly lists common user phrasings that should trigger this tool, such as 'what's happening with X?' and 'brief me on what happened with Microsoft this quarter'. It does not state when not to use it or provide alternatives, but the clear use-case examples make it easy for an agent to decide when to invoke.

remember (Grade A)

Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.

Parameters

- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)
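
Illustrative arguments following the schema's own key-naming examples; the stored value is hypothetical:

```json
{
  "key": "target_ticker",
  "value": "AAPL"
}
```
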
Behavior: 4/5

With no annotations, the description carries the full burden. It discloses key behaviors: key-value storage scoped by identifier, persistent memory for authenticated users, and 24-hour retention for anonymous sessions. It does not mention size limits or conflicts, but it covers essential behavior.

Conciseness: 5/5

The description is concise (five sentences) and front-loaded with the core purpose. Every sentence adds value without redundancy, making it efficient for an AI agent to parse.

Completeness: 5/5

Given the simple key-value model and no output schema, the description fully covers what an agent needs: storage mechanism, scoping, persistence, and pairing with recall/forget. It leaves no significant gaps.

Parameters: 4/5

The schema has 100% description coverage for both parameters. The description adds value by providing example key naming conventions ('subject_property', 'target_ticker'), which aids correct usage beyond the schema's generic descriptions.

Purpose: 5/5

The description begins with 'Save data the agent will need to reuse later — across this conversation or across sessions,' which clearly states the tool's purpose. It uses a specific verb ('Save') and resource ('data') and distinguishes it from sibling tools like 'recall' and 'forget'.

Usage Guidelines: 4/5

It provides explicit guidance on when to use the tool: 'Use when you discover something worth carrying forward... so you don't have to look it up again.' It also mentions pairing with 'recall' and 'forget,' though it doesn't explicitly state when not to use it or list alternatives.

resolve_entity (Grade A)

Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.

Parameters

- type (required): Entity type: "company" or "drug".
- value (required): For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin").
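
Illustrative arguments reusing the drug example from the description:

```json
{
  "type": "drug",
  "value": "ozempic"
}
```
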
Behavior: 4/5

With no annotations provided, the description carries full burden for behavioral disclosure. It details that the tool returns IDs, pipeworx:// citation URIs, and that it is a read operation (implied by 'look up'). It could explicitly state it is read-only or mention any constraints (e.g., no side effects), but the current description is strong.

Conciseness: 5/5

The description is concise, with the main action stated first ('Look up...'), followed by usage context, examples, and return info. Every sentence is essential and no word is wasted. The structure is front-loaded and clear.

Completeness: 5/5

Given the tool has only 2 parameters (both with schema descriptions) and no output schema, the description fully covers what an agent needs: purpose, when to use, input examples, and output details (IDs and URIs). It is complete for effective invocation.

Parameters: 5/5

Schema coverage is 100%, and the description adds substantial meaning beyond the schema. It provides multiple examples for the 'value' parameter (ticker, CIK, name for company; brand or generic for drug) and clarifies the 'type' enum. The examples and context make parameter usage clear.

Purpose: 5/5

The description explicitly states the tool's purpose: 'Look up the canonical/official identifier for a company or drug.' It uses a specific verb ('look up') and resource ('entity'), and clearly differentiates from siblings by focusing on identifier resolution, which is a distinct capability among the server's tools.

Usage Guidelines: 5/5

The description provides explicit usage guidance: 'Use when a user mentions a name and you need the CIK...' and 'Use this BEFORE calling other tools that need official identifiers.' It also lists specific identifier systems and gives concrete examples, including when not to use (implicitly, when no identifier is needed) and an alternative ('replaces 2–3 lookup calls').

validate_claim (Grade A)

Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).

Parameters

- claim (required): Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year".
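
Illustrative arguments reusing the example claim from the schema:

```json
{
  "claim": "Apple's FY2024 revenue was $400 billion"
}
```
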
Behavior: 4/5

Without annotations, the description carries the full burden. It discloses that the tool returns a verdict (confirmed, refuted, etc.), extracted structured form, actual value with citation, and percent delta. It also mentions the data source (SEC EDGAR + XBRL). However, it does not explicitly state that the tool is non-destructive or read-only, though this is implied by its fact-checking nature.

Conciseness: 5/5

The description is concise, well-structured, and front-loaded with the core action. Every sentence adds necessary information without redundancy, and the possible verdicts and return values are clearly enumerated.

Completeness: 4/5

Given only one parameter, no output schema, and no annotations, the description is largely complete. It covers input format, output structure, supported domain, and use case. It could be improved by mentioning edge cases (e.g., claims outside the supported domain) or error handling, but as a single-param tool, it is fairly comprehensive.

Parameters: 4/5

Schema description coverage is 100%, so the baseline is 3. The description adds value by providing concrete examples of valid claims ('Apple's FY2024 revenue was $400 billion') and clarifying that the input is natural-language factual claims. This helps the agent understand the expected format beyond the schema's generic description.

Purpose: 5/5

The description clearly states the tool's purpose: fact-checking factual claims against authoritative sources. It specifies the domain (company-financial claims for public US companies) and distinguishes itself by noting it replaces 4-6 sequential calls, implying efficiency compared to potential custom tool chaining.

Usage Guidelines: 5/5

The description explicitly tells when to use the tool with example queries like 'Is it true that…?' and 'Verify the claim that…'. It also provides scope constraints (v1 supports company-financial claims via SEC EDGAR + XBRL). This gives clear guidance on appropriate usage.

