Glama

Server Details

Census Trade MCP — US Census Bureau International Trade data

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-census-trade
GitHub Stars: 0

Tool Descriptions (Grade: A)

Average 3.9/5 across 12 of 12 tools scored. Lowest: 3.2/5.

Server Coherence (Grade: B)
Disambiguation: 2/5

The tool set mixes trade-specific tools (census_*) with generic pipeworx utilities (ask_pipeworx, memory tools). The ask_pipeworx tool is described as a meta-tool that can answer any question, making it ambiguous which tool to use for trade queries. Additionally, compare_entities and resolve_entity deal with companies/drugs, further diluting the trade focus.

Naming Consistency: 3/5

All tool names use snake_case, but the prefix scheme is inconsistent: trade tools use 'census_' while 'compare_entities' and 'resolve_entity' lack a prefix, and generic tools use 'ask_pipeworx' or 'pipeworx_feedback'. This creates a mixed naming pattern.

Tool Count: 4/5

12 tools is a reasonable count, not overwhelming or too sparse. However, the inclusion of generic pipeworx tools makes the set slightly bloated for a server named 'Census Trade'.

Completeness: 3/5

Trade data coverage includes exports, imports, balance, and trends, but lacks tools for tariffs, product categories, or historical comparisons. The inclusion of entity resolution and comparison tools for companies and drugs seems out of scope, leaving trade-specific gaps.

Available Tools (12)
ask_pipeworx (Grade: A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
- question (required): Your question or request in natural language
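For concreteness, the single-parameter schema above maps directly onto an MCP `tools/call` request. A minimal sketch, assuming the standard JSON-RPC shape of MCP tool calls; the helper name and its non-empty check are illustrative, while the tool name and `question` field come from the listing:

```python
import json

def build_ask_pipeworx_request(question: str, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 tools/call payload for ask_pipeworx."""
    if not question.strip():
        # 'question' is the only parameter and it is required.
        raise ValueError("question must be a non-empty string")
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "ask_pipeworx",
            "arguments": {"question": question},
        },
    }

req = build_ask_pipeworx_request("What is the US trade deficit with China?")
print(json.dumps(req, indent=2))
```

The free-form question is the whole interface: the server, not the client, decides which underlying data source answers it.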
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It states Pipeworx picks the right tool and fills arguments, which gives insight into behavior. However, it does not disclose limitations, data freshness, or whether the tool has internet access. Could be more transparent about what 'best available data source' means.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is three sentences with examples, front-loading the main purpose. It is concise but includes valuable examples. Could be slightly more structured (e.g., bullet points) but efficient overall.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given only one parameter, no output schema, and no annotations, the description is fairly complete. It explains the tool's purpose, usage, and behavior. However, without annotations, it would benefit from stating if it is read-only or has side effects. The examples enhance completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with a single parameter 'question' described as 'Your question or request in natural language'. The description adds context with examples of questions, which is helpful beyond the schema. Baseline 3 increased to 4 due to examples enriching the parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool answers plain English questions by selecting the best data source, which is distinct from sibling tools that are specific (e.g., census_trade_balance). The verb 'ask' and resource 'Pipeworx' are clear, but it could better distinguish from discover_tools which also provides information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description explicitly states to use when you want an answer without browsing tools or learning schemas, and provides examples. However, it does not explicitly mention when NOT to use this tool (e.g., for specific tool actions) or alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

census_exports (Grade: A)

Search US export data by HS commodity code (e.g., "8471" for computers) and/or country (e.g., "Mexico"). Returns export values, quantities, and commodity details.

Parameters (JSON Schema)
- year (required): Trade year (e.g., "2024")
- limit (optional): Maximum number of records to return (default 20)
- month (optional): Trade month 01-12. Omit for annual data.
- hs_code (required): HS commodity code at 2, 4, or 6 digit level (e.g., "8471" for computers)
- country_code (optional): Census country code (e.g., "5700" for China). Omit for all countries.

Output Schema (JSON Schema)
- type (required): Trade direction indicator
- count (required): Number of records returned
- period (required): Trade period (year or year-month)
- hs_code (required): HS commodity code queried
- records (required)
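To show how the required and optional parameters above interact, here is a hedged sketch of client-side argument assembly. The validation rules (digit-only HS codes, zero-padded months) are assumptions inferred from the schema's examples, not documented server behavior, and the helper name is invented:

```python
def build_census_exports_args(year, hs_code, country_code=None, month=None, limit=20):
    """Assemble census_exports arguments following the schema's stated shapes."""
    if not (hs_code.isdigit() and len(hs_code) in (2, 4, 6)):
        raise ValueError("hs_code must be 2, 4, or 6 digits (e.g., '8471')")
    if month is not None and month not in {f"{m:02d}" for m in range(1, 13)}:
        raise ValueError("month must be '01' through '12'")
    args = {"year": year, "hs_code": hs_code, "limit": limit}
    if country_code is not None:  # omit for all countries
        args["country_code"] = country_code
    if month is not None:  # omit for annual data
        args["month"] = month
    return args

# Annual computer (HS 8471) exports to Mexico, Census country code 2010
args = build_census_exports_args("2024", "8471", country_code="2010")
```

Omitting `month` and `country_code` simply drops them from the arguments, matching the schema's "omit for annual data / all countries" semantics.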
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description indicates the tool is a read operation that returns data from the US Census Bureau. Since there are no annotations (e.g., destructiveHint or readOnlyHint), the description carries the burden but adequately implies non-destructive behavior. However, it does not disclose any limitations, rate limits, or data freshness details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise at two sentences, front-loading the core purpose. Each sentence adds value: first states what it does, second lists output types. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the moderate complexity (5 parameters, 2 required) and no output schema, the description sufficiently covers the purpose and output. It lacks mention of return limits or pagination, but for a data retrieval tool with sibling tools, it provides adequate context to select the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the baseline is 3. The description adds little beyond the schema: it mentions 'HS commodity code' and 'country' but does not clarify the meaning of 'limit' or 'month' beyond what the schema already states. No significant extra context is provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool searches US export data by HS commodity code and/or country, specifying the returned data types (export values, quantities, and commodity details). It distinguishes itself from sibling tools like 'census_imports' and 'census_trade_balance' by focusing on exports. However, it does not explicitly differentiate from 'census_trade_trends', which may also deal with exports.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving US export data with specific filters (HS code, country, time period) but does not provide explicit guidance on when to use this tool versus alternatives like 'census_imports' or 'census_trade_trends'. No exclusions or when-not-to-use advice is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

census_imports (Grade: A)

Search US import data by HS commodity code (e.g., "8471" for computers) and/or country (e.g., "China"). Returns import values, quantities, and commodity details.

Parameters (JSON Schema)
- year (required): Trade year (e.g., "2024")
- limit (optional): Maximum number of records to return (default 20)
- month (optional): Trade month 01-12 (e.g., "06" for June). Omit for annual data.
- hs_code (required): HS commodity code at 2, 4, or 6 digit level (e.g., "8471" for computers, "87" for vehicles)
- country_code (optional): Census country code (e.g., "5700" for China, "2010" for Mexico). Omit for all countries.

Output Schema (JSON Schema)
- type (required): Trade direction indicator
- count (required): Number of records returned
- period (required): Trade period (year or year-month)
- hs_code (required): HS commodity code queried
- records (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries burden. It discloses return fields (import values, quantities, commodity details, country names) but does not mention any behavioral traits like rate limits, pagination, or data freshness. The description is accurate but incomplete for full transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is a single sentence that is well-front-loaded with the core action and filters. It efficiently conveys what the tool does without unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 5 parameters, all described in schema, the description adequately summarizes inputs and outputs. However, it could mention the optional month vs annual data distinction, which is clear from the schema but not the description.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. Description does not add new meaning beyond what the schema already provides for parameters; it merely summarizes the tool's purpose.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the verb 'Search', the resource 'US import data', and the key filters 'HS commodity code and/or country'. It distinguishes itself from siblings (e.g., census_exports, census_trade_balance) by specifying import data and the fields it returns.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for retrieving US import data, but does not explicitly state when to prefer this over census_exports or census_trade_trends. No exclusion criteria or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

census_trade_balance (Grade: B)

Check US trade balance with a specific country for a given year. Returns net trade value and breakdown by end-use commodity category.

Parameters (JSON Schema)
- year (required): Trade year (e.g., "2024")
- country_code (required): Census country code (e.g., "5700" for China, "2010" for Mexico)

Output Schema (JSON Schema)
- year (required): Trade year
- country (required): Country name
- country_code (required): Census country code
- total_exports_usd (required): Total exports in USD
- total_imports_usd (required): Total imports in USD
- trade_balance_usd (required): Net trade balance (exports minus imports)
- deficit_or_surplus (required): Trade balance classification
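The output fields above imply a simple derivation: the balance is exports minus imports, and its sign drives the classification. A minimal sketch; the exact label strings are assumptions, since the schema only calls the field a "classification":

```python
def classify_trade_balance(total_exports_usd: int, total_imports_usd: int) -> dict:
    """Derive trade_balance_usd and deficit_or_surplus from the two totals."""
    balance = total_exports_usd - total_imports_usd
    return {
        "trade_balance_usd": balance,
        # Label strings are illustrative; the listing does not define them.
        "deficit_or_surplus": "surplus" if balance >= 0 else "deficit",
    }

# Hypothetical totals: $150B exported, $430B imported
result = classify_trade_balance(150_000_000_000, 430_000_000_000)
```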
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description bears full burden. It discloses the tool aggregates using end-use commodity categories, but does not mention data freshness, potential errors, or return format. Adequate but minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, efficient and front-loaded with the core purpose. No fluff, though it could mention that year is a string, if that is not obvious.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and moderate complexity, the description is adequate but incomplete: does not specify if trade balance is in USD, if the result is a single number or a breakdown, or if data is available for all years. An output schema would help.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds no additional parameter context beyond what the schema provides (e.g., no examples of country codes beyond those in schema). Neutral.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it gets the US trade balance with a specific country for a given year, using end-use commodity categories. It distinguishes from siblings like census_exports and census_imports by focusing on the balance.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for retrieving trade balance data but does not explicitly state when to use this tool vs alternatives. Siblings include exports, imports, and trends, but no guidance on selection is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_entities (Grade: A)

Compare 2–5 entities side by side in one call. type="company": revenue, net income, cash, long-term debt from SEC EDGAR. type="drug": adverse-event report count, FDA approval count, active trial count. Returns paired data + pipeworx:// resource URIs. Replaces 8–15 sequential agent calls.

Parameters (JSON Schema)
- type (required): Entity type: "company" or "drug".
- values (required): For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]).
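The 2–5 cardinality and the two-valued type enum above are easy to enforce client-side before the call. A hypothetical pre-flight check; the helper name and error messages are invented for illustration:

```python
def build_compare_entities_args(entity_type, values):
    """Validate compare_entities arguments against the documented schema."""
    if entity_type not in ("company", "drug"):
        raise ValueError('type must be "company" or "drug"')
    if not 2 <= len(values) <= 5:
        raise ValueError("values must contain 2-5 entries")
    return {"type": entity_type, "values": list(values)}

args = build_compare_entities_args("company", ["AAPL", "MSFT"])
```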
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description must fully disclose behavior. It states it 'returns paired data + pipeworx:// resource URIs' and explains data sources (SEC EDGAR for companies, FDA-related for drugs). It does not mention permissions, rate limits, or any side effects, but as a read-only comparison tool, this is acceptable. The description is adequate but not exhaustive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is four sentences, each dense with information. It front-loads the core purpose and then details specifics per type. No redundant or filler content. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description explains the return format (paired data + URIs) and what metrics are included for each entity type. It covers both use cases. It could optionally mention data freshness or limitations, but overall it provides sufficient context for an agent to invoke correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents parameters. The description adds value by explaining the meaning of each 'type' value and providing example formats for 'values' (tickers/CIKs for companies, drug names). This goes beyond the schema's minimal descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'compare' and specifies the resource (2–5 entities of type company or drug). It lists exact data points for each type (revenue, net income, etc. for companies; adverse-event counts, FDA approvals, trials for drugs). It distinguishes from siblings by noting it replaces 8–15 sequential agent calls, implying it is more efficient than individual lookups.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context: for comparing multiple entities efficiently. The phrase 'Replaces 8–15 sequential agent calls' suggests it should be used instead of multiple calls to other tools. It does not explicitly state when not to use, but the purpose is clear enough for an agent to decide. No explicit alternative tools are named, but the sibling list provides context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (Grade: A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
- limit (optional): Maximum number of tools to return (default 20, max 50)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
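The default and max bounds on limit above suggest clamping before the call. A small hedged sketch; clamping client-side is an assumption, and the server may instead reject out-of-range values:

```python
def build_discover_tools_args(query: str, limit: int = 20) -> dict:
    """Assemble discover_tools arguments, keeping limit within 1-50."""
    if not query:
        raise ValueError("query is required")
    return {"query": query, "limit": min(max(limit, 1), 50)}

args = build_discover_tools_args("find trade data between countries", limit=100)
```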
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses that the tool searches a catalog and returns tool names and descriptions, which is the core behavior. It also hints at the scope ('500+ tools'). However, it does not mention any rate limits, auth requirements, or side effects. Since this is a search tool, destructive behavior is not expected, but transparency is still good.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the key purpose, and contains no wasted words. It is well-structured for an agent to quickly understand.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool is a search/discovery tool with no output schema, the description explains what it returns ('most relevant tools with names and descriptions') and when to use it. It is complete enough for an agent to invoke correctly. Lacks info on whether results are ranked, but that is a minor gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for both 'query' and 'limit' parameters. The description adds a brief note about default and max for limit but does not add significant meaning beyond the schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Search' and the resource 'Pipeworx tool catalog'. It specifies the purpose: finding relevant tools by describing what you need, and distinguishes itself by telling the agent to call this FIRST when many tools are available.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task', providing a clear directive on when to use this tool. It implies that this tool is for discovery before invoking other tools, distinguishing it from sibling tools that perform specific operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (Grade: B)

Delete a stored memory by key.

Parameters (JSON Schema)
- key (required): Memory key to delete
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It states the action but does not disclose side effects, irreversibility, or authorization needs. For a deletion tool, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no redundancy. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema and no annotations, the description should provide more behavioral detail. It is too minimal for a deletion tool that could have irreversible effects.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers 100% of parameters, so baseline is 3. Description does not add meaning beyond schema; it merely restates 'key' as 'Memory key to delete'. No additional value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses a clear verb ('Delete') and resource ('stored memory by key'), immediately distinguishing it from sibling tools like 'recall' (retrieve) and 'remember' (store).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives; no mention of prerequisites or safety considerations. Description is purely functional without context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pipeworx_feedback (Grade: A)

Send feedback to the Pipeworx team. Use for bug reports, feature requests, missing data, or praise. Describe what you tried in terms of Pipeworx tools/data — do not include the end-user's prompt verbatim. Rate-limited to 5 messages per identifier per day. Free.

Parameters (JSON Schema)
- type (required): bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else.
- context (optional): Optional structured context: which tool, pack, or vertical this relates to.
- message (required): Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max.
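The enum values and the 2000-character cap above can be checked before sending. A hypothetical validator; the helper itself is illustrative, while the field names, enum values, and length limit come from the schema:

```python
VALID_FEEDBACK_TYPES = {"bug", "feature", "data_gap", "praise", "other"}

def build_feedback_args(feedback_type, message, context=None):
    """Validate pipeworx_feedback arguments against the documented schema."""
    if feedback_type not in VALID_FEEDBACK_TYPES:
        raise ValueError(f"type must be one of {sorted(VALID_FEEDBACK_TYPES)}")
    if not message or len(message) > 2000:
        raise ValueError("message is required, 2000 chars max")
    args = {"type": feedback_type, "message": message}
    if context is not None:
        args["context"] = context
    return args

args = build_feedback_args("bug", "census_exports returned empty records for hs_code 8471")
```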
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses the rate limit and the instruction to avoid including prompts. However, it does not specify the outcome of sending feedback (e.g., whether a response is expected), if the action is synchronous or asynchronous, or any side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise: three short sentences. The first sentence states the core purpose. The second lists use cases. The third gives critical constraints and rate limit. No unnecessary words, front-loaded with the most important info.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple feedback tool with no output schema, the description covers purpose, usage scenarios, parameter guidelines, and rate limits. While it does not describe the return value (e.g., success confirmation), this is acceptable given the tool's straightforward nature. The nested object in schema is not elaborated, but it's optional and self-explanatory.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds value by mapping the 'type' enum to real-world use cases (bug, feature, data_gap, praise) and advising on what to include in the 'message' (describe tools/data tried, avoid prompt verbatim). This enhances understanding beyond the raw schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Send feedback to the Pipeworx team.' It lists specific use cases (bug reports, feature requests, missing data, praise) and distinguishes itself from sibling tools (which focus on data querying and recall).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage guidance: use for bug reports, features, missing data, or praise. It includes specific instructions (describe what you tried in Pipeworx tools/data, do not include end-user prompt verbatim) and mentions rate limits (5 per day). However, it does not explicitly contrast against alternatives or mention when not to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (Grade: A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
- key (optional): Memory key to retrieve (omit to list all keys)
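The retrieve-or-list behavior described above, together with the remember and forget siblings, can be modeled as a tiny key-value store. A toy sketch of the assumed semantics; actual persistence and missing-key behavior are not documented in the listing:

```python
class MemoryStore:
    """Toy model of the remember / recall / forget tool trio."""

    def __init__(self):
        self._store = {}

    def remember(self, key, value):
        self._store[key] = value

    def recall(self, key=None):
        # Omitting key lists all stored keys, as the description states.
        if key is None:
            return sorted(self._store)
        return self._store.get(key)  # None for a missing key (assumed)

    def forget(self, key):
        # Silent on missing keys (assumed; the listing does not say).
        self._store.pop(key, None)

store = MemoryStore()
store.remember("session_country", "China")
keys = store.recall()
```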
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool retrieves previously stored memories and that omitting the key lists all memories. However, it doesn't mention potential side effects (none likely), performance implications, or whether retrieval is read-only. For a simple retrieval tool, this is adequate but not detailed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loading the core action and then adding an alternative use case. It is concise but could be slightly more structured by separating retrieval and listing into distinct usage notes.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (single optional parameter, no output schema), the description is largely complete. It explains both retrieval modes. However, it doesn't describe the output format or what happens if the key doesn't exist, which could be useful for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds value by explaining the behavior when the parameter is omitted ('list all stored memories'), which is not obvious from the schema alone. This goes beyond the schema definition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: retrieving a memory by key or listing all memories when key is omitted. It distinguishes itself from 'remember' (store) and 'forget' (delete).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance: use when you need to retrieve previously saved context, and how to list all keys by omitting the parameter. However, it doesn't contrast with siblings like 'forget' or 'remember' in terms of when not to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text: findings, addresses, preferences, notes)
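A store call can be sketched the same way. This assumes the standard MCP `tools/call` JSON-RPC shape; the key follows the schema's suggested naming, and the value text is made up for illustration (persistence, per the description, is handled server-side).

```python
import json

# Hypothetical MCP "tools/call" payload for the remember tool (sketch only).
# Both "key" and "value" are required by the schema.
remember_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "remember",
        "arguments": {
            "key": "target_ticker",               # example key from the schema docs
            "value": "AAPL, watching Q3 earnings",  # any free text is accepted
        },
    },
}

print(json.dumps(remember_call["params"]["arguments"], indent=2))
```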
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses persistence behavior: 'Authenticated users get persistent memory; anonymous sessions last 24 hours'. No annotations provided, so description carries full burden, which it meets well.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each adding distinct value: purpose, use cases, persistence behavior. No wasted words. Front-loaded with main action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple store tool with no output schema, description covers purpose, usage, and important behavioral detail (persistence). Could mention that keys are case-sensitive or naming conventions, but not necessary for basic use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. Description adds example values for key ('subject_property', etc.) and clarifies value accepts any text, but does not add meaning beyond schema's own descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description explicitly states 'Store a key-value pair in your session memory', with clear verb 'store' and resource 'session memory'. Differentiates from sibling 'recall' (retrieve) and 'forget' (delete) by its write nature.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

States use cases: 'save intermediate findings, user preferences, or context across tool calls'. Does not explicitly say when NOT to use or list alternatives, but siblings 'forget' and 'recall' cover complementary operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

resolve_entity (A)

Resolve an entity to canonical IDs across Pipeworx data sources in a single call. Supports type="company" (ticker/CIK/name → SEC EDGAR identity) and type="drug" (brand or generic name → RxCUI + ingredient + brand). Returns IDs and pipeworx:// resource URIs for stable citation. Replaces 2–3 lookup calls.

Parameters (JSON Schema)
- type (required): Entity type: "company" or "drug".
- value (required): For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin").
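Both entity types can be sketched as `tools/call` payloads. This assumes the standard MCP JSON-RPC shape; the `type` enum values and `value` formats come from the schema above.

```python
import json

# Hypothetical MCP "tools/call" payloads for resolve_entity (sketch only).
resolve_company = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "resolve_entity",
        # "value" may be a ticker, CIK, or company name per the schema
        "arguments": {"type": "company", "value": "AAPL"},
    },
}

resolve_drug = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "resolve_entity",
        # brand or generic name is accepted for drugs
        "arguments": {"type": "drug", "value": "ozempic"},
    },
}

print(json.dumps([resolve_company, resolve_drug], indent=2))
```

Since the description says the tool replaces 2–3 lookup calls, a single payload like either of these stands in for a multi-step ticker-to-CIK or brand-to-RxCUI chain.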
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses the supported entity types and the output format, but does not discuss error behavior, rate limits, or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four focused sentences with no redundancy. Front-loaded with the core action and followed by specific details. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 2 parameters and no output schema, the description explains return fields and references alternative approaches. Missing error handling or constraints, but adequate for the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, and the description adds value by explaining the accepted formats for 'value' and the allowed values for 'type', reinforcing the enum meaning.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it resolves an entity to canonical IDs across Pipeworx data sources, provides examples of input formats, and frames it as a replacement for 2-3 lookup calls, distinguishing it from siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies efficiency by noting it replaces multiple calls, but does not explicitly state when not to use or list alternatives. The sibling tools offer search or recall functions, but no direct comparison is made.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
