Census Trade
Server Details
Census Trade MCP — US Census Bureau International Trade data
- Status
- Healthy
- Last Tested
- Transport
- Streamable HTTP
- URL
- Repository
- pipeworx-io/mcp-census-trade
- GitHub Stars
- 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 12 of 12 tools scored. Lowest: 3.2/5.
The tool set mixes trade-specific tools (census_*) with generic pipeworx utilities (ask_pipeworx, memory tools). The ask_pipeworx tool is described as a meta-tool that can answer any question, making it ambiguous which tool to use for trade queries. Additionally, compare_entities and resolve_entity deal with companies/drugs, further diluting the trade focus.
All tool names use snake_case, but the prefix scheme is inconsistent: trade tools use 'census_' while 'compare_entities' and 'resolve_entity' lack a prefix, and generic tools use 'ask_pipeworx' or 'pipeworx_feedback'. This creates a mixed naming pattern.
12 tools is a reasonable count, not overwhelming or too sparse. However, the inclusion of generic pipeworx tools makes the set slightly bloated for a server named 'Census Trade'.
Trade data coverage includes exports, imports, balance, and trends, but lacks tools for tariffs, product categories, or historical comparisons. The inclusion of entity resolution and comparison tools for companies and drugs seems out of scope, leaving trade-specific gaps.
Available Tools
12 tools
ask_pipeworx (Grade A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
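To make the call shape concrete, here is a minimal sketch of the JSON-RPC tools/call request an MCP client would send to this tool. The envelope follows the MCP specification; the question is one of the examples from the description above, and the request id is arbitrary.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
```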
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It states Pipeworx picks the right tool and fills arguments, which gives insight into behavior. However, it does not disclose limitations, data freshness, or whether the tool has internet access. Could be more transparent about what 'best available data source' means.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is three sentences with examples, front-loading the main purpose. It is concise but includes valuable examples. Could be slightly more structured (e.g., bullet points) but efficient overall.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given only one parameter, no output schema, and no annotations, the description is fairly complete. It explains the tool's purpose, usage, and behavior. However, without annotations, it would benefit from stating if it is read-only or has side effects. The examples enhance completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'question' described as 'Your question or request in natural language'. The description adds context with example questions, which is helpful beyond the schema. The baseline of 3 is raised to 4 because the examples enrich understanding of the parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool answers plain English questions by selecting the best data source, which is distinct from sibling tools that are specific (e.g., census_trade_balance). The verb 'ask' and resource 'Pipeworx' are clear, but it could better distinguish from discover_tools which also provides information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description explicitly states to use when you want an answer without browsing tools or learning schemas, and provides examples. However, it does not explicitly mention when NOT to use this tool (e.g., for specific tool actions) or alternatives among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
census_exports (Grade A)
Search US export data by HS commodity code (e.g., "8471" for computers) and/or country (e.g., "Mexico"). Returns export values, quantities, and commodity details.
| Name | Required | Description | Default |
|---|---|---|---|
| year | Yes | Trade year (e.g., "2024") | |
| limit | No | Maximum number of records to return (default 20) | |
| month | No | Trade month 01-12. Optional — omit for annual data. | |
| hs_code | Yes | HS commodity code at 2, 4, or 6 digit level (e.g., "8471" for computers) | |
| country_code | No | Census country code (e.g., "5700" for China). Optional — omit for all countries. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| type | Yes | Trade direction indicator |
| count | Yes | Number of records returned |
| period | Yes | Trade period (year or year-month) |
| hs_code | Yes | HS commodity code queried |
| records | Yes | |
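As an illustration, the params block of a tools/call request for an annual export query might look like the sketch below (same JSON-RPC envelope as shown for ask_pipeworx). The HS and country codes are the examples from the schema; the limit value is arbitrary.

```json
{
  "name": "census_exports",
  "arguments": {
    "year": "2024",
    "hs_code": "8471",
    "country_code": "5700",
    "limit": 10
  }
}
```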
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description indicates the tool is a read operation that returns data from the US Census Bureau. Since there are no annotations (e.g., destructiveHint or readOnlyHint), the description carries the burden but adequately implies non-destructive behavior. However, it does not disclose any limitations, rate limits, or data freshness details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at two sentences, front-loading the core purpose. Each sentence adds value: first states what it does, second lists output types. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the moderate complexity (5 parameters, 2 required), the description sufficiently covers the purpose and output. It lacks mention of return limits or pagination, but for a data retrieval tool with sibling tools, it provides adequate context for selecting the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the baseline is 3. The description adds little beyond the schema: it mentions 'HS commodity code' and 'country' but does not clarify the meaning of 'limit' or 'month' beyond what the schema already states. No significant extra context is provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool searches US export data by HS commodity code and/or country, specifying the returned data types (export values, quantities, commodity details). It is distinguished from sibling tools like 'census_imports' and 'census_trade_balance' by its focus on exports. However, it does not explicitly differentiate itself from 'census_trade_trends', which may also cover exports.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving US export data with specific filters (HS code, country, time period) but does not provide explicit guidance on when to use this tool versus alternatives like 'census_imports' or 'census_trade_trends'. No exclusions or when-not-to-use advice is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
census_imports (Grade A)
Search US import data by HS commodity code (e.g., "8471" for computers) and/or country (e.g., "China"). Returns import values, quantities, and commodity details.
| Name | Required | Description | Default |
|---|---|---|---|
| year | Yes | Trade year (e.g., "2024") | |
| limit | No | Maximum number of records to return (default 20) | |
| month | No | Trade month 01-12 (e.g., "06" for June). Optional — omit for annual data. | |
| hs_code | Yes | HS commodity code at 2, 4, or 6 digit level (e.g., "8471" for computers, "87" for vehicles) | |
| country_code | No | Census country code (e.g., "5700" for China, "2010" for Mexico). Optional — omit for all countries. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| type | Yes | Trade direction indicator |
| count | Yes | Number of records returned |
| period | Yes | Trade period (year or year-month) |
| hs_code | Yes | HS commodity code queried |
| records | Yes | |
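A sketch of the params for a monthly import query, reusing the same envelope; all argument values are the examples given in the schema above.

```json
{
  "name": "census_imports",
  "arguments": {
    "year": "2024",
    "month": "06",
    "hs_code": "87",
    "country_code": "2010"
  }
}
```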
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries the burden. It discloses the return fields (import values, quantities, commodity details) but does not mention behavioral traits like rate limits, pagination, or data freshness. The description is accurate but incomplete for full transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is two short sentences, well front-loaded with the core action and filters. It efficiently conveys what the tool does without unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters, all described in the schema, the description adequately summarizes inputs and outputs. However, it could mention the optional month versus annual-data distinction, which is clear from the schema but not the description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description does not add new meaning beyond what the schema already provides for parameters; it merely summarizes the tool's purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the verb 'Search', the resource 'US import data', and the key filters 'HS commodity code and/or country'. It distinguishes itself from siblings (e.g., census_exports, census_trade_balance) by specifying import data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for retrieving US import data, but does not explicitly state when to prefer this over census_exports or census_trade_trends. No exclusion criteria or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
census_trade_balance (Grade B)
Check US trade balance with a specific country for a given year. Returns net trade value and breakdown by end-use commodity category.
| Name | Required | Description | Default |
|---|---|---|---|
| year | Yes | Trade year (e.g., "2024") | |
| country_code | Yes | Census country code (e.g., "5700" for China, "2010" for Mexico) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| year | Yes | Trade year |
| country | Yes | Country name |
| country_code | Yes | Census country code |
| total_exports_usd | Yes | Total exports in USD |
| total_imports_usd | Yes | Total imports in USD |
| trade_balance_usd | Yes | Net trade balance (exports minus imports) |
| deficit_or_surplus | Yes | Trade balance classification |
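Since both parameters are required, a minimal params sketch is straightforward; the country code is the China example from the schema.

```json
{
  "name": "census_trade_balance",
  "arguments": {
    "year": "2024",
    "country_code": "5700"
  }
}
```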
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description bears the full burden. It discloses that the tool breaks results down by end-use commodity category, but does not mention data freshness, potential errors, or return format. Adequate but minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, efficient and front-loaded with the core purpose. No fluff, though it could note that the year is passed as a string.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given moderate complexity, the description is adequate but incomplete: it does not specify whether data is available for all years, and leaves the currency and result structure to the output schema, which does document USD totals and the balance classification.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds no additional parameter context beyond what the schema provides (e.g., no examples of country codes beyond those in schema). Neutral.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that it checks the US trade balance with a specific country for a given year, broken down by end-use commodity category. It is distinguished from siblings like census_exports and census_imports by its focus on the balance.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for retrieving trade balance data but does not explicitly state when to use this tool vs alternatives. Siblings include exports, imports, and trends, but no guidance on selection is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
census_trade_trends (Grade B)
Get monthly US trade trends for a commodity and/or country over time. Returns month-by-month values to identify seasonal patterns and shifts.
| Name | Required | Description | Default |
|---|---|---|---|
| hs_code | No | HS commodity code. Optional — omit for aggregate trade. | |
| end_year | Yes | End year (e.g., "2024") | |
| start_year | Yes | Start year (e.g., "2022") | |
| country_code | No | Census country code. Optional — omit for all countries. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| months | Yes | Number of monthly data points |
| trends | Yes | |
| hs_code | Yes | HS commodity code or 'all' for aggregate |
| end_year | Yes | End year of trend range |
| start_year | Yes | Start year of trend range |
| country_code | Yes | Census country code or 'all' for all countries |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It mentions that the tool gets trends and shows changes, but does not state whether it is read-only or whether rate limits apply. This is thin for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words. Efficiently conveys purpose and key optional parameters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations, the description does not explain behavior such as whether data is aggregated or raw, or whether results are paginated. It is incomplete for a tool with 4 parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description adds that hs_code and country_code are optional, but the schema already states that. No additional semantic meaning beyond the schema is provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool gets monthly US trade trends over a period and shows how trade values change month by month. It mentions optional filtering by commodity and/or country, distinguishing it from sibling tools like census_exports and census_imports, though not explicitly.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for monthly trend analysis but does not specify when to use this versus other trade tools (e.g., census_trade_balance). No explicit when-not-to-use or alternative guidance is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities (Grade A)
Compare 2–5 entities side by side in one call. type="company": revenue, net income, cash, long-term debt from SEC EDGAR. type="drug": adverse-event report count, FDA approval count, active trial count. Returns paired data + pipeworx:// resource URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). | |
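A params sketch for a two-company comparison, using the ticker examples from the schema.

```json
{
  "name": "compare_entities",
  "arguments": {
    "type": "company",
    "values": ["AAPL", "MSFT"]
  }
}
```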
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description must fully disclose behavior. It states it 'returns paired data + pipeworx:// resource URIs' and explains data sources (SEC EDGAR for companies, FDA-related for drugs). It does not mention permissions, rate limits, or any side effects, but as a read-only comparison tool, this is acceptable. The description is adequate but not exhaustive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact, with each sentence dense with information. It front-loads the core purpose and then details specifics per type. No redundant or filler content; every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description explains the return format (paired data + URIs) and what metrics are included for each entity type. It covers both use cases. It could optionally mention data freshness or limitations, but overall it provides sufficient context for an agent to invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents parameters. The description adds value by explaining the meaning of each 'type' value and providing example formats for 'values' (tickers/CIKs for companies, drug names). This goes beyond the schema's minimal descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'compare' and specifies the resource (2–5 entities of type company or drug). It lists exact data points for each type (revenue, net income, etc. for companies; adverse-event counts, FDA approvals, trials for drugs). It distinguishes from siblings by noting it replaces 8–15 sequential agent calls, implying it is more efficient than individual lookups.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context: for comparing multiple entities efficiently. The phrase 'Replaces 8–15 sequential agent calls' suggests it should be used instead of multiple calls to other tools. It does not explicitly state when not to use, but the purpose is clear enough for an agent to decide. No explicit alternative tools are named, but the sibling list provides context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
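A params sketch for a discovery query; the query string is one of the schema's examples, and the limit value is arbitrary.

```json
{
  "name": "discover_tools",
  "arguments": {
    "query": "find trade data between countries",
    "limit": 5
  }
}
```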
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses that the tool searches a catalog and returns tool names and descriptions, which is the core behavior. It also hints at the scope ('500+ tools'). However, it does not mention any rate limits, auth requirements, or side effects. Since this is a search tool, destructive behavior is not expected, but transparency is still good.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the key purpose, and contains no wasted words. It is well-structured for an agent to quickly understand.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool is a search/discovery tool with no output schema, the description explains what it returns ('most relevant tools with names and descriptions') and when to use it. It is complete enough for an agent to invoke correctly. Lacks info on whether results are ranked, but that is a minor gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with descriptions for both 'query' and 'limit'. The schema already notes the default and max for 'limit'; the description does not add significant meaning beyond it. The baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Search' and the resource 'Pipeworx tool catalog'. It specifies the purpose: finding relevant tools by describing what you need, and distinguishes itself by telling the agent to call this FIRST when many tools are available.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task', providing a clear directive on when to use this tool. It implies that this tool is for discovery before invoking other tools, distinguishing it from sibling tools that perform specific operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade B)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
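A params sketch for deleting a memory; the key shown is borrowed from the examples in the remember tool's schema.

```json
{
  "name": "forget",
  "arguments": {
    "key": "target_ticker"
  }
}
```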
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It states the action but does not disclose side effects, irreversibility, or authorization needs. For a deletion tool, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with no redundancy. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema and no annotations, the description should provide more behavioral detail. It is too minimal for a deletion tool that could have irreversible effects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of parameters, so baseline is 3. Description does not add meaning beyond schema; it merely restates 'key' as 'Memory key to delete'. No additional value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses a clear verb ('Delete') and resource ('stored memory by key'), immediately distinguishing it from sibling tools like 'recall' (retrieve) and 'remember' (store).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives; no mention of prerequisites or safety considerations. Description is purely functional without context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback (Grade A)
Send feedback to the Pipeworx team. Use for bug reports, feature requests, missing data, or praise. Describe what you tried in terms of Pipeworx tools/data — do not include the end-user's prompt verbatim. Rate-limited to 5 messages per identifier per day. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. | |
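A params sketch for a data-gap report. The message is illustrative, and since the shape of the optional context object is not documented, the key shown inside it is hypothetical.

```json
{
  "name": "pipeworx_feedback",
  "arguments": {
    "type": "data_gap",
    "message": "census_exports returns no tariff data for HS 8471.",
    "context": { "tool": "census_exports" }
  }
}
```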
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses the rate limit and the instruction to avoid including prompts. However, it does not specify the outcome of sending feedback (e.g., whether a response is expected), if the action is synchronous or asynchronous, or any side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise: a handful of short sentences. The first states the core purpose, the second lists use cases, and the rest give the key constraint, the rate limit, and cost. No unnecessary words, and the most important information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple feedback tool with no output schema, the description covers purpose, usage scenarios, parameter guidelines, and rate limits. While it does not describe the return value (e.g., success confirmation), this is acceptable given the tool's straightforward nature. The nested object in schema is not elaborated, but it's optional and self-explanatory.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by mapping the 'type' enum to real-world use cases (bug, feature, data_gap, praise) and advising on what to include in the 'message' (describe tools/data tried, avoid prompt verbatim). This enhances understanding beyond the raw schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Send feedback to the Pipeworx team.' It lists specific use cases (bug reports, feature requests, missing data, praise) and distinguishes itself from sibling tools (which focus on data querying and recall).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage guidance: use for bug reports, features, missing data, or praise. It includes specific instructions (describe what you tried in Pipeworx tools/data, do not include end-user prompt verbatim) and mentions rate limits (5 per day). However, it does not explicitly contrast against alternatives or mention when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
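Because the key is optional, a params sketch with empty arguments lists all stored keys, per the description above; pass a key to retrieve a single memory.

```json
{
  "name": "recall",
  "arguments": {}
}
```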
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool retrieves previously stored memories and that omitting the key lists all memories. However, it doesn't mention potential side effects (none likely), performance implications, or whether retrieval is read-only. For a simple retrieval tool, this is adequate but not detailed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loading the core action and then adding an alternative use case. It is concise but could be slightly more structured by separating retrieval and listing into distinct usage notes.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single optional parameter, no output schema), the description is largely complete. It explains both retrieval modes. However, it doesn't describe the output format or what happens if the key doesn't exist, which could be useful for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds value by explaining the behavior when the parameter is omitted ('list all stored memories'), which is not obvious from the schema alone. This goes beyond the schema definition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving a memory by key or listing all memories when key is omitted. It distinguishes itself from 'remember' (store) and 'forget' (delete).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance: use when you need to retrieve previously saved context, and how to list all keys by omitting the parameter. However, it doesn't contrast with siblings like 'forget' or 'remember' in terms of when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
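A params sketch for storing a memory; the key is one of the schema's examples and the value is illustrative.

```json
{
  "name": "remember",
  "arguments": {
    "key": "target_ticker",
    "value": "AAPL"
  }
}
```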
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses persistence behavior: 'Authenticated users get persistent memory; anonymous sessions last 24 hours'. No annotations provided, so description carries full burden, which it meets well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each adding distinct value: purpose, use cases, persistence behavior. No wasted words. Front-loaded with main action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple store tool with no output schema, the description covers purpose, usage, and the important behavioral detail of persistence. It could mention whether keys are case-sensitive or suggest naming conventions, but that is not necessary for basic use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description adds example values for key ('subject_property', etc.) and clarifies value accepts any text, but does not add meaning beyond schema's own descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states 'Store a key-value pair in your session memory', with clear verb 'store' and resource 'session memory'. Differentiates from sibling 'recall' (retrieve) and 'forget' (delete) by its write nature.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
States use cases: 'save intermediate findings, user preferences, or context across tool calls'. Does not explicitly say when NOT to use or list alternatives, but siblings 'forget' and 'recall' cover complementary operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity (Grade A)
Resolve an entity to canonical IDs across Pipeworx data sources in a single call. Supports type="company" (ticker/CIK/name → SEC EDGAR identity) and type="drug" (brand or generic name → RxCUI + ingredient + brand). Returns IDs and pipeworx:// resource URIs for stable citation. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). | |
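A params sketch for resolving a drug name, using the schema's example value.

```json
{
  "name": "resolve_entity",
  "arguments": {
    "type": "drug",
    "value": "ozempic"
  }
}
```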
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses the supported entity types, the data sources, and the output format, but does not discuss error behavior, rate limits, or side effects beyond what is described.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three focused sentences with no redundancy. Front-loaded with the core action and followed by specific details. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 2 parameters and no output schema, the description explains the return fields and notes that it replaces 2–3 lookup calls. Error handling and constraints are missing, but it is adequate for the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, and the description adds value by explaining the accepted formats for 'value' and the meaning of each 'type' value, reinforcing the enum semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it resolves an entity to canonical IDs across Pipeworx data sources, provides examples of input formats, and frames it as a replacement for 2-3 lookup calls, distinguishing it from siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies efficiency by noting it replaces multiple calls, but does not explicitly state when not to use or list alternatives. The sibling tools offer search or recall functions, but no direct comparison is made.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.