Glama

Server Details

Treasury Fiscal MCP — US Treasury Fiscal Data API

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-treasury-fiscal
GitHub Stars: 0

Tool Descriptions (Grade: A)

Average 4/5 across 9 of 9 tools scored.

Server Coherence (Grade: B)
Disambiguation: 3/5

There is overlap between the general-purpose ask_pipeworx and the specific treasury_* tools. The ask_pipeworx tool is described as a natural language interface that can handle treasury queries, which could conflict with the dedicated treasury tools. However, most tools have distinct purposes (memory vs. data retrieval, different treasury datasets).

Naming Consistency: 3/5

The treasury tools follow a consistent 'treasury_' prefix pattern, but the memory tools use different verbs (remember, recall, forget) and ask_pipeworx and discover_tools break the pattern. The naming style is a mix of lowercase snake_case and plain words.

Tool Count: 4/5

9 tools is a reasonable number for a server that covers both a general AI query tool and specific Treasury data endpoints. The count feels well-scoped, not excessive or too few.

Completeness: 3/5

The treasury tools cover customs revenue, debt, exchange rates, and receipts, but are missing other Treasury data like bond yields, T-bill rates, or detailed deficit/spending breakdowns. The memory and tool discovery features add generality but are not specific to the domain.

Available Tools

9 tools
ask_pipeworx (Grade: A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
question (required): Your question or request in natural language
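A call to this tool can be sketched as an MCP `tools/call` request. The JSON-RPC payload below is an illustration only; the `id` value and the Streamable HTTP framing are handled by whatever MCP client connects to the server.

```python
import json

# Sketch of the JSON-RPC 2.0 payload an MCP client would send for ask_pipeworx.
# The id and transport framing are illustrative; real clients manage both.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}
print(json.dumps(request, indent=2))
```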
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description reveals that Pipeworx selects the appropriate tool and fills arguments automatically, which is a key behavioral trait. It also indicates that the answer is returned directly. Without annotations, this disclosure is valuable. However, it does not specify limitations (e.g., data freshness, latency, or behavior when a question is out of scope).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is relatively concise, using three sentences to convey purpose, behavior, and usage. The inclusion of examples is helpful. It could be slightly more structured, but the key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple input schema (one required string parameter) and no output schema, the description provides sufficient context for an agent to understand the tool's purpose and usage. It could mention the scope of data sources or fallback behavior for completeness, but the current description is adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds significant meaning by explaining that the parameter 'question' should be a natural language description of what the user needs, with examples provided. This goes beyond the schema's simple 'Your question or request in natural language'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: answering questions in plain English using the best available data source. It distinguishes itself from sibling tools by acting as a unified query interface, contrasting with more specific tools like treasury_customs_revenue or discover_tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear guidance on when to use this tool: whenever you have a natural language question or request. It advises against manually browsing tools or learning schemas. However, it does not explicitly state when not to use it, such as when a sibling tool would be more appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (Grade: A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
limit (optional): Maximum number of tools to return (default 20, max 50)
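A discovery call can be sketched the same way; the payload below is illustrative, with `limit` included only to show the optional parameter (the schema states a default of 20 and a maximum of 50).

```python
import json

# Illustrative tools/call payload for discover_tools. "limit" is optional;
# omitting it would fall back to the schema's documented default of 20.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "discover_tools",
        "arguments": {"query": "find trade data between countries", "limit": 5},
    },
}
print(json.dumps(request, indent=2))
```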
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must convey behavior. It states that it searches and returns results, but does not mention any side effects or limitations (e.g., no filtering beyond query, no state changes). The description is accurate but could be more detailed about the search mechanism.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, front-loaded with the action and purpose. It is concise but includes a useful usage hint. One minor improvement could be omitting the example query detail if it's already in the schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (search with query and optional limit), the description is nearly complete. It explains the purpose, when to call it, and what it returns. It could mention the default limit and max explicitly, but those are in the schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already documents both parameters. The description does not add extra meaning beyond what the schema provides (e.g., query examples are in schema description). Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Search' and the resource 'Pipeworx tool catalog', and specifies that it returns tools with names and descriptions. It distinguishes itself from siblings by being the tool for discovering tools among 500+ options.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly advises 'Call this FIRST' when many tools are available, providing clear context for when to use this tool versus alternatives. It also gives an example of the query format.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (Grade: A)

Delete a stored memory by key.

Parameters (JSON Schema)
key (required): Memory key to delete
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It clearly states that the tool deletes (destructive action), which is transparent. However, it does not mention any additional behavioral traits like irreversibility, authorization requirements, or side effects. For a destructive operation, this is adequate but could be more explicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single six-word sentence, perfectly concise with no wasted information. It is front-loaded and every word adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool is simple (one required parameter, no output schema, no nested objects), the description is largely complete. It would benefit from noting that deletion is permanent, but the context is sufficient for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the parameter. The description 'Memory key to delete' is consistent with the schema description. However, the description adds no additional meaning beyond what the schema provides, earning a baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Delete a stored memory by key' is a specific verb+resource combination that clearly states what the tool does. It distinguishes this tool from siblings like 'remember' (create) and 'recall' (retrieve) by focusing on deletion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when you need to delete a specific memory, but it does not explicitly state when not to use this tool or mention alternatives. Given the sibling 'recall' for retrieval and 'remember' for storage, the context is implied but not stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (Grade: A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It clearly states the tool is for retrieval only (non-destructive) and implies persistence across sessions, but does not mention any rate limits or potential side effects. It is accurate and adds value beyond the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words. Front-loaded with the core action ('Retrieve... or list...'), then adds context. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given a simple tool with one optional parameter and no output schema, the description is complete enough. It explains both modes of use and the purpose. No gaps are evident for the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the parameter well. The description adds context by explaining the behavior when key is omitted (list all), which is a useful semantic beyond the schema's description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('retrieve', 'list') and resources ('memory', 'memories'), clearly distinguishing between single key retrieval and listing all keys. It differentiates itself from sibling tools like 'remember' and 'forget'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to omit the key ('list all stored memories') and provides context for use ('retrieve context you saved earlier'), but does not explicitly say when not to use it or mention alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (Grade: A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text — findings, addresses, preferences, notes)
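The three memory tools form a natural round trip. The sketch below lists the `arguments` objects an MCP client might send with each `tools/call`; the key and value are made-up examples, and recall's second invocation shows the documented omit-key mode.

```python
# Sketch of a remember -> recall -> forget round trip. Each tuple is a
# (tool name, arguments) pair as it would appear in a tools/call request.
calls = [
    ("remember", {"key": "target_ticker", "value": "AAPL"}),
    ("recall", {"key": "target_ticker"}),  # retrieve one stored memory
    ("recall", {}),                        # omit key to list all memories
    ("forget", {"key": "target_ticker"}),  # delete the memory by key
]
for name, arguments in calls:
    print(name, arguments)
```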
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses that memory persistence depends on authentication status (authenticated vs. anonymous 24-hour session). This is useful beyond the basic 'store' action. However, it doesn't mention any side effects or limits on key-value size or count.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences that efficiently cover purpose, use cases, and behavioral nuance. No wasted words; front-loaded with action and resource.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and only two simple parameters, the description is sufficient. It explains the tool's role in the session memory workflow and its persistence behavior. Minor missing detail: no mention of overwrite behavior for duplicate keys.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already documents both parameters with examples. The description adds no further parameter-specific details beyond the schema, so the baseline 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool stores a key-value pair in session memory, which is specific and actionable. It distinguishes itself from siblings like recall and forget by explicitly describing the save operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear use cases ('save intermediate findings, user preferences, or context across tool calls') and explains persistence differences between authenticated users and anonymous sessions. However, it does not explicitly say when not to use it or mention alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

treasury_customs_revenue (Grade: A)

Track monthly US customs duty revenue. Returns monthly collection amounts to analyze tariff impact trends.

Parameters (JSON Schema)
limit (optional): Number of monthly records to return (default 12 for 1 year)
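An invocation can be sketched as below; the payload is illustrative, with `limit` set to 24 to request two years of monthly records (omitting it would use the documented default of 12).

```python
import json

# Illustrative tools/call payload requesting two years of monthly
# customs duty records (limit defaults to 12, i.e., one year).
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {"name": "treasury_customs_revenue", "arguments": {"limit": 24}},
}
print(json.dumps(request, indent=2))
```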
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden. It indicates a read operation ('Track') but lacks details about data freshness, pagination, or potential delays. Annotations could have provided more safety info.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, each adding value. No fluff, front-loaded with the main purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and a simple tool, the description is adequate but could mention the data range or return format for completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (only 'limit' parameter). The description adds context by noting 'default 12 for 1 year,' which clarifies the default behavior beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Track') and resource ('monthly US customs duty revenue'), clearly distinguishing it from siblings like treasury_debt or treasury_receipts by specifying the exact revenue type.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description says the data can be used 'to analyze tariff impact trends,' which implies a usage context but does not explicitly state when not to use the tool or compare it with alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

treasury_debt (Grade: A)

Check current US national debt with historical data points. Returns total public debt outstanding over time.

Parameters (JSON Schema)
limit (optional): Number of records to return (default 10)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description mentions it returns historical data points, but does not elaborate on other behaviors like rate limits, data freshness, or pagination. Since annotations are empty, the description carries the full burden, but it provides some useful context beyond the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (two sentences) and front-loaded with the core purpose. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has a simple schema and no output schema, the description adequately explains what it does. It could mention the format of historical data (e.g., date, amount), but is still sufficient for an agent to select and invoke correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the 'limit' parameter. The description does not add extra meaning beyond 'returns historical data points', which is not directly about the parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns the current US national debt with historical data points. The verb 'Check' and resource 'US national debt' are specific. It distinguishes from sibling tools like treasury_customs_revenue, treasury_exchange_rates, and treasury_receipts by targeting debt data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly indicates this is for retrieving debt data, and the context of sibling tools suggests alternatives for other treasury data. However, there is no explicit 'when to use' or 'when not to use' guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

treasury_exchange_rates (Grade: A)

Get official US Treasury exchange rates for any currency (e.g., 'EUR', 'GBP', 'JPY'). Returns rates used for government conversions.

Parameters (JSON Schema)
country (required): Country name (e.g., "China", "Mexico", "Japan", "Canada")
limit (optional): Number of records to return (default 12)
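Note that while the description mentions currency codes ('EUR', 'JPY'), the schema's required parameter is a country name. The illustrative payload below therefore passes "Japan" rather than a code; the `limit` value is an arbitrary example.

```python
import json

# Illustrative tools/call payload. Per the schema, the required parameter
# is a country name ("Japan"), not a currency code ("JPY").
request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {
        "name": "treasury_exchange_rates",
        "arguments": {"country": "Japan", "limit": 4},
    },
}
print(json.dumps(request, indent=2))
```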
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description must carry the burden. It describes the data source (US government official rates) but does not disclose any behavioral traits like update frequency, data freshness, or whether rates are historical or current. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. Front-loaded with the action and resource, followed by a clarifying statement. Every sentence serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple input schema and no output schema, the description is mostly complete. It explains the data source and scope. Missing details like default limit behavior or response format could be added, but the tool is straightforward.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds value by explaining that rates are 'official' and used by the US government, but it does not add detail about the parameters beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb (Get), resource (Treasury exchange rates), and scope (for a specific country). It distinguishes itself from sibling tools like treasury_customs_revenue and treasury_debt by specifying currency conversion rates, which are unique to this tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides context on what the tool does but does not explicitly say when to use it vs. alternatives. No exclusions or comparisons to siblings are given, leaving the agent to infer based on the resource name.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

treasury_receipts (Grade: A)

Get US government receipts by source: individual income tax, corporate tax, excise taxes, customs duties, and more.

Parameters (JSON Schema)
limit (optional): Number of records to return (default 12)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so description must disclose behavior. It states that data is broken down by source, which is helpful, but lacks details on pagination, date range, or response format. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, perfectly sized, front-loaded with key action and resource, no waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one param and no output schema, the description is mostly adequate. However, it lacks context on output format or data source recency, which could be useful for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the only parameter 'limit' is described in the schema. The description adds no extra param info, but with full coverage, a score of 4 is justified as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'Get' and clearly states the resource ('US government receipts by source') along with the breakdown categories (individual income tax, corporate tax, etc.), which effectively distinguishes it from siblings like treasury_customs_revenue.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus siblings, such as treasury_debt or treasury_customs_revenue. The description implies usage for receipts data but does not provide context or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
