Server Details

USAspending MCP — Federal spending data from USAspending.gov API

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-usaspending
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: B)

Average 3.8/5 across 10 of 10 tools scored. Lowest: 2.9/5.

Server Coherence (Grade: B)
Disambiguation: 3/5

Tools are generally distinct, but ask_pipeworx overlaps with the usa_* tools, since it can also answer spending questions, potentially causing confusion. discover_tools serves a meta purpose (finding other tools), but its role is clear.

Naming Consistency: 3/5

Memory tools (remember, recall, forget) follow a simple verb pattern, while USA spending tools use the usa_ prefix with descriptive names. ask_pipeworx and discover_tools break the pattern, mixing plain English and technical prefixes.

Tool Count: 4/5

10 tools is appropriate for a server combining memory functions and federal spending queries. The count is reasonable and not overwhelming.

Completeness: 3/5

USA spending tools cover search, recipient profiles, agency breakdowns, category breakdowns, and trends, which is fairly complete for common queries. However, features such as saving or comparing searches are missing. Memory tools are complete for key-value storage.

Available Tools

10 tools
ask_pipeworx (Grade: A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)

question (required): Your question or request in natural language
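Since ask_pipeworx takes a single natural-language parameter, a call to it is simple to construct. The sketch below shows the JSON-RPC envelope an MCP client sends for a tools/call request, per the MCP convention; the question string is taken from the tool's own examples.

```python
import json

# Sketch of the JSON-RPC payload an MCP client would send to invoke
# ask_pipeworx. The envelope follows the MCP tools/call convention:
# the tool name and its arguments travel inside "params".
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {
            # The only (required) parameter: a natural-language question.
            "question": "What is the US trade deficit with China?",
        },
    },
}
print(json.dumps(request, indent=2))
```

The server routes the question to whichever underlying tool it judges most relevant, so no other arguments are needed.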
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It explains that the tool internally selects the right tool and fills arguments, which is useful behavioral context. However, it does not disclose limitations, potential errors, or what happens if no suitable data source is found, leaving some uncertainty.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise at two sentences plus examples. It front-loads the core purpose and provides concrete examples, with no unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the single parameter and no output schema, the description adequately explains what the tool does and how to use it. The examples cover different types of queries. A minor gap is that it doesn't mention that the answer may be sourced from a specific sibling tool, but this is implied by 'best available data source'.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a single parameter 'question'. The description adds meaning by explaining that the question should be in natural language and providing examples, but the parameter description in the schema already covers the basic idea. No additional constraints or format details are needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool accepts a natural language question and returns an answer from the best available data source. It distinguishes itself from sibling tools by acting as a general-purpose query interface rather than a specific data lookup.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly tells when to use this tool: when you have a question in plain English and want the system to pick the right underlying tool. It provides examples and contrasts with browsing tools or learning schemas, implying not to use this when you need to manually specify a tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (Grade: A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)

query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
limit (optional): Maximum number of tools to return (default 20, max 50)
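Because the schema documents a hard range for limit (default 20, max 50), a client can normalize arguments before sending. A minimal helper sketch, assuming the function name is hypothetical:

```python
def make_discover_arguments(query: str, limit: int = 20) -> dict:
    """Build the arguments dict for a discover_tools call."""
    # Clamp limit into the documented 1-50 range (default 20)
    # rather than letting the server reject an out-of-range value.
    limit = max(1, min(50, limit))
    return {"query": query, "limit": limit}

# An out-of-range request is clamped to the documented maximum of 50.
args = make_discover_arguments("find trade data between countries", limit=75)
```

Clamping client-side keeps the call from failing on a server-side validation error, though the server's actual behavior on out-of-range values is not documented.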
Behavior: 4/5

No annotations are provided, so the description must carry the burden. It states that the tool returns the most relevant tools with names and descriptions, which is transparent about the output. However, it does not mention any potential side effects, rate limits, or limitations, but given the read-only search nature, the description is adequate.

Conciseness: 5/5

The description is concise with three sentences: the first states the action, the second describes the output, and the third gives a usage directive. No unnecessary words.

Completeness: 5/5

Given the tool's simplicity (2 parameters, no output schema, no nested objects), the description is complete. It explains what the tool does, what it returns, and when to use it. No additional information is needed.

Parameters: 3/5

Schema description coverage is 100%, so baseline is 3. The description mentions the query parameter by example but does not add extra meaning beyond the schema's description. It does not elaborate on the limit parameter's behavior beyond what the schema provides.

Purpose: 5/5

The description clearly states the verb ('Search') and resource ('Pipeworx tool catalog'), and specifies the use case: discovering relevant tools when 500+ are available. It distinguishes itself from siblings by emphasizing it returns tool names and descriptions for selection, whereas siblings like ask_pipeworx or usa_award_search serve different purposes.

Usage Guidelines: 5/5

The description explicitly says to call this FIRST when 500+ tools are available, providing clear usage context. It implies this tool is for discovery, not for direct task execution, which differentiates it from sibling tools.

forget (Grade: C)

Delete a stored memory by key.

Parameters (JSON Schema)

key (required): Memory key to delete
Behavior: 2/5

No annotations provided, so the description must disclose behavioral traits. It states deletion but doesn't mention whether the operation is irreversible, if confirmation is needed, or any side effects. The behavior is implied but not fully transparent.

Conciseness: 5/5

A single sentence that is concise and front-loaded with the action. No wasted words.

Completeness: 2/5

Given the simplicity (1 param, no output schema), the description is adequate but lacks details about return value or confirmation. It could mention that the operation is permanent or provide success/failure indicators.

Parameters: 3/5

Schema coverage is 100% with only one required parameter 'key', and the description mentions 'by key', aligning with the schema. No additional semantic detail is added beyond the schema's description.

Purpose: 4/5

The description clearly states the action ('Delete') and the target resource ('a stored memory'), and specifies the key parameter. It distinguishes from sibling tools like 'remember' (create) and 'recall' (retrieve), though it doesn't explicitly name them.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives like 'recall' or 'remember'. There is no mention of prerequisites (e.g., key existence) or error handling.

recall (Grade: A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)

key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 3/5

With no annotations, the description carries the burden. It discloses the core behavior (retrieve or list) but adds minimal behavioral context beyond what the description and schema already imply.

Conciseness: 5/5

The description is concise (two sentences) and front-loaded with the primary action. Every word serves a purpose, no fluff.

Completeness: 4/5

Given the tool is simple (one optional parameter, no output schema, no nested objects), the description is nearly complete. It explains both retrieval and listing. However, it could mention whether the return format is a string or object, though the lack of output schema makes this a minor gap.

Parameters: 3/5

Schema description coverage is 100%, so the parameter semantics are fully documented in the schema. The description adds no additional meaning beyond restating what the schema provides. Baseline 3 is appropriate.

Purpose: 5/5

The description clearly states the tool's purpose: retrieve a memory by key or list all stored memories. It uses specific verbs ('retrieve', 'list') and the resource ('memory'), distinguishing it from sibling tools like 'remember' and 'forget'.

Usage Guidelines: 5/5

The description explicitly tells the agent when to use this tool: to retrieve context saved earlier. It also explains how to list all keys by omitting the key parameter, and provides context about retrieving from current or previous sessions, which differentiates it from sibling tools.

remember (Grade: A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)

key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text — findings, addresses, preferences, notes)
Behavior: 4/5

With no annotations, the description carries full burden. It discloses persistence behavior (authenticated users get persistent memory; anonymous sessions last 24 hours), which is critical behavioral context beyond what the schema provides. No mention of overwrite behavior or limits, but sufficient for most use cases.

Conciseness: 5/5

Three sentences with no wasted words. Front-loaded with action and resource, then usage examples, then behavioral note. Every sentence adds value.

Completeness: 4/5

For a simple two-parameter tool with no output schema, the description is complete enough. It covers purpose, usage, and key behavioral details (persistence). Minor gap: no mention of value overwriting behavior, but not critical.

Parameters: 3/5

Schema description coverage is 100% (both parameters have descriptions). The description adds general semantics about saving findings and preferences but does not add specific meaning beyond the schema's parameter descriptions. Baseline 3 is appropriate.

Purpose: 5/5

The description clearly states the tool stores a key-value pair in session memory, with a specific verb ('store') and resource ('key-value pair in session memory'). It distinguishes itself from siblings like 'recall' (retrieving) and 'forget' (deleting), providing clear differentiation.

Usage Guidelines: 4/5

The description provides explicit usage context: saving intermediate findings, user preferences, or context across tool calls. It does not mention when not to use it or alternatives, but the context is clear enough to guide appropriate usage.

usa_recipient_profile (Grade: A)

Get a contractor's complete federal spending history within a date range. Returns all contract awards and total amounts. Use to research supplier relationships and contract activity.

Parameters (JSON Schema)

recipient_name (required): Recipient/contractor name to search for (e.g., "Lockheed Martin")
start_date (required): Start date in YYYY-MM-DD format
end_date (required): End date in YYYY-MM-DD format
limit (optional): Number of results (1-100, default 10)
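The schema pins down two constraints a client can check before calling: the YYYY-MM-DD date format and the 1-100 limit range. A hypothetical argument-builder sketch:

```python
from datetime import date

def make_profile_arguments(recipient_name: str, start_date: str,
                           end_date: str, limit: int = 10) -> dict:
    """Validate and assemble arguments for a usa_recipient_profile call."""
    # date.fromisoformat raises ValueError for anything that is not
    # a valid YYYY-MM-DD date, matching the documented format.
    start = date.fromisoformat(start_date)
    end = date.fromisoformat(end_date)
    if start > end:
        raise ValueError("start_date must not be after end_date")
    limit = max(1, min(100, limit))   # documented range 1-100, default 10
    return {
        "recipient_name": recipient_name,
        "start_date": start_date,
        "end_date": end_date,
        "limit": limit,
    }

args = make_profile_arguments("Lockheed Martin", "2024-01-01", "2024-12-31")
```

Validating dates client-side surfaces a clear error immediately instead of a server-side failure whose format is undocumented.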
Behavior: 4/5

No annotations are present, so the description carries full burden. It clearly states the tool returns contract awards within a date range, implying a read-only, non-destructive operation. The behavior is well-described without contradictions.

Conciseness: 4/5

The description is two sentences, front-loading the purpose and then detailing scope. It is concise with no wasted words, though it could be slightly more structured.

Completeness: 4/5

Given no output schema, the description adequately explains what the tool returns (contract awards for a named recipient within a date range). The required parameters are all documented. It is complete enough for its complexity.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents all parameters. The description adds no additional meaning beyond what the schema provides for each parameter, hence a baseline 3 is appropriate.

Purpose: 5/5

The description uses a specific verb ('Get') and resource (a contractor's complete federal spending history), and clarifies the scope (all contract awards within a date range). It distinguishes itself from siblings like usa_award_search and usa_spending_by_agency, which focus on broader or different spending views.

Usage Guidelines: 4/5

The description implies usage for retrieving a named recipient's spending, but does not explicitly state when to use this versus other tools like usa_award_search or usa_spending_by_agency. The date range requirement is clear, but no exclusions or alternatives are mentioned.

usa_spending_by_agency (Grade: B)

Break down federal spending by agency for a fiscal year (optionally by quarter). Returns spending amounts per agency. Use when analyzing budget distribution across government.

Parameters (JSON Schema)

fiscal_year (optional): Four-digit fiscal year (e.g., "2025"). Defaults to current year.
quarter (optional): Fiscal quarter (1-4). Omit for full year.
Behavior: 3/5

Annotations are empty, so the description must disclose behavioral traits. It states that the tool returns spending per agency, but does not mention any side effects, rate limits, authentication needs, or data freshness. However, as a read-only query tool, the description is adequate. It could mention that data is from USAspending.gov and may be delayed.

Conciseness: 4/5

The description is concise at two sentences, with the key purpose in the first sentence. It is front-loaded and contains no filler. It could be slightly more structured (e.g., listing parameters), but it is efficient.

Completeness: 3/5

Given the simple query nature (two optional parameters) and no output schema, the description is mostly complete. However, it lacks details about the response format (e.g., list of agencies with amounts) and whether totals or breakdowns are provided. Still, it is functional enough for an agent to decide to use it.

Parameters: 3/5

Schema description coverage is 100%, meaning the input schema already describes both parameters well. The description adds no extra meaning beyond what the schema provides, which is acceptable. Baseline 3 is appropriate since the schema does the heavy lifting.

Purpose: 4/5

The description clearly states that the tool breaks down federal spending by agency for a given fiscal year and optional quarter. It specifies the verb ('Break down'), the resource ('federal spending by agency'), and the scope ('for a fiscal year (optionally by quarter)'). While it distinguishes itself from siblings like usa_spending_by_category and usa_spending_trends, it could be more explicit about the distinction.

Usage Guidelines: 3/5

The description implies when to use it (for agency-level spending breakdowns) but does not explicitly state when not to use it or provide alternatives among siblings. Given the sibling tools (e.g., usa_award_search, usa_recipient_profile), some guidance would help, but the purpose is clear enough for an agent to infer.

usa_spending_by_category (Grade: A)

Analyze federal spending by industry, product/service, recipient, or agency. Returns spending totals per category. Use for market research and identifying government contracting opportunities.

Parameters (JSON Schema)

category (required): Category to group by: naics, psc, recipient, awarding_agency, awarding_subagency
start_date (required): Start date in YYYY-MM-DD format
end_date (required): End date in YYYY-MM-DD format
keywords (optional): Optional keywords to filter spending
agency (optional): Optional awarding agency name filter
limit (optional): Number of results (1-100, default 10)
Behavior: 3/5

No annotations are present, so the description must convey behavioral traits. It states the breakdown categories and the market-research use case, but does not disclose whether the tool is read-only, potential rate limits, or that it returns aggregated data (not individual awards). The description is adequate but not rich.

Conciseness: 5/5

The description is two sentences, front-loaded with the main action, and each sentence adds value. No wasted words.

Completeness: 3/5

Given the lack of output schema, the description does not explain what the response contains (e.g., aggregated totals, counts). It also does not specify that the categories are mutually exclusive or how grouping works. With 6 parameters and 3 required, the description covers the main purpose but leaves some behavioral details unspecified.

Parameters: 3/5

Schema coverage is 100%, so all parameters are described in the schema. The description adds value by listing the category options in prose, which reinforces the schema's enum-like list. However, it does not provide additional semantics beyond what the schema already offers for parameters like 'limit', 'agency', etc. Baseline 3 is appropriate.

Purpose: 5/5

The description clearly states the tool's function ('Analyze federal spending by industry, product/service, recipient, or agency'), listing specific category types. It differentiates from siblings like usa_spending_by_agency, which groups by agency only, and usa_spending_trends, which likely focuses on trends over time.

Usage Guidelines: 3/5

The description notes the tool is for 'market research and identifying government contracting opportunities', implying a business intelligence use case. However, it does not provide explicit guidance on when to use this tool versus alternatives like usa_award_search or usa_spending_trends. No exclusions or prerequisites are stated.
