USAspending
Server Details
USAspending MCP — Federal spending data from USAspending.gov API
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-usaspending
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 10 of 10 tools scored. Lowest: 2.9/5.
Tools are generally distinct, but ask_pipeworx overlaps with the usa_* tools since it can also answer spending questions, which may cause confusion. discover_tools is useful for tool discovery, and its purpose is clear.
Memory tools (remember, recall, forget) follow a simple verb pattern, while USA spending tools use a usa_ prefix with descriptive names. ask_pipeworx and discover_tools break the pattern, mixing plain English and technical prefixes.
10 tools is appropriate for a server combining memory functions and federal spending queries. The count is reasonable and not overwhelming.
USA spending tools cover search, recipient profile, agency breakdown, category breakdown, and trends, which is fairly complete for common queries. However, features such as saving or comparing searches are missing. Memory tools are complete for key-value storage.
Available Tools
10 tools
ask_pipeworx (A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
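For illustration, a plausible arguments object for ask_pipeworx, reusing one of the example questions above (the surrounding MCP call envelope depends on your client):
{
  "question": "What is the US trade deficit with China?"
}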
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It explains that the tool internally selects the right tool and fills arguments, which is useful behavioral context. However, it does not disclose limitations, potential errors, or what happens if no suitable data source is found, leaving some uncertainty.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at two sentences plus examples. It front-loads the core purpose and provides concrete examples, with no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the single parameter and no output schema, the description adequately explains what the tool does and how to use it. The examples cover different types of queries. A minor gap is that it doesn't mention that the answer may be sourced from a specific sibling tool, but this is implied by 'best available data source'.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single parameter 'question'. The description adds meaning by explaining that the question should be in natural language and providing examples, but the parameter description in the schema already covers the basic idea. No additional constraints or format details are needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool accepts a natural language question and returns an answer from the best available data source. It distinguishes itself from sibling tools by acting as a general-purpose query interface rather than a specific data lookup.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use this tool: when you have a question in plain English and want the system to pick the right underlying tool. It provides examples and contrasts with browsing tools or learning schemas, implying not to use this when you need to manually specify a tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
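An illustrative arguments object for discover_tools; the query reuses a schema example and the limit is an arbitrary value within the documented range:
{
  "query": "find trade data between countries",
  "limit": 10
}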
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the burden. It states that the tool returns the most relevant tools with names and descriptions, which is transparent about the output. It does not mention side effects, rate limits, or other limitations, but given the tool's read-only search nature, the description is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with three sentences: the first states the action, the second describes the output, and the third gives a usage directive. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 parameters, no output schema, no nested objects), the description is complete. It explains what the tool does, what it returns, and when to use it. No additional information is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description mentions the query parameter by example but does not add extra meaning beyond the schema's description. It does not elaborate on the limit parameter's behavior beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Search') and resource ('Pipeworx tool catalog'), and specifies the use case: discovering relevant tools when 500+ are available. It distinguishes itself from siblings by emphasizing it returns tool names and descriptions for selection, whereas siblings like ask_pipeworx or usa_award_search serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says to call this FIRST when 500+ tools are available, providing clear usage context. It implies this tool is for discovery, not for direct task execution, which differentiates it from sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (C)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
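A hypothetical arguments object for forget, deleting a key that was presumably stored earlier with remember (the key name is illustrative):
{
  "key": "target_ticker"
}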
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description must disclose behavioral traits. It states deletion but doesn't mention whether the operation is irreversible, if confirmation is needed, or any side effects. The behavior is implied but not fully transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence that is concise and front-loaded with the action. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity (1 param, no output schema), the description is adequate but lacks details about return value or confirmation. It could mention that the operation is permanent or provide success/failure indicators.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with only one required parameter 'key', and the description mentions 'by key', aligning with the schema. No additional semantic detail is added beyond the schema's description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and the target resource ('a stored memory'), and specifies the key parameter. It distinguishes from sibling tools like 'remember' (create) and 'recall' (retrieve), though it doesn't explicitly name them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like 'recall' or 'remember'. There is no mention of prerequisites (e.g., key existence) or error handling.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
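An illustrative recall call for a single key (the key name is hypothetical); omitting key entirely would instead list all stored keys, per the schema note:
{
  "key": "target_ticker"
}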
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since there are no annotations, the description carries the burden. It discloses the core behavior (retrieve a memory or list all memories) but adds little behavioral context beyond what the schema already implies; it is straightforward and contains no contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (two sentences) and front-loaded with the primary action. Every word serves a purpose, no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one optional parameter, no output schema, no nested objects), the description is nearly complete. It explains both retrieval and listing. It could mention the return format, but without an output schema this is only a minor gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the parameter semantics are fully documented in the schema. The description adds no additional meaning beyond restating what the schema provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieve a memory by key or list all stored memories. It uses specific verbs ('retrieve', 'list') and the resource ('memory'), distinguishing it from sibling tools like 'remember' and 'forget'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells the agent when to use this tool: to retrieve context saved earlier. It also explains how to list all keys by omitting the key parameter, and provides context about retrieving from current or previous sessions, which differentiates it from sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
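A sketch of a remember call using a key from the schema examples; the value is hypothetical:
{
  "key": "user_preference",
  "value": "Prefers awards from the Department of Defense, fiscal years 2023-2025"
}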
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses persistence behavior (authenticated users get persistent memory; anonymous sessions last 24 hours), which is critical behavioral context beyond what the schema provides. No mention of overwrite behavior or limits, but sufficient for most use cases.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with no wasted words. Front-loaded with action and resource, then usage examples, then behavioral note. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple two-parameter tool with no output schema, the description is complete enough. It covers purpose, usage, and key behavioral details (persistence). Minor gap: no mention of value overwriting behavior, but not critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (both parameters have descriptions). The description adds general semantics about saving findings and preferences but does not add specific meaning beyond the schema's parameter descriptions. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool stores a key-value pair in session memory, with a specific verb ('store') and resource ('key-value pair in session memory'). It distinguishes itself from siblings like 'recall' (retrieving) and 'forget' (deleting), providing clear differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage context: saving intermediate findings, user preferences, or context across tool calls. It does not mention when not to use or alternatives, but the context is clear enough to guide appropriate usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
usa_award_search (B)
Search federal contract awards by keywords, agency, date range, or industry code (e.g., '541511' for IT consulting). Returns recipient, award amount, dates, and contract type.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results (1-100, default 10) | |
| naics | No | NAICS code to filter by (e.g., "541512") | |
| agency | No | Awarding agency name (e.g., "Department of Defense") | |
| end_date | Yes | End date in YYYY-MM-DD format | |
| keywords | Yes | Search keywords (e.g., ["cybersecurity", "cloud"]) | |
| set_aside | No | Set-aside type filter | |
| start_date | Yes | Start date in YYYY-MM-DD format | |
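An illustrative arguments object combining the schema's example values; the date range is hypothetical but follows the required YYYY-MM-DD format:
{
  "keywords": ["cybersecurity", "cloud"],
  "agency": "Department of Defense",
  "naics": "541512",
  "start_date": "2024-10-01",
  "end_date": "2025-09-30",
  "limit": 10
}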
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It does not disclose whether this is a read-only operation, any rate limits, or authentication requirements. The description only lists filters and return fields, lacking transparency about side effects or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences plus a bonus line for award types. Information is front-loaded (purpose first). Minor redundancy: 'Returns recipient, amount, dates, agency, and description' repeats some info from the schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a search tool with 7 parameters and no output schema, the description covers the purpose and key fields but lacks details on pagination, sorting, or how to handle large result sets. It is adequate but not comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so all parameters are described in the schema. The description adds some context (e.g., award type codes) but does not provide deeper semantics beyond what the schema already offers. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Search') and resource ('federal contract awards') and lists key filters (keywords, agency, date range, NAICS code) and return fields. It also distinguishes from sibling tools like 'usa_spending_by_agency' by focusing on contract awards rather than spending aggregation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states what the tool does and lists filters, but does not explicitly guide when to use this tool over siblings (e.g., when to use 'usa_spending_by_category' instead). No exclusions or alternatives are mentioned, though the description of return fields implies use for detailed contract info.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
usa_recipient_profile (A)
Get a contractor's complete federal spending history within a date range. Returns all contract awards and total amounts. Use to research supplier relationships and contract activity.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results (1-100, default 10) | |
| end_date | Yes | End date in YYYY-MM-DD format | |
| start_date | Yes | Start date in YYYY-MM-DD format | |
| recipient_name | Yes | Recipient/contractor name to search for (e.g., "Lockheed Martin") | |
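A plausible usa_recipient_profile call using the schema's example recipient; the dates are hypothetical:
{
  "recipient_name": "Lockheed Martin",
  "start_date": "2023-10-01",
  "end_date": "2024-09-30",
  "limit": 10
}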
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description carries full burden. It clearly states the tool returns contract awards within a date range, implying a read-only, non-destructive operation. The behavior is well-described without contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loading the purpose and then detailing scope. It is concise but could be slightly more structured; no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately explains what the tool returns (contract awards for a named recipient within a date range). The required parameters are all documented. It is complete enough for its complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters. The description adds no additional meaning beyond what the schema provides for each parameter, hence a baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('contractor or recipient's federal spending profile'), and clarifies the scope ('All contract awards within a date range'). It distinguishes itself from siblings like usa_award_search and usa_spending_by_agency, which focus on broader or different spending views.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving a named recipient's spending, but does not explicitly state when to use this versus other tools like usa_award_search or usa_spending_by_agency. The date range requirement is clear, but no exclusions or alternatives are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
usa_spending_by_agency (B)
Break down federal spending by agency for a fiscal year (optionally by quarter). Returns spending amounts per agency. Use when analyzing budget distribution across government.
| Name | Required | Description | Default |
|---|---|---|---|
| quarter | No | Fiscal quarter (1-4). Omit for full year. | |
| fiscal_year | No | Four-digit fiscal year (e.g., "2025"). Defaults to current year. | |
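An illustrative call requesting a full-year breakdown (quarter omitted, as the schema allows); the fiscal year is passed as a string per the schema example:
{
  "fiscal_year": "2025"
}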
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description must disclose behavioral traits. It states that the tool returns spending per agency, but does not mention any side effects, rate limits, authentication needs, or data freshness. However, as a read-only query tool, the description is adequate. It could mention that data is from USAspending.gov and may be delayed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at two sentences, with the key purpose in the first sentence. It is front-loaded and contains no filler. It could be slightly more structured (e.g., listing parameters), but it is efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple query nature (two optional parameters) and no output schema, the description is mostly complete. However, it lacks details about the response format (e.g., list of agencies with amounts) and whether totals or breakdowns are provided. Still, it is functional for an agent to decide to use it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, meaning the input schema already describes both parameters well. The description adds no extra meaning beyond what the schema provides, which is acceptable. Baseline 3 is appropriate since the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool gets federal spending breakdown by agency for a given fiscal year and optional quarter. It specifies the verb 'Get', the resource 'federal spending breakdown by agency', and the scope 'for a given fiscal year and optional quarter'. While it distinguishes from siblings like usa_spending_by_category and usa_spending_trends, it could be more explicit about the distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use it (for agency-level spending breakdowns) but does not explicitly state when not to use it or provide alternatives among siblings. Given the sibling tools (e.g., usa_award_search, usa_recipient_profile), some guidance would help, but the purpose is clear enough for an agent to infer.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
usa_spending_by_category (A)
Analyze federal spending by industry, product/service, recipient, or agency. Returns spending totals per category. Use for market research and identifying government contracting opportunities.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results (1-100, default 10) | |
| agency | No | Optional awarding agency name filter | |
| category | Yes | Category to group by: naics, psc, recipient, awarding_agency, awarding_subagency | |
| end_date | Yes | End date in YYYY-MM-DD format | |
| keywords | No | Optional keywords to filter spending | |
| start_date | Yes | Start date in YYYY-MM-DD format | |
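A sketch of a usa_spending_by_category call grouping by NAICS code (one of the documented category options); the keywords and dates are hypothetical:
{
  "category": "naics",
  "keywords": ["artificial intelligence"],
  "start_date": "2024-01-01",
  "end_date": "2024-12-31",
  "limit": 10
}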
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description must convey behavioral traits. It states the breakdown categories and mentions 'market analysis', but does not disclose whether the tool is read-only, potential rate limits, or that it returns aggregated data (not individual awards). The description is adequate but not rich.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the main action, and each sentence adds value. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description does not explain what the response contains (e.g., aggregated totals, counts). It also does not specify that the categories are mutually exclusive or how grouping works. With 6 parameters and 3 required, the description covers the main purpose but leaves some behavioral details unspecified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so all parameters are described in the schema. The description adds value by listing the category options in prose, which reinforces the schema's enum-like list. However, it does not provide additional semantics beyond what the schema already offers for parameters like 'limit', 'agency', etc. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: 'Get federal spending broken down by category', listing specific category types (NAICS code, PSC, recipient, etc.). It differentiates from siblings like 'usa_spending_by_agency' which groups by agency only, and 'usa_spending_trends' which likely focuses on trends over time.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description notes it's 'Useful for market analysis', implying a business intelligence use case. However, it does not provide explicit guidance on when to use this tool versus alternatives like 'usa_award_search' or 'usa_spending_trends'. No exclusions or prerequisites are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
usa_spending_trends (B)
Track federal spending trends over time by keywords or agency, grouped by fiscal year, quarter, or month. Returns historical spending amounts for budget forecasting.
| Name | Required | Description | Default |
|---|---|---|---|
| group | No | Time grouping: fiscal_year, quarter, or month (default fiscal_year) | |
| agency | No | Optional awarding agency name | |
| end_date | Yes | End date in YYYY-MM-DD format | |
| keywords | Yes | Keywords to track spending for (e.g., ["artificial intelligence"]) | |
| start_date | Yes | Start date in YYYY-MM-DD format | |
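An illustrative usa_spending_trends call using the schema's example keyword, grouped by quarter; the date range is hypothetical:
{
  "keywords": ["artificial intelligence"],
  "group": "quarter",
  "start_date": "2022-10-01",
  "end_date": "2024-09-30"
}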
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description should disclose behavior like rate limits, data freshness, or scope (e.g., only US federal spending). It does not mention any behavioral traits beyond grouping options.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with purpose and grouping options. It is concise and to the point.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has no output schema, so description should hint at return format (e.g., time series with spending amounts). It does not describe output structure or pagination. Given 5 params and no annotations, more details would help.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so each parameter has a description. The tool description adds context about grouping (fiscal_year, quarter, month) but does not explain the meaning of 'agency' or how keywords interact with agency filtering.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool gets federal spending over time and returns grouped data, distinguishing it from sibling tools like usa_spending_by_agency which focuses on agency-level breakdowns. However, it could be more specific about the verb (e.g., 'Retrieve' instead of 'Get').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions it's useful for trend analysis, but does not explicitly state when to use this tool versus alternatives like usa_spending_by_agency or usa_spending_by_category. No when-not-to-use guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!