
extract_govcontracts

Read-only

Fetch US federal government contract awards from USASpending.gov to identify buying intent signals, competitive intelligence, and GTM opportunities. Search by company name, keyword, or NAICS code for award details including amounts, dates, and agencies.

Instructions

Fetch US federal government contract awards from USASpending.gov. No API key required. Search by company name (e.g. 'Palantir'), keyword (e.g. 'AI infrastructure'), or NAICS code (e.g. '541511'). Returns award amounts, dates, awarding agency, NAICS code, and contract descriptions — all timestamped. Use this to find buying intent signals (a company that just won a $5M DoD contract is actively hiring and spending), competitive intelligence, or GTM targeting.

Input Schema

Name        Required  Description                                                                                                                    Default
url         Yes       Company name (e.g. 'Cloudflare'), keyword (e.g. 'machine learning'), NAICS code (e.g. '541511'), or direct USASpending API URL.
max_length  No        Max content length                                                                                                             6000
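As a sketch of what valid inputs look like (the example objects below are illustrative, inferred from the schema above; the `isNaics` helper mirrors the 6-digit check in the implementation further down but is not part of the published server):

```typescript
// Example inputs accepted by the `url` parameter (per the schema above).
const byCompany = { url: "Cloudflare", max_length: 6000 };
const byKeyword = { url: "machine learning" }; // max_length defaults to 6000
const byNaics = { url: "541511" };             // 6-digit NAICS code

// A 6-digit numeric string is treated as a NAICS code rather than a name.
const isNaics = (s: string): boolean => /^\d{6}$/.test(s);
```

Note that the single `url` parameter is overloaded: the adapter disambiguates NAICS codes, company names, keywords, and raw API URLs at runtime, so no separate mode flag is needed.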

Implementation Reference

  • Registration of the extract_govcontracts tool.
    server.registerTool(
      "extract_govcontracts",
      {
        description:
          "Fetch US federal government contract awards from USASpending.gov. No API key required. Search by company name (e.g. 'Palantir'), keyword (e.g. 'AI infrastructure'), or NAICS code (e.g. '541511'). Returns award amounts, dates, awarding agency, NAICS code, and contract descriptions — all timestamped. Use this to find buying intent signals (a company that just won a $5M DoD contract is actively hiring and spending), competitive intelligence, or GTM targeting.",
        inputSchema: z.object({
          url: z.string().describe(
            "Company name (e.g. 'Cloudflare'), keyword (e.g. 'machine learning'), NAICS code (e.g. '541511'), or direct USASpending API URL."
          ),
          max_length: z.number().optional().default(6000).describe("Max content length"),
        }),
        annotations: { readOnlyHint: true, openWorldHint: true },
      },
      async ({ url, max_length }) => {
        try {
          const result = await govContractsAdapter({ url, maxLength: max_length });
          const ctx = stampFreshness(result, { url, maxLength: max_length }, "govcontracts");
          return { content: [{ type: "text", text: formatForLLM(ctx) }] };
        } catch (err) {
          return { content: [{ type: "text", text: formatSecurityError(err) }] };
        }
      }
    );
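The registration above calls two helpers, `stampFreshness` and `formatForLLM`, whose implementations are not shown on this page. The sketch below is a hypothetical reconstruction inferred from the call sites (field names and output layout are assumptions, not the freshcontext-mcp source):

```typescript
// Assumed result shape, matching what govContractsAdapter returns below.
interface AdapterResult {
  raw: string;
  content_date: string;
  freshness_confidence: "high" | "medium" | "low";
}

// Attach the tool name, the query args, and a fetch timestamp so the
// LLM can judge how fresh the data is.
function stampFreshness(
  result: AdapterResult,
  args: Record<string, unknown>,
  tool: string
) {
  return { tool, args, fetched_at: new Date().toISOString(), ...result };
}

// Render a compact, timestamped header followed by the raw payload.
function formatForLLM(ctx: ReturnType<typeof stampFreshness>): string {
  return [
    `source: ${ctx.tool} (fetched ${ctx.fetched_at}, confidence: ${ctx.freshness_confidence})`,
    ctx.raw,
  ].join("\n");
}
```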
  • The implementation (handler) logic for the govcontracts tool.
    export async function govContractsAdapter(options: ExtractOptions): Promise<AdapterResult> {
      const input = (options.url ?? "").trim();
      const maxLength = options.maxLength ?? 6000;
    
      if (!input) throw new Error("Query required: company name, keyword, or NAICS code");
    
      // Direct GET endpoint (non-search URLs)
      if (input.startsWith("https://api.usaspending.gov") && !input.includes("spending_by_award")) {
        const data = await getJSON(input);
        return {
          raw: JSON.stringify(data, null, 2).slice(0, maxLength),
          content_date: new Date().toISOString(),
          freshness_confidence: "high",
        };
      }
    
      // A 6-digit input is treated as a NAICS code and searched as a keyword
      if (/^\d{6}$/.test(input)) {
        return searchByKeyword(input, maxLength);
      }
    
      // Company name or keyword — try recipient first, fall back to keyword
      try {
        const result = await searchByRecipient(input, maxLength);
        if (!result.raw.includes("No federal contracts found")) return result;
        const kwResult = await searchByKeyword(input, maxLength);
        if (!kwResult.raw.includes("No federal contracts found")) return kwResult;
        return result;
      } catch {
        return searchByKeyword(input, maxLength);
      }
    }
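The `searchByRecipient` and `searchByKeyword` helpers are not shown on this page. Assuming they target USASpending's `spending_by_award` search endpoint (which the direct-URL branch above hints at), their request body might be built roughly like this; the field names and payload shape are assumptions, not the published implementation:

```typescript
// Hypothetical sketch of the search request the two helpers might build.
const SEARCH_URL =
  "https://api.usaspending.gov/api/v2/search/spending_by_award/";

function buildSearchBody(input: string, byRecipient: boolean, limit = 10) {
  const filters: Record<string, unknown> = {
    // "A"-"D" are the contract award type codes.
    award_type_codes: ["A", "B", "C", "D"],
  };
  if (byRecipient) {
    filters.recipient_search_text = [input];
  } else {
    filters.keywords = [input];
  }
  return {
    filters,
    // Requested columns; exact field names are assumed, not verified.
    fields: [
      "Award ID",
      "Recipient Name",
      "Award Amount",
      "Start Date",
      "Awarding Agency",
      "Description",
    ],
    limit,
    page: 1,
  };
}
```

The recipient-first, keyword-fallback order in `govContractsAdapter` then amounts to calling this builder twice with different `byRecipient` values and keeping the first non-empty result.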
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and openWorldHint=true, indicating safe read operations with open-world data. The description adds valuable context beyond annotations: it specifies 'No API key required' (convenience/access detail) and describes the return content ('award amounts, dates, awarding agency, NAICS code, and contract descriptions — all timestamped'). However, it doesn't mention rate limits, pagination, or error handling, which keeps it from a perfect score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by usage details and examples. Every sentence adds value: the first defines the tool, the second explains parameters and returns, and the third provides use cases. There is no redundant or vague language, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema), the description is largely complete. It covers purpose, usage, parameters, returns, and access details. However, without an output schema, it could benefit from more detail on response structure (e.g., format of returned data). The annotations help, but some behavioral aspects like error cases are omitted.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds some semantic context by explaining what the 'url' parameter accepts ('company name, keyword, or NAICS code') and implying its search functionality, but it doesn't provide additional syntax or format details beyond what the schema provides. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Fetch US federal government contract awards'), resource ('from USASpending.gov'), and scope ('by company name, keyword, or NAICS code'). It distinguishes itself from sibling tools by focusing on government contracts rather than changelogs, finance data, GitHub repos, or other domains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'to find buying intent signals, competitive intelligence, or GTM targeting.' It provides concrete examples (e.g., 'a company that just won a $5M DoD contract is actively hiring and spending'), which helps differentiate it from alternatives like extract_finance_landscape or extract_sec_filings that might serve overlapping but distinct purposes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

