Glama

govcon_search_contract_awards

Search government contract awards by keyword, agency, and date range to retrieve award amounts, vendors, contract types, and NAICS codes from US, EU, and UK sources.

Instructions

Search government contract awards by keyword, agency, and date range. Returns award amounts, incumbent vendors, contract types, and NAICS codes in AI-Ready Markdown. Verified source: USASpending.gov + SAM.gov (US) · EU TED (EU) · Find-a-Tender (UK). Data freshness: 4-hour cache. Token-efficient. jurisdiction: 'US', 'EU', 'UK'. Default: 'US'. Example: search_contract_awards('cybersecurity', 'Department of Defense', '2024-01-01', 'US')

Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| keyword | Yes | | |
| agency | No | | |
| date_from | No | | |
| jurisdiction | No | | US |
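For reference, a call matching the schema above might pass arguments like the following. The values are illustrative; only `keyword` is required, and the rest fall back to their defaults:

```python
# Illustrative arguments for govcon_search_contract_awards.
args = {
    "keyword": "cybersecurity",          # required
    "agency": "Department of Defense",   # optional, default ""
    "date_from": "2024-01-01",           # optional, default ""
    "jurisdiction": "US",                # optional, default "US"; one of US/EU/UK
}
```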

Output Schema

No output fields are defined.
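Although no output schema is published, the handler shown in the Implementation Reference returns a structured dict. An illustrative (not authoritative) shape, with made-up values:

```python
# Sketch of the response shape returned by search_contract_awards.
# Field names are taken from the implementation; values are invented.
example_output = {
    "keyword": "cybersecurity",
    "jurisdiction": "US",
    "agency": "Department of Defense",
    "date_from": "2024-01-01",
    "count": 1,
    "awards": [
        {"award_id": "A-123", "recipient": "Acme Corp", "amount": 1000000.0},
    ],
    "source": "USASpending.gov",
    "markdown": "## Contract Awards: cybersecurity (US) ...",
    "disclaimer": "...",
}
```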

Implementation Reference

  • The main handler function for search_contract_awards. An async function decorated with @mcp.tool(), @with_timeout, and @verify_entitlement('T18'). Takes keyword, agency, date_from, and jurisdiction params. Queries USASpending.gov (US), EU TED (EU), or UK Find-a-Tender (UK) based on jurisdiction. Returns structured dict with awards list, markdown table, disclaimer, and standard response fields. Caches for 4 hours.
    # DATA TOOL 1 — search_contract_awards
    # ══════════════════════════════════════════════════════════════════════════════
    
    @mcp.tool()
    @with_timeout
    @verify_entitlement("T18")
    async def search_contract_awards(
        keyword: str,
        agency: str = "",
        date_from: str = "",
        jurisdiction: str = "US",
    ) -> dict:
        """Use this to search government contract awards by keyword or agency.
        Provide a keyword, optional agency name, and optional date range.
        Returns matching awards with values, recipients, and award dates."""
        kw_clean     = keyword.strip()
        agency_clean = agency.strip()
        date_clean   = date_from.strip()
        juris_clean  = jurisdiction.strip().upper()
        params = {
            "keyword": kw_clean, "agency": agency_clean,
            "date_from": date_clean, "jurisdiction": juris_clean,
        }
    
        async with AuditContext("T18", params, "1.0") as _:
            _incr_calls("T18")
            phash = make_params_hash(params)
    
            cached = get_cached("T18", phash)
            if cached:
                return cached
    
            awards: list[dict] = []
            source_used = ""
            upstream_err = ""
    
            # ── USASpending.gov (US) ──────────────────────────────────────────────
            if juris_clean == "US" and not is_tripped("usaspending"):
                try:
                    payload = _usaspending_payload([kw_clean], agency_clean, date_clean)
                    async with httpx.AsyncClient(timeout=_HTTP_TIMEOUT, headers=_HEADERS) as client:
                        resp = await client.post(
                            f"{USASPENDING_URL}/search/spending_by_award/",
                            json=payload,
                        )
                        resp.raise_for_status()
                        data = resp.json()
                        awards = [_parse_usaspending_result(r) for r in data.get("results", [])[:10]]
                        source_used = "USASpending.gov"
                        record_success_sync("usaspending")
                except Exception as exc:
                    log.warning("USASpending search_contract_awards failed: %s", exc)
                    record_failure_sync("usaspending")
                    upstream_err = str(exc)
    
            # ── EU TED (EU) ───────────────────────────────────────────────────────
            elif juris_clean == "EU" and not is_tripped("eu_ted"):
                try:
                    payload = {
                        "query": kw_clean,
                        "fields": ["title", "contracting-authority", "estimated-value",
                                   "publication-date", "deadline", "cpv"],
                        "pageSize": 10, "page": 0,
                    }
                    async with httpx.AsyncClient(timeout=_HTTP_TIMEOUT, headers=_HEADERS) as client:
                        resp = await client.post(EU_TED_URL, json=payload)
                        resp.raise_for_status()
                        data = resp.json()
                        for notice in data.get("notices", [])[:10]:
                            awards.append({
                                "award_id":    notice.get("id", ""),
                                "recipient":   notice.get("contracting-authority", {}).get("name", ""),
                                "amount":      notice.get("estimated-value", {}).get("value"),
                                "agency":      notice.get("contracting-authority", {}).get("name", ""),
                                "award_type":  notice.get("notice-type", ""),
                                "naics_code":  notice.get("cpv", ""),
                                "start_date":  notice.get("publication-date", ""),
                                "end_date":    notice.get("deadline", ""),
                                "description": notice.get("title", ""),
                            })
                        source_used = "EU TED"
                        record_success_sync("eu_ted")
                except Exception as exc:
                    log.warning("EU TED search_contract_awards failed: %s", exc)
                    record_failure_sync("eu_ted")
                    upstream_err = str(exc)
    
            # ── UK Find-a-Tender ──────────────────────────────────────────────────
            elif juris_clean == "UK" and not is_tripped("uk_find_a_tender"):
                try:
                    uk_params = {"q": kw_clean, "limit": 10}
                    if agency_clean:
                        uk_params["buyerName"] = agency_clean
                    async with httpx.AsyncClient(timeout=_HTTP_TIMEOUT, headers=_HEADERS) as client:
                        resp = await client.get(UK_FAT_URL, params=uk_params)
                        resp.raise_for_status()
                        data = resp.json()
                        for release in data.get("releases", [])[:10]:
                            tender = release.get("tender", {})
                            buyer  = release.get("buyer", {})
                            awards.append({
                                "award_id":    release.get("id", ""),
                                "recipient":   buyer.get("name", ""),
                                "amount":      tender.get("value", {}).get("amount"),
                                "agency":      buyer.get("name", ""),
                                "award_type":  tender.get("procurementMethod", ""),
                                "naics_code":  "",
                                "start_date":  tender.get("tenderPeriod", {}).get("startDate", ""),
                                "end_date":    tender.get("tenderPeriod", {}).get("endDate", ""),
                                "description": tender.get("title", ""),
                            })
                        source_used = "UK Find-a-Tender"
                        record_success_sync("uk_find_a_tender")
                except Exception as exc:
                    log.warning("UK FAT search_contract_awards failed: %s", exc)
                    record_failure_sync("uk_find_a_tender")
                    upstream_err = str(exc)
    
            # Graceful empty
            if not awards:
                note = f"\n\n*No awards found. {upstream_err[:120] if upstream_err else 'Try broadening the keyword.'}*"
                md = f"""## Contract Awards: {kw_clean} ({juris_clean})
    
    {note}
    
    **Source:** {source_used or 'unavailable'}
    
    {DISCLAIMER}"""
                _validate_canary(md)
                return {
                    "keyword": kw_clean, "jurisdiction": juris_clean,
                    "count": 0, "awards": [],
                    "source": source_used, "markdown": md, "disclaimer": DISCLAIMER,
                    **standard_response_fields("T18", phash, "1.0"),
                }
    
            rows = []
            for a in awards:
                amt   = _fmt_amount(a.get("amount"))
                recip = (a.get("recipient") or "—")[:45]
                ag    = (a.get("agency") or "—")[:35]
                naics = a.get("naics_code") or "—"
                atype = a.get("award_type") or "—"
                rows.append(f"| {recip} | {ag} | {amt} | {naics} | {atype} |")
    
            table = (
                "| Recipient | Agency | Amount | NAICS | Type |\n"
                "|---|---|---|---|---|\n"
                + "\n".join(rows)
            )
            agency_note = f" · Agency: {agency_clean}" if agency_clean else ""
            date_note   = f" · From: {date_clean}" if date_clean else ""
            md = f"""## Contract Awards: {kw_clean} ({juris_clean}){agency_note}{date_note}
    
    **Source:** {source_used}  **Results:** {len(awards)}
    
    {table}
    
    {DISCLAIMER}"""
    
            _validate_canary(md)
    
            out = {
                "keyword": kw_clean, "jurisdiction": juris_clean,
                "agency": agency_clean, "date_from": date_clean,
                "count": len(awards), "awards": awards,
                "source": source_used, "markdown": md, "disclaimer": DISCLAIMER,
                **standard_response_fields("T18", phash, "1.0"),
            }
            set_cached("T18", phash, out, T18_TTL)
            return out
  • Function signature defines input parameters: keyword (str, required), agency (str, default ''), date_from (str, default ''), jurisdiction (str, default 'US'). Docstring describes usage.
    async def search_contract_awards(
        keyword: str,
        agency: str = "",
        date_from: str = "",
        jurisdiction: str = "US",
    ) -> dict:
        """Use this to search government contract awards by keyword or agency.
        Provide a keyword, optional agency name, and optional date range.
        Returns matching awards with values, recipients, and award dates."""
        kw_clean     = keyword.strip()
        agency_clean = agency.strip()
        date_clean   = date_from.strip()
        juris_clean  = jurisdiction.strip().upper()
        params = {
            "keyword": kw_clean, "agency": agency_clean,
            "date_from": date_clean, "jurisdiction": juris_clean,
        }
  • The tool is registered as an MCP tool on the govcon FastMCP sub-server. Line 15: govcon.tool()(search_contract_awards) registers the function as an MCP tool. This sub-server is then mounted in main.py at line 159 with namespace='govcon'.
    govcon = FastMCP("DataNexus GovCon")
    
    govcon.tool()(search_contract_awards)
    govcon.tool()(fetch_vendor_contract_history)
    govcon.tool()(fetch_open_solicitations)
  • The tool name 'govcon_search_contract_awards' is listed in TOOL_REGISTRY in meta.py for the search_datanexus_tools meta-tool discovery system.
    {"name": "govcon_search_contract_awards",            "task": "search government contract awards by keyword or agency"},
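The registry entry above drives keyword discovery. A minimal sketch of how a meta-tool like `search_datanexus_tools` might match a task query against such entries; the `find_tools` function here is hypothetical, not the server's actual implementation:

```python
# Registry entry copied from the listing above; the lookup below is a sketch.
TOOL_REGISTRY = [
    {"name": "govcon_search_contract_awards",
     "task": "search government contract awards by keyword or agency"},
]

def find_tools(query: str) -> list[str]:
    """Hypothetical lookup: return tool names whose task mentions every query word."""
    words = query.lower().split()
    return [entry["name"] for entry in TOOL_REGISTRY
            if all(w in entry["task"].lower() for w in words)]

print(find_tools("contract awards"))  # → ['govcon_search_contract_awards']
```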
  • Helper function _usaspending_payload builds the API request payload for USASpending.gov. _parse_usaspending_result (lines 157-168) normalizes API responses. _fmt_amount (lines 121-126) formats dollar amounts. _validate_canary (lines 91-99) checks for injection patterns.
    def _usaspending_payload(keywords: list, agency: str = "", date_from: str = "", limit: int = 10) -> dict:
        """Build a valid USASpending spending_by_award payload."""
        filters: dict = {
            "keywords": keywords,
            "award_type_codes": _AWARD_TYPE_CODES,
        }
        if agency:
            filters["agencies"] = [{"type": "awarding", "tier": "toptier", "name": agency}]
        if date_from:
            filters["time_period"] = [{
                "start_date": date_from,
                "end_date": datetime.now(timezone.utc).strftime("%Y-%m-%d"),
            }]
        return {
            "filters": filters,
            "fields": [
                "Award ID", "Recipient Name", "Award Amount",
                "Awarding Agency", "Award Type", "NAICS Code",
                "Start Date", "End Date", "Description",
            ],
            "page": 1,
            "limit": limit,
            "sort": "Award Amount",
            "order": "desc",
            "subawards": False,
        }
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description still provides useful behavioral details: verified sources (USASpending.gov, SAM.gov, EU TED, Find-a-Tender), data freshness (4-hour cache), token efficiency, and jurisdiction options. However, it does not mention rate limits or authentication needs, nor does it explicitly state the tool's read-only nature, which would have been helpful.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is reasonably concise, covering purpose, filters, return fields, sources, and an example. However, terms like 'Token-efficient' and 'AI-Ready Markdown' add minor fluff. It could be tightened slightly but is well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the four parameters, 0% schema coverage, no annotations, and the presence of an output schema, the description is self-contained and complete. It explains inputs, output fields (in Markdown), data sources, freshness, and jurisdiction, with an example. No important details are missing for correct usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, but the description explains each parameter's purpose through context and an example. It clarifies jurisdiction values ('US','EU','UK') and default, shows how to use agency and date_from, and gives a concrete example. This compensates well for the lack of schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action 'Search' and the resource 'government contract awards', with specific filters (keyword, agency, date range) and return fields (award amounts, vendors, etc.). It distinguishes from sibling tools like 'govcon_fetch_open_solicitations' (solicitations) and 'govcon_fetch_vendor_contract_history' (vendor history) by being a broad search across awards.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for searching contract awards with filters but does not provide explicit guidance on when to use this tool over siblings or when not to use it. It lacks 'when to use' or 'alternatives' instructions, making it average.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

