
search_clinical_trials

Search ClinicalTrials.gov for clinical studies by condition, status, and intervention. Retrieve trial details like eligibility, phase, sponsor, and location.

Instructions

Search ClinicalTrials.gov for clinical studies. Read-only operation. No authentication required. Uses ClinicalTrials.gov v2 public API (no rate limit documented). Returns up to 10 results per call. No pagination. Returns 'No clinical trials found.' if no results match. Use for: active trials, recruiting studies, eligibility criteria, phase information, sponsor details, and trial locations.

Input Schema

Name          Required  Description                                                      Default
condition     Yes       Disease or condition, e.g. 'pediatric epilepsy', 'lung cancer'
status        No        Trial status: RECRUITING, COMPLETED, or ALL                      RECRUITING
intervention  No        Optional drug or intervention name to narrow results
max_results   No        Number of trials to return, between 1 and 10                     5
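An illustrative set of arguments matching this input schema (the values are examples, not taken from a real call):

```python
# Example arguments for the search_clinical_trials tool (illustrative values).
args = {
    "condition": "pediatric epilepsy",   # required
    "status": "RECRUITING",              # default when omitted
    "intervention": "cannabidiol",       # optional narrowing filter
    "max_results": 5,                    # clamped to 1-10 by the tool
}
```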

Output Schema

Name    Required  Description
result  Yes       Formatted list of trials with NCT ID, title, phase, status, sponsor, conditions, interventions, and eligibility criteria. Returns a 'no results' message if nothing is found.

Implementation Reference

  • Core handler function that queries the ClinicalTrials.gov v2 API with condition, status, intervention, and max_results parameters. Returns a list of parsed trial dicts.
    import requests  # third-party HTTP client; required by the handler
    from typing import Any

    # Base endpoint for the ClinicalTrials.gov v2 API (assumed; the constant is not shown in the excerpt).
    CT_BASE = "https://clinicaltrials.gov/api/v2/studies"

    def search_clinical_trials(
        condition: str,
        status: str = "RECRUITING",
        intervention: str = "",
        max_results: int = 5,
    ) -> list[dict]:
        """Search ClinicalTrials.gov v2 API. No API key required."""
        condition = (condition or "").strip()
        if not condition:
            return []
        max_results = max(1, min(max_results, 20))  # core clamp is 1-20; the MCP wrapper further clamps to 10
        params: dict[str, Any] = {
            "format": "json",
            "query.cond": condition,
            "pageSize": max_results,
            "countTotal": "true",
        }
        if intervention:
            params["query.intr"] = intervention.strip()
        if status and status.upper() != "ALL":
            params["filter.overallStatus"] = status.upper()
    
        try:
            r = requests.get(CT_BASE, params=params, timeout=15)
            r.raise_for_status()
            data = r.json()
        except Exception as e:
            raise RuntimeError(f"ClinicalTrials.gov search failed: {e}") from e
    
        studies = data.get("studies") or []
        return [t for t in (_parse_trial(s) for s in studies) if t]
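As a quick check of what the handler sends on the wire, the params dict above serializes to a query string like the following (a sketch using the standard library; requests encodes these parameters in essentially the same way):

```python
from urllib.parse import urlencode

# Rebuild the params dict the way the handler would for a sample call.
params = {
    "format": "json",
    "query.cond": "lung cancer",
    "pageSize": 5,
    "countTotal": "true",
    "query.intr": "osimertinib",           # only present when an intervention is given
    "filter.overallStatus": "RECRUITING",  # omitted when status == "ALL"
}
query = urlencode(params)
print(query)
```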
  • Parses the raw API response (study dict) into a structured dict with fields like nct_id, title, status, phase, conditions, interventions, eligibility criteria, locations, etc.
    def _parse_trial(study: dict) -> dict | None:
        try:
            proto = study.get("protocolSection") or {}
            id_mod = proto.get("identificationModule") or {}
            status_mod = proto.get("statusModule") or {}
            design_mod = proto.get("designModule") or {}
            desc_mod = proto.get("descriptionModule") or {}
            elig_mod = proto.get("eligibilityModule") or {}
            sponsor_mod = proto.get("sponsorCollaboratorsModule") or {}
            conditions_mod = proto.get("conditionsModule") or {}
            interventions_mod = proto.get("armsInterventionsModule") or {}
            contacts_mod = proto.get("contactsLocationsModule") or {}
            outcomes_mod = proto.get("outcomesModule") or {}
    
            primary_outcomes = outcomes_mod.get("primaryOutcomes") or []
            primary_outcome = primary_outcomes[0].get("measure", "") if primary_outcomes else ""
            if len(primary_outcome) > 300:
                primary_outcome = primary_outcome[:297] + "..."

            secondary_outcomes_list = outcomes_mod.get("secondaryOutcomes") or []
            secondary_outcome_strs = [
                o.get("measure", "") for o in secondary_outcomes_list[:3] if isinstance(o, dict)
            ]
            secondary_outcome = "; ".join(secondary_outcome_strs)
            if len(secondary_outcome) > 300:
                secondary_outcome = secondary_outcome[:297] + "..."
    
            nct_id = _ct_get(id_mod, "nctId")
            if not nct_id:
                return None
    
            interventions = interventions_mod.get("interventions") or []
            intervention_names = "; ".join(
                _ct_get(i, "name") for i in interventions if isinstance(i, dict)
            )
            locations = contacts_mod.get("locations") or []
            location_strs = []
            for loc in locations[:3]:
                if not isinstance(loc, dict):
                    continue
                parts = [_ct_get(loc, "facility"), _ct_get(loc, "city"), _ct_get(loc, "country")]
                loc_str = ", ".join(p for p in parts if p)
                if loc_str:
                    location_strs.append(loc_str)
    
            criteria = _ct_get(elig_mod, "eligibilityCriteria")
            if len(criteria) > 600:
                criteria = criteria[:597] + "..."
    
            summary = _ct_get(desc_mod, "briefSummary")
            if len(summary) > 400:
                summary = summary[:397] + "..."
    
            return {
                "nct_id": nct_id,
                "title": _ct_get(id_mod, "briefTitle"),
                "status": _ct_get(status_mod, "overallStatus"),
                "phase": "; ".join(design_mod.get("phases") or []),
                "conditions": "; ".join(conditions_mod.get("conditions") or []),
                "interventions": intervention_names,
                "brief_summary": summary,
                "eligibility_criteria": criteria,
                "min_age": _ct_get(elig_mod, "minimumAge"),
                "max_age": _ct_get(elig_mod, "maximumAge"),
                "sex": _ct_get(elig_mod, "sex"),
                "sponsor": _ct_get(sponsor_mod, "leadSponsor", "name"),
                "start_date": _ct_get(status_mod, "startDateStruct", "date"),
                "locations": location_strs,
                "url": f"https://clinicaltrials.gov/study/{nct_id}",
                "primary_outcome": primary_outcome,
                "secondary_outcomes": secondary_outcome,
            }
        except (KeyError, TypeError, AttributeError):
            return None
  • Registration of the 'search_clinical_trials' tool with the FastMCP server, including description and output schema.
    @mcp.tool(
        description=(
            "Search ClinicalTrials.gov for clinical studies. "
            "Read-only operation. No authentication required. "
            "Uses ClinicalTrials.gov v2 public API (no rate limit documented). "
            "Returns up to 10 results per call. No pagination. "
            "Returns 'No clinical trials found.' if no results match. "
            "Use for: active trials, recruiting studies, eligibility criteria, "
            "phase information, sponsor details, and trial locations."
        ),
        output_schema={
            "type": "object",
            "properties": {
                "result": {
                    "type": "string",
                    "description": "Formatted list of trials with NCT ID, title, phase, status, sponsor, conditions, interventions, and eligibility criteria. Returns 'no results' message if nothing found."
                }
            },
            "required": ["result"]
        }
    )
  • MCP tool wrapper function that accepts user-facing parameters with Annotated type hints, delegates to the core handler in tools.py, and formats results as a string.
    def search_clinical_trials(
        condition: Annotated[str, "Disease or condition e.g. 'pediatric epilepsy', 'lung cancer'"],
        status: Annotated[str, "Trial status: RECRUITING, COMPLETED, or ALL"] = "RECRUITING",
        intervention: Annotated[str, "Optional drug or intervention name to narrow results"] = "",
        max_results: Annotated[int, "Number of trials to return, between 1 and 10"] = 5,
    ) -> str:
        """
        Search ClinicalTrials.gov for clinical studies.
    
        Use for: active trials, recruiting studies, trial eligibility criteria,
        phase information, sponsor details, and trial locations.
    
        Args:
            condition: Disease or condition (e.g. "pediatric epilepsy", "lung cancer")
            status: Trial status — RECRUITING, COMPLETED, or ALL (default: RECRUITING)
            intervention: Optional drug or intervention name to narrow results
            max_results: Number of trials to return (1-10, default 5)
        
        Returns:
            Formatted string with NCT ID, title, phase, status, sponsor, conditions,
            interventions, and eligibility criteria for each trial.
            Returns a "no results" message if nothing is found.
            Handles API errors gracefully with descriptive error messages.
        
        Notes:
            - status defaults to RECRUITING if not specified
            - intervention is optional and narrows results when provided
            - max_results is clamped to 1-10 regardless of input
            - Requires no API key; uses ClinicalTrials.gov v2 public API
        """
        from aria_mcp_server.tools import search_clinical_trials as _search, format_trials_for_claude as _fmt
        max_results = max(1, min(max_results, 10))
        trials = _search(
            condition=condition,
            status=status,
            intervention=intervention,
            max_results=max_results,
        )
        return _fmt(trials)
  • Formats the list of trial dicts into a human-readable string for display to the user.
    def format_trials_for_claude(trials: list[dict]) -> str:
        """Format ClinicalTrials.gov results as readable text."""
        if not trials:
            return "No clinical trials found matching those criteria."
        lines = []
        for i, t in enumerate(trials, 1):
            locations_str = "; ".join(t.get("locations") or []) or "N/A"
            lines.append("\n".join([
                f"[Trial {i}]",
                f"NCT ID: {t.get('nct_id') or 'N/A'}",
                f"Title: {t.get('title') or 'N/A'}",
                f"Status: {t.get('status') or 'N/A'}",
                f"Phase: {t.get('phase') or 'N/A'}",
                f"Condition(s): {t.get('conditions') or 'N/A'}",
                f"Intervention(s): {t.get('interventions') or 'N/A'}",
                f"Sponsor: {t.get('sponsor') or 'N/A'}",
                f"Age Range: {t.get('min_age') or 'N/A'} – {t.get('max_age') or 'N/A'}",
                f"Sex: {t.get('sex') or 'N/A'}",
                f"Locations: {locations_str}",
                f"Summary: {t.get('brief_summary') or 'N/A'}",
                f"Primary Outcome: {t.get('primary_outcome') or 'N/A'}",
                f"Secondary Outcomes: {t.get('secondary_outcomes') or 'N/A'}",
                f"Eligibility: {t.get('eligibility_criteria') or 'N/A'}",
                f"URL: {t.get('url') or 'N/A'}",
                "",
            ]))
        return "\n".join(lines).strip()
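To illustrate the "value or 'N/A'" fallback style the formatter relies on, here is a stand-alone rendering of a few fields from a sparse trial dict (a sketch, not the full formatter):

```python
def render_header(t: dict) -> str:
    # Same "value or 'N/A'" fallback pattern as format_trials_for_claude.
    locations_str = "; ".join(t.get("locations") or []) or "N/A"
    return "\n".join([
        f"NCT ID: {t.get('nct_id') or 'N/A'}",
        f"Title: {t.get('title') or 'N/A'}",
        f"Locations: {locations_str}",
    ])

sparse = {"nct_id": "NCT01234567", "locations": []}
print(render_header(sparse))
```

Missing or empty fields render as "N/A" rather than as blanks, so the output stays scannable even when the API returns partial records.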
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description fully covers behavioral traits: read-only operation, no authentication required, API source, undocumented rate limit, max 10 results, no pagination, and the specific empty response string. This goes well beyond the minimal expectations and provides complete transparency about what the tool does and its limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise at about 70 words, with each sentence serving a distinct purpose: stating the action, declaring read-only and auth, noting API details, listing constraints, and suggesting use cases. It is well-structured and front-loaded with the primary verb and resource.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that an output schema exists, the description does not need to explain return values. It covers the source, API version, authentication, request limits, pagination, empty result behavior, and appropriate use cases. For a search tool with no complex nested structures, this is comprehensive and leaves no major gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema already has 100% description coverage for all 4 parameters, each with clear documentation. The description adds no new parameter-level details; it only provides a general usage list that indirectly hints at the condition parameter's scope. Per the calibration rule, when schema coverage is high, baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Search ClinicalTrials.gov for clinical studies', clearly identifying the verb (search), resource (clinical trials from a specific database), and scope. It distinguishes from sibling tools (search_isrctn and search_pubmed) by naming the source, leaving no ambiguity about which database is targeted.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit use cases: 'Use for: active trials, recruiting studies, eligibility criteria, phase information, sponsor details, and trial locations.' This gives clear context on when to invoke the tool. While it does not directly state alternatives or exclusions, the sibling tools cover different databases, so the guidance is sufficient for an agent to differentiate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
