
BioContextAI Knowledgebase MCP

Official

bc_get_recruiting_studies_by_location

Find recruiting clinical trials by geographic location to identify research opportunities for patients or participants. Returns paginated results with study type, phase, and condition breakdowns.

Instructions

Find recruiting clinical trials by geographic location. Returns paginated results with summary breakdowns.

Returns: dict: a studies list plus a summary containing the search location, total study count, study type/phase/condition breakdowns, and top recruiting locations; on failure, an error message.

Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| location_country | Yes | Country name (e.g., 'United States', 'Germany') | |
| location_state | No | State/province (e.g., 'California') | |
| location_city | No | City name | |
| condition | No | Medical condition filter (e.g., 'cancer') | |
| study_type | No | 'INTERVENTIONAL', 'OBSERVATIONAL', or 'ALL' | ALL |
| age_range | No | 'CHILD', 'ADULT', 'OLDER_ADULT', or 'ALL' | ALL |
| page_size | No | Results per page (1-1000) | 50 |
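
A plausible call, shown as the JSON arguments an agent would pass (values are illustrative, not from the source):

```json
{
  "location_country": "Germany",
  "location_city": "Berlin",
  "condition": "cancer",
  "study_type": "INTERVENTIONAL",
  "age_range": "ADULT",
  "page_size": 25
}
```

Omitted optional fields fall back to their defaults (`study_type` and `age_range` to `"ALL"`, `page_size` to 50).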

Output Schema

No output schema is defined; the tool returns a plain dict (see Returns above).
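
Although no output schema is published, the handler below shows the shape of the `summary` block on a successful response; a hypothetical example (all values illustrative):

```json
{
  "studies": ["..."],
  "summary": {
    "search_location": {"country": "Germany", "state": null, "city": "Berlin"},
    "total_recruiting_studies": 128,
    "studies_returned": 25,
    "study_type_breakdown": {"INTERVENTIONAL": 20, "OBSERVATIONAL": 5},
    "phase_breakdown": {"PHASE2": 9, "PHASE3": 7, "N/A": 9},
    "top_conditions": {"Breast Cancer": 6},
    "recruiting_locations": {"Berlin": 18}
  }
}
```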

Implementation Reference

  • The core handler function for the 'bc_get_recruiting_studies_by_location' tool. It constructs a query for recruiting clinical trials on ClinicalTrials.gov based on location (country, state, city), optional filters (condition, study_type, age_range), fetches data via API, analyzes it for summaries (breakdowns by type, phase, condition, recruiting locations), and returns enriched results.
    # Imports elided in the original excerpt:
    from typing import Annotated, Any, Dict, Optional, Union

    import requests
    from pydantic import Field

    @core_mcp.tool()
    def get_recruiting_studies_by_location(
        location_country: Annotated[str, Field(description="Country name (e.g., 'United States', 'Germany')")],
        location_state: Annotated[Optional[str], Field(description="State/province (e.g., 'California')")] = None,
        location_city: Annotated[Optional[str], Field(description="City name")] = None,
        condition: Annotated[Optional[str], Field(description="Medical condition filter (e.g., 'cancer')")] = None,
        study_type: Annotated[Optional[str], Field(description="'INTERVENTIONAL', 'OBSERVATIONAL', or 'ALL'")] = "ALL",
        age_range: Annotated[Optional[str], Field(description="'CHILD', 'ADULT', 'OLDER_ADULT', or 'ALL'")] = "ALL",
        page_size: Annotated[int, Field(description="Results per page (1-1000)", ge=1, le=1000)] = 50,
    ) -> Union[Dict[str, Any], dict]:
        """Find recruiting clinical trials by geographic location. Returns paginated results with summary breakdowns.
    
        Returns:
            dict: Studies list with summary containing search location, total studies, study type/phase/condition breakdowns, recruiting locations or error message.
        """
        if not location_country:
            return {"error": "Location country must be provided"}
    
        # Build location query using SEARCH operator to ensure geographic coherence
        location_parts = [f"AREA[LocationCountry]{location_country}"]
    
        if location_state:
            location_parts.append(f"AREA[LocationState]{location_state}")
    
        if location_city:
            location_parts.append(f"AREA[LocationCity]{location_city}")
    
        # Combine location parts with SEARCH operator
        location_query = f"SEARCH[Location]({' AND '.join(location_parts)})"
    
        # Build main query components
        query_parts = [
            "AREA[OverallStatus]RECRUITING",  # Only recruiting studies
            location_query,
        ]
    
        if condition:
            query_parts.append(f"AREA[ConditionSearch]{condition}")
    
        if study_type and study_type != "ALL":
            query_parts.append(f"AREA[StudyType]{study_type}")
    
        if age_range and age_range != "ALL":
            query_parts.append(f"AREA[StdAge]{age_range}")
    
        # Join query parts with AND
        query = " AND ".join(query_parts)
    
        url = f"https://clinicaltrials.gov/api/v2/studies?query.term={query}&pageSize={page_size}&sort=LastUpdatePostDate:desc&format=json"
    
        try:
            response = requests.get(url, timeout=30)  # bound the request so a slow API cannot hang the tool
            response.raise_for_status()
            data = response.json()
    
            # Add summary statistics and location analysis
            if "studies" in data:
                total_studies = data.get("totalCount", len(data["studies"]))
    
                # Analyze locations and conditions
                location_counts: dict[str, int] = {}
                condition_counts: dict[str, int] = {}
                study_type_counts: dict[str, int] = {}
                phase_counts: dict[str, int] = {}
    
                for study in data["studies"]:
                    # Extract study type
                    design_module = study.get("protocolSection", {}).get("designModule", {})
                    design_study_type = design_module.get("studyType", "Unknown")
                    study_type_counts[design_study_type] = study_type_counts.get(design_study_type, 0) + 1
    
                    # Extract phase
                    phases = design_module.get("phases", [])
                    if phases:
                        for phase in phases:
                            phase_counts[phase] = phase_counts.get(phase, 0) + 1
                    else:
                        phase_counts["N/A"] = phase_counts.get("N/A", 0) + 1
    
                    # Extract conditions
                    conditions_module = study.get("protocolSection", {}).get("conditionsModule", {})
                    conditions = conditions_module.get("conditions", [])
                    if conditions:
                        for cond in conditions[:3]:  # Limit to first 3 conditions
                            condition_counts[cond] = condition_counts.get(cond, 0) + 1
    
                    # Extract specific locations
                    contacts_module = study.get("protocolSection", {}).get("contactsLocationsModule", {})
                    locations = contacts_module.get("locations", [])
                    for location in locations:
                        if location.get("status") == "RECRUITING":
                            city = location.get("city", "Unknown")
                            state = location.get("state", "")
                            location_key = f"{city}, {state}" if state else city
                            location_counts[location_key] = location_counts.get(location_key, 0) + 1
    
                # Add summary to response
                data["summary"] = {
                    "search_location": {
                        "country": location_country,
                        "state": location_state,
                        "city": location_city,
                    },
                    "total_recruiting_studies": total_studies,
                    "studies_returned": len(data["studies"]),
                    "study_type_breakdown": study_type_counts,
                    "phase_breakdown": phase_counts,
                    "top_conditions": dict(sorted(condition_counts.items(), key=lambda x: x[1], reverse=True)[:10]),
                    "recruiting_locations": dict(sorted(location_counts.items(), key=lambda x: x[1], reverse=True)[:15]),
                }
    
            return data
        except requests.exceptions.RequestException as e:
            return {"error": f"Failed to fetch recruiting studies by location: {e!s}"}
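The query-assembly step of the handler above can be exercised in isolation; a minimal sketch (the `build_query` helper is illustrative, not part of the library):

```python
def build_query(country, state=None, city=None, condition=None,
                study_type="ALL", age_range="ALL"):
    """Mirror the handler's query assembly for the ClinicalTrials.gov v2 API."""
    # Location terms are grouped under SEARCH[Location] so they must
    # all match at the same study site (geographic coherence).
    location_parts = [f"AREA[LocationCountry]{country}"]
    if state:
        location_parts.append(f"AREA[LocationState]{state}")
    if city:
        location_parts.append(f"AREA[LocationCity]{city}")

    parts = [
        "AREA[OverallStatus]RECRUITING",  # only recruiting studies
        f"SEARCH[Location]({' AND '.join(location_parts)})",
    ]
    if condition:
        parts.append(f"AREA[ConditionSearch]{condition}")
    if study_type != "ALL":
        parts.append(f"AREA[StudyType]{study_type}")
    if age_range != "ALL":
        parts.append(f"AREA[StdAge]{age_range}")
    return " AND ".join(parts)

query = build_query("Germany", city="Berlin", condition="cancer")
# query combines the status, grouped location, and condition filters
```

Note that the grouping matters: without `SEARCH[Location](...)`, a study with a Berlin site and a separate non-German site could match the country and city terms independently.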
  • Pydantic schema definitions for tool inputs using Annotated and Field for validation, descriptions, and defaults.
    def get_recruiting_studies_by_location(
        location_country: Annotated[str, Field(description="Country name (e.g., 'United States', 'Germany')")],
        location_state: Annotated[Optional[str], Field(description="State/province (e.g., 'California')")] = None,
        location_city: Annotated[Optional[str], Field(description="City name")] = None,
        condition: Annotated[Optional[str], Field(description="Medical condition filter (e.g., 'cancer')")] = None,
        study_type: Annotated[Optional[str], Field(description="'INTERVENTIONAL', 'OBSERVATIONAL', or 'ALL'")] = "ALL",
        age_range: Annotated[Optional[str], Field(description="'CHILD', 'ADULT', 'OLDER_ADULT', or 'ALL'")] = "ALL",
        page_size: Annotated[int, Field(description="Results per page (1-1000)", ge=1, le=1000)] = 50,
    ) -> Union[Dict[str, Any], dict]:
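The per-study tallying in the handler can also be expressed compactly with `collections.Counter`; a sketch assuming the same v2 study structure (sample data below is illustrative):

```python
from collections import Counter


def summarize_studies(studies: list) -> dict:
    """Tally study types and phases the way the handler does, via Counter."""
    type_counts: Counter = Counter()
    phase_counts: Counter = Counter()
    for study in studies:
        design = study.get("protocolSection", {}).get("designModule", {})
        type_counts[design.get("studyType", "Unknown")] += 1
        # Studies without phases (e.g., observational) count under "N/A".
        phase_counts.update(design.get("phases") or ["N/A"])
    return {
        "study_type_breakdown": dict(type_counts),
        "phase_breakdown": dict(phase_counts),
    }


sample = [
    {"protocolSection": {"designModule": {"studyType": "INTERVENTIONAL",
                                          "phases": ["PHASE2", "PHASE3"]}}},
    {"protocolSection": {"designModule": {"studyType": "OBSERVATIONAL"}}},
]
summary = summarize_studies(sample)
# {'study_type_breakdown': {'INTERVENTIONAL': 1, 'OBSERVATIONAL': 1},
#  'phase_breakdown': {'PHASE2': 1, 'PHASE3': 1, 'N/A': 1}}
```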
  • Imports the tool handler function into the clinicaltrials module namespace, enabling its registration via @tool decorator when the module is imported.
    from ._get_recruiting_studies_by_location import get_recruiting_studies_by_location
  • Imports all from clinicaltrials submodule, which triggers loading of tool handlers and their registration on the core_mcp FastMCP instance.
    from .clinicaltrials import *
  • Defines the 'BC' FastMCP server instance. Tool functions decorated with @core_mcp.tool() are registered here, and later imported into the main app with 'bc_' prefix.
    core_mcp = FastMCP(  # type: ignore
        "BC",
        instructions="Provides access to biomedical knowledge bases.",
    )
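The registration flow described above (decorate, import, re-expose with a `bc_` prefix) can be illustrated with a minimal stdlib stand-in; this is a sketch of the pattern only, not FastMCP's actual API:

```python
class MiniMCP:
    """Toy server: tools register via a decorator, then mount under a prefix."""

    def __init__(self, name: str):
        self.name = name
        self.tools: dict = {}

    def tool(self):
        def register(fn):
            # Registration happens at import time, when the decorator runs.
            self.tools[fn.__name__] = fn
            return fn
        return register

    def mount(self, other: "MiniMCP", prefix: str) -> None:
        # Re-expose the sub-server's tools as '<prefix>_<name>'.
        for tool_name, fn in other.tools.items():
            self.tools[f"{prefix}_{tool_name}"] = fn


core = MiniMCP("BC")


@core.tool()
def get_recruiting_studies_by_location(location_country: str) -> dict:
    return {"country": location_country}


app = MiniMCP("main")
app.mount(core, prefix="bc")
# app.tools now exposes 'bc_get_recruiting_studies_by_location'
```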
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'Returns paginated results with summary breakdowns,' which adds useful context about output format and pagination. However, it lacks details on error handling, rate limits, authentication needs, or data freshness, leaving gaps in behavioral understanding for a tool with 7 parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, with the core purpose stated first. The second sentence adds useful output details. However, the 'Returns:' section is somewhat redundant given the output schema, slightly reducing efficiency. Overall, it is well-structured with minimal waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, pagination) and the presence of an output schema, the description is reasonably complete. It covers the purpose and output format, and the schema handles parameter details. However, it lacks usage guidelines and full behavioral context, which slightly reduces completeness for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the schema already documents all parameters thoroughly. The description does not add any parameter-specific information beyond what the schema provides, such as explaining interactions between location fields or default behaviors. This meets the baseline for high schema coverage but does not enhance parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Find recruiting clinical trials by geographic location.' It specifies the verb ('Find'), resource ('recruiting clinical trials'), and scope ('by geographic location'). However, it does not explicitly differentiate from sibling tools like 'bc_get_studies_by_condition' or 'bc_search_studies', which might have overlapping functionality, preventing a score of 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions 'Returns paginated results with summary breakdowns,' but does not specify prerequisites, exclusions, or comparisons to sibling tools such as 'bc_get_studies_by_condition' or 'bc_search_studies'. This lack of contextual guidance limits its utility for an AI agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
