
cBioPortal MCP Server

by pickleton89

get_multiple_studies

Retrieve detailed information for multiple cancer genomics studies simultaneously from cBioPortal to analyze genomic data, mutations, and clinical information.

Instructions

Get details for multiple studies concurrently.

Input Schema

Name        Required   Description       Default
study_ids   Yes        (not provided)    (not provided)

Implementation Reference

  • The primary handler function implementing get_multiple_studies. Fetches details for multiple studies concurrently using asyncio.gather, handles errors per study, and returns a dictionary with results and metadata.
    async def get_multiple_studies(self, study_ids: List[str]) -> Dict:
        """
        Get details for multiple studies concurrently.
    
        This method demonstrates the power of async concurrency by fetching
        multiple studies in parallel, which is much faster than sequential requests.
    
        Args:
            study_ids: List of study IDs to fetch
    
        Returns:
            Dictionary mapping study IDs to their details, with metadata about the operation
        """
        if not study_ids:
            return {
                "studies": {},
                "metadata": {"count": 0, "errors": 0, "concurrent": True},
            }
    
        # Create a reusable async function for fetching a single study
        async def fetch_study(study_id):
            try:
                data = await self.api_client.make_api_request(f"studies/{study_id}")
                return {"study_id": study_id, "data": data, "success": True}
            except Exception as e:
                return {"study_id": study_id, "error": str(e), "success": False}
    
        # Create tasks for all study IDs and run them concurrently
        tasks = [fetch_study(study_id) for study_id in study_ids]
        start_time = time.perf_counter()
        # Use asyncio.gather to execute all tasks concurrently
        results = await asyncio.gather(*tasks)
        end_time = time.perf_counter()
    
        # Process results into a structured response
        studies_dict = {}
        error_count = 0
    
        for result in results:
            if result["success"]:
                studies_dict[result["study_id"]] = result["data"]
            else:
                studies_dict[result["study_id"]] = {"error": result["error"]}
                error_count += 1
    
        return {
            "studies": studies_dict,
            "metadata": {
                "count": len(study_ids),
                "errors": error_count,
                "execution_time": round(end_time - start_time, 3),
                "concurrent": True,
            },
        }
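    The gather pattern above can be reduced to a self-contained sketch. Here, fake_request is a hypothetical stub standing in for api_client.make_api_request, and the study IDs are made up; the point is that the per-study try/except keeps one failed request from failing the whole batch.

    ```python
    import asyncio
    import time

    async def fake_request(endpoint: str) -> dict:
        """Hypothetical stub for api_client.make_api_request; 'bad_id' simulates a failure."""
        await asyncio.sleep(0.01)
        study_id = endpoint.split("/", 1)[1]
        if study_id == "bad_id":
            raise ValueError(f"Study not found: {study_id}")
        return {"studyId": study_id, "name": f"Study {study_id}"}

    async def get_multiple_studies(study_ids):
        async def fetch_study(study_id):
            # Per-task error isolation: a failure is returned as data, not raised.
            try:
                data = await fake_request(f"studies/{study_id}")
                return {"study_id": study_id, "data": data, "success": True}
            except Exception as e:
                return {"study_id": study_id, "error": str(e), "success": False}

        start = time.perf_counter()
        # All requests run concurrently; results come back in input order.
        results = await asyncio.gather(*(fetch_study(s) for s in study_ids))
        elapsed = time.perf_counter() - start

        studies, errors = {}, 0
        for r in results:
            if r["success"]:
                studies[r["study_id"]] = r["data"]
            else:
                studies[r["study_id"]] = {"error": r["error"]}
                errors += 1
        return {
            "studies": studies,
            "metadata": {
                "count": len(study_ids),
                "errors": errors,
                "execution_time": round(elapsed, 3),
                "concurrent": True,
            },
        }

    result = asyncio.run(get_multiple_studies(["study_a", "bad_id"]))
    print(result["metadata"]["errors"])  # 1
    ```

    Because asyncio.gather preserves input order and each task catches its own exception, a batch of N IDs always yields N entries, each either data or an error record.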
  • Registers 'get_multiple_studies' (line 108 in the tool_methods list) as an MCP tool via FastMCP.add_tool, in a loop over a predefined list of tool methods.
    """Register tool methods as MCP tools."""
    # List of methods to register as tools (explicitly defined)
    tool_methods = [
        # Pagination utilities
        "paginate_results",
        "collect_all_results",
        # Studies endpoints
        "get_cancer_studies",
        "get_cancer_types",
        "search_studies",
        "get_study_details",
        "get_multiple_studies",
        # Genes endpoints
        "search_genes",
        "get_genes",
        "get_multiple_genes",
        "get_mutations_in_gene",
        # Samples endpoints
        "get_samples_in_study",
        "get_sample_list_id",
        # Molecular profiles endpoints
        "get_molecular_profiles",
        "get_clinical_data",
        "get_gene_panels_for_study",
        "get_gene_panel_details",
    ]
    
    for method_name in tool_methods:
        if hasattr(self, method_name):
            method = getattr(self, method_name)
            self.mcp.add_tool(method)
            logger.debug(f"Registered tool: {method_name}")
        else:
            logger.warning(f"Method {method_name} not found for tool registration")
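    The registration loop can be exercised in isolation. The sketch below is a minimal stand-in, not the real server: FakeMCP is a hypothetical substitute for FastMCP, and both method names are illustrative (one real, one deliberately missing to exercise the warning branch).

    ```python
    import logging

    logger = logging.getLogger(__name__)

    class FakeMCP:
        """Hypothetical stand-in for FastMCP: records registered callables by name."""
        def __init__(self):
            self.tools = {}

        def add_tool(self, fn):
            self.tools[fn.__name__] = fn

    class Server:
        """Minimal server with one real tool method and one missing name."""
        def __init__(self):
            self.mcp = FakeMCP()

        async def get_multiple_studies(self, study_ids):
            return {"studies": {}, "metadata": {"count": 0}}

        def register_tools(self):
            # Only explicitly listed methods are registered; unknown names log a warning.
            for method_name in ["get_multiple_studies", "missing_tool"]:
                if hasattr(self, method_name):
                    self.mcp.add_tool(getattr(self, method_name))
                else:
                    logger.warning(f"Method {method_name} not found for tool registration")

    server = Server()
    server.register_tools()
    print(sorted(server.mcp.tools))  # ['get_multiple_studies']
    ```

    The explicit allow-list plus hasattr check means adding a new tool is a one-line change, and a typo in the list degrades to a warning rather than a crash at startup.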
  • Thin wrapper/delegator in the main server class that forwards the get_multiple_studies call to the StudiesEndpoints instance (self.studies).
    async def get_multiple_studies(self, study_ids: List[str]) -> Dict:
        """Get details for multiple studies concurrently."""
        return await self.studies.get_multiple_studies(study_ids)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but states only the basic operation. It does not disclose behavioral traits such as whether this is a read-only operation, rate limits, authentication requirements, error handling for invalid IDs, or what 'details' include. The mention of 'concurrently' hints at performance but lacks specifics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero wasted words. It's front-loaded with the core purpose and appropriately sized for the tool's apparent simplicity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, 0% schema coverage, and no output schema, the description is inadequate. It doesn't explain what 'details' include, how results are returned, error conditions, or performance implications of 'concurrently', leaving significant gaps for a tool that presumably returns structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate but adds no parameter information. It doesn't explain what 'study_ids' should contain (e.g., format, source, valid ranges), leaving the single parameter undocumented beyond the schema's basic type definition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get details') and resource ('multiple studies'), specifying the concurrent nature. It distinguishes from single-study tools like 'get_study_details' by emphasizing multiple studies, though it doesn't explicitly differentiate from other multi-entity tools like 'get_multiple_genes'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_study_details' for single studies or 'search_studies' for filtered searches. It mentions 'concurrently' but doesn't explain why this matters or when batch retrieval is preferable.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
