cBioPortal MCP Server

by pickleton89

get_study_details

Retrieve comprehensive information about a specific cancer genomics study, including details on genomic data, mutations, and clinical information from cBioPortal.

Instructions

Get detailed information for a specific cancer study.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| study_id | Yes | | |
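The JSON Schema view is not rendered above. Based on the table and the handler signature, the input schema is presumably equivalent to something like the following sketch (a reconstruction, not the server's literal output):

```python
# Hypothetical reconstruction of the tool's input schema from the table above.
# The real schema is generated by FastMCP from the method signature.
input_schema = {
    "type": "object",
    "properties": {
        "study_id": {
            "type": "string",
            # No description is published for this parameter.
        },
    },
    "required": ["study_id"],
}
```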

Implementation Reference

  • Core handler function that performs input validation, makes the API request to fetch study details from 'studies/{study_id}', and handles errors.
    @handle_api_errors("get study details")
    async def get_study_details(self, study_id: str) -> Dict[str, Any]:
        """
        Get detailed information for a specific cancer study.
    
        Args:
            study_id: The ID of the cancer study
    
        Returns:
            Dictionary containing study details
        """
        # Input Validation
        validate_study_id(study_id)
    
        endpoint = f"studies/{study_id}"
        try:
            study = await self.api_client.make_api_request(endpoint)
            return {"study": study}
        except Exception as e:
            return {"error": f"Failed to get study details for {study_id}: {str(e)}"}
  • Proxy handler method on the main server class that delegates to the StudiesEndpoints implementation; this is the method registered as the MCP tool.
    async def get_study_details(self, study_id: str) -> Dict[str, Any]:
        """Get detailed information for a specific cancer study."""
        return await self.studies.get_study_details(study_id)
  • Registers all MCP tools, including 'get_study_details', by dynamically adding the server instance methods to FastMCP.
    def _register_tools(self):
        """Register tool methods as MCP tools."""
        # List of methods to register as tools (explicitly defined)
        tool_methods = [
            # Pagination utilities
            "paginate_results",
            "collect_all_results",
            # Studies endpoints
            "get_cancer_studies",
            "get_cancer_types",
            "search_studies",
            "get_study_details",
            "get_multiple_studies",
            # Genes endpoints
            "search_genes",
            "get_genes",
            "get_multiple_genes",
            "get_mutations_in_gene",
            # Samples endpoints
            "get_samples_in_study",
            "get_sample_list_id",
            # Molecular profiles endpoints
            "get_molecular_profiles",
            "get_clinical_data",
            "get_gene_panels_for_study",
            "get_gene_panel_details",
        ]
    
        for method_name in tool_methods:
            if hasattr(self, method_name):
                method = getattr(self, method_name)
                self.mcp.add_tool(method)
                logger.debug(f"Registered tool: {method_name}")
            else:
                logger.warning(f"Method {method_name} not found for tool registration")
  • Input validation function for the study_id parameter, ensuring it is a non-empty string. Called within the handler.
    def validate_study_id(study_id: str) -> None:
        """
        Validate study ID parameter.
    
        Args:
            study_id: Study identifier
    
        Raises:
            TypeError: If study_id is not a string
            ValueError: If study_id is empty
        """
        if not isinstance(study_id, str):
            raise TypeError("study_id must be a string")
        if not study_id:
            raise ValueError("study_id cannot be empty")
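For example, the validator passes silently for a plausible study ID and raises before any API call is made for invalid inputs (`"acc_tcga"` here is just an illustrative identifier):

```python
# validate_study_id as defined above.
def validate_study_id(study_id: str) -> None:
    if not isinstance(study_id, str):
        raise TypeError("study_id must be a string")
    if not study_id:
        raise ValueError("study_id cannot be empty")


validate_study_id("acc_tcga")  # a valid string returns None

try:
    validate_study_id("")
except ValueError as e:
    print(e)  # study_id cannot be empty

try:
    validate_study_id(123)  # not a string
except TypeError as e:
    print(e)  # study_id must be a string
```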
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool retrieves 'detailed information' but doesn't specify what that entails (e.g., clinical data, molecular profiles, or other metadata), whether it's a read-only operation, potential rate limits, or error handling. This leaves significant gaps for a tool in a complex domain like cancer studies.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded with the core action and resource, making it easy to parse quickly, which is ideal for conciseness in this context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of cancer study data and the lack of annotations or output schema, the description is insufficient. It doesn't explain what 'detailed information' includes, how it relates to sibling tools (e.g., 'get_clinical_data'), or potential behavioral aspects like data format or access constraints, leaving the agent with incomplete context for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has one parameter with no description coverage, so the schema itself provides no semantic context. The description implies that a specific cancer study must be identified via 'study_id', adding minimal meaning beyond the parameter name. However, it doesn't clarify the format of 'study_id' or where to obtain one (e.g., from 'get_cancer_studies'), so it only partially compensates for the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and resource 'detailed information for a specific cancer study,' which is specific and actionable. However, it doesn't explicitly distinguish this tool from sibling tools like 'get_cancer_studies' or 'get_multiple_studies,' which might also retrieve study information but with different scopes or formats.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With siblings like 'get_cancer_studies' (likely for listing studies) and 'get_multiple_studies' (likely for batch retrieval), there's no indication of context, prerequisites, or exclusions, leaving the agent to infer usage based on tool names alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
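To make the critiques above concrete, here is one hedged example of a fuller description that addresses the behavior, parameter, and usage-guidance gaps. The wording and field list are illustrative suggestions only, not the project's actual text:

```python
# Illustrative rewrite of the tool description. The listed fields (name,
# description, PMID, citation, sample counts) reflect typical cBioPortal
# study metadata and are an assumption, not confirmed server output.
IMPROVED_DESCRIPTION = (
    "Get detailed metadata for a single cancer study (name, description, "
    "PMID, citation, sample counts). Read-only; makes one cBioPortal API "
    "call. Obtain valid study_id values from get_cancer_studies or "
    "search_studies first. For several studies at once, use "
    "get_multiple_studies; for per-patient clinical records, use "
    "get_clinical_data."
)
```

A description in this shape would answer the reviewer's three recurring questions at once: what the data contains, whether the call mutates anything, and which sibling tool to prefer when.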
