
BioContextAI Knowledgebase MCP

Official

bc_get_efo_id_by_disease_name

Find EFO/Mondo/HP IDs for diseases to use in Open Targets queries by searching the OLS ontology with a disease name.

Instructions

Search OLS for EFO/Mondo/HP IDs related to a disease name. Use this to get EFO IDs for Open Targets queries.

Returns: dict: an efo_ids array of entries with id, label, and description, or a dict containing an error message.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| disease_name | Yes | Disease name to search for (e.g., 'choledocholithiasis') | |
| size | No | Maximum number of results to return | 5 |
| exact_match | No | Whether to perform exact match search | false |
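For reference, this is a hypothetical argument payload an MCP client might send to the tool; only disease_name is required, and the defaults for the optional fields come from the handler source:

```python
# Hypothetical client-side arguments for bc_get_efo_id_by_disease_name.
arguments = {
    "disease_name": "choledocholithiasis",  # required
    "size": 3,             # optional; the handler defaults to 5
    "exact_match": False,  # optional; the handler defaults to False
}
```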

Output Schema

No output schema is published for this tool; the return shape is described only in the tool's docstring.
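Although no machine-readable output schema is published, the handler source suggests two possible return shapes; the values below are illustrative, not real query results:

```python
# Success: a dict with an "efo_ids" list (values here are invented).
success = {
    "efo_ids": [
        {"id": "EFO_0003831", "label": "choledocholithiasis", "description": "..."},
    ]
}

# Failure: a dict with a single "error" key, e.g. when OLS returns no matches.
failure = {"error": "No results found"}
```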

Implementation Reference

  • The handler function for the 'bc_get_efo_id_by_disease_name' tool. It queries the OLS API (EBI) to retrieve EFO, MONDO, or HP ontology IDs matching the given disease name. Includes input schema via Pydantic Annotated fields and handles errors gracefully.
    # Dependencies for this handler:
    from typing import Annotated, Any, Dict

    import requests
    from pydantic import Field

    @core_mcp.tool()
    def get_efo_id_by_disease_name(
        disease_name: Annotated[str, Field(description="Disease name to search for (e.g., 'choledocholithiasis')")],
        size: Annotated[
            int,
            Field(description="Maximum number of results to return"),
        ] = 5,
        exact_match: Annotated[
            bool,
            Field(description="Whether to perform exact match search"),
        ] = False,
    ) -> Dict[str, Any]:
        """Search OLS for EFO/Mondo/HP IDs related to a disease name. Use this to get EFO IDs for Open Targets queries.
    
        Returns:
            dict: EFO IDs with efo_ids array containing id, label, description or error message.
        """
        if not disease_name:
            return {"error": "disease_name must be provided"}
    
        url = "https://www.ebi.ac.uk/ols4/api/v2/entities"
    
        params = {
            "search": disease_name,
            "size": str(size),
            "lang": "en",
            "exactMatch": str(exact_match).lower(),
            "includeObsoleteEntities": "false",
            "ontologyId": "efo",
        }
    
        def starts_with_valid_prefix(curie: str) -> bool:
            """Check if the curie starts with a valid prefix."""
            return any(curie.startswith(prefix) for prefix in ["EFO:", "MONDO:", "HP:"])
    
        try:
        response = requests.get(url, params=params, timeout=30)
            response.raise_for_status()
    
            data = response.json()
    
        # Require at least one element whose curie has an EFO/MONDO/HP prefix
            if not data.get("elements") or not any(
                starts_with_valid_prefix(str(element.get("curie", ""))) for element in data["elements"]
            ):
                return {"error": "No results found"}
    
            # Extract EFO IDs and their labels
            efo_ids = [
                {
                    "id": element["curie"].replace(":", "_"),
                    "label": element["label"],
                    "description": element.get("description", ""),
                }
                for element in data["elements"]
                if starts_with_valid_prefix(str(element.get("curie", "")))
            ]
            return {"efo_ids": efo_ids}
    
        except requests.exceptions.RequestException as e:
            return {"error": f"Failed to fetch EFO IDs: {e!s}"}
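The curie filtering and ID normalisation in the handler can be exercised offline against a mock OLS response; the field names match the handler, but the payload values are invented for illustration:

```python
def starts_with_valid_prefix(curie: str) -> bool:
    """Same prefix check as the handler: keep EFO, MONDO, and HP terms."""
    return any(curie.startswith(prefix) for prefix in ["EFO:", "MONDO:", "HP:"])

# Mock OLS v2 "entities" payload (values invented for this sketch).
mock_data = {
    "elements": [
        {"curie": "EFO:0003831", "label": "choledocholithiasis"},
        {"curie": "CHEBI:15377", "label": "water"},  # dropped: wrong ontology prefix
    ]
}

# Same transform as the handler: colon-to-underscore for Open Targets-style IDs.
efo_ids = [
    {
        "id": element["curie"].replace(":", "_"),
        "label": element["label"],
        "description": element.get("description", ""),
    }
    for element in mock_data["elements"]
    if starts_with_valid_prefix(str(element.get("curie", "")))
]
```

Note that a missing description falls back to an empty string rather than raising a KeyError.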
  • Definition of core_mcp FastMCP instance with 'BC' prefix, used by @core_mcp.tool() decorator to register tools with 'bc_' prefix (e.g., bc_get_efo_id_by_disease_name).
    core_mcp = FastMCP(  # type: ignore
        "BC",
        instructions="Provides access to biomedical knowledge bases.",
    )
  • Imports the core_mcp server (containing the tool) into the main BioContextAI MCP app, making the tool available.
    await mcp_app.import_server(
        mcp,
        slugify(mcp.name),
    )
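The 'bc_' prefix therefore comes from slugifying the server name "BC". A minimal sketch of what such a slugify helper might do (an assumption; the project's actual implementation is not shown here):

```python
import re

def slugify(name: str) -> str:
    # Assumed behaviour: lower-case, collapse non-alphanumerics to underscores.
    return re.sub(r"[^a-z0-9]+", "_", name.lower()).strip("_")

print(slugify("BC"))  # -> "bc", yielding tool names like bc_get_efo_id_by_disease_name
```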
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool searches OLS and returns EFO IDs with an array structure, which adds some behavioral context. However, it doesn't mention rate limits, error handling beyond 'error message,' authentication needs, or whether it's read-only (implied but not stated). The description adds value but lacks comprehensive behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with two sentences: one stating the purpose and usage, and another detailing the return format. It's front-loaded with the main action. There's minimal waste, though the return format explanation could be slightly more concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search with three parameters), 100% schema coverage, and the presence of an output schema (implied by 'Returns: dict'), the description is reasonably complete. It covers the purpose, usage context, and return structure. However, it could benefit from more behavioral details like error cases or performance notes, but the structured data compensates well.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters with descriptions. The description doesn't add any parameter-specific semantics beyond what's in the schema (e.g., it doesn't explain how 'exact_match' affects search results or provide examples for 'size'). Baseline 3 is appropriate as the schema handles parameter documentation adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches OLS for EFO/Mondo/HP IDs related to a disease name, with a specific purpose of obtaining EFO IDs for Open Targets queries. It uses specific verbs ('search', 'get') and identifies the resource (OLS database). However, it doesn't explicitly differentiate from sibling tools like 'bc_search_ontology_terms' or 'bc_get_term_details', which may have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by stating 'Use this to get EFO IDs for Open Targets queries,' which suggests when this tool is appropriate. However, it doesn't provide explicit guidance on when to use this versus alternatives like 'bc_search_ontology_terms' or 'bc_get_term_details' from the sibling list, nor does it mention any exclusions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
