
Noctua MCP Server

Official
by geneontology

get_annotations_for_bioentity

Retrieve Gene Ontology annotations and evidence for a specific biological entity, with options to filter by GO terms, evidence types, or functional aspects.

Instructions

Get all GO annotations (evidence) for a specific bioentity.

Args:
    bioentity_id: The bioentity ID (e.g., "UniProtKB:P12345")
    go_terms: Comma-separated GO terms to filter (includes child terms)
    evidence_types: Comma-separated evidence codes to filter (e.g., "IDA,IPI")
    aspect: GO aspect filter - "C", "F", or "P"
    limit: Maximum number of results (default: 100)

Returns: Dictionary containing:
    - bioentity_id: The queried bioentity
    - annotations: List of annotation results
    - summary: Count by aspect and evidence type

Examples:

# Get all annotations for a protein
get_annotations_for_bioentity("UniProtKB:P53762")

# Get only experimental evidence
get_annotations_for_bioentity(
    "UniProtKB:P53762",
    evidence_types="IDA,IPI,IMP"
)

# Get annotations for specific GO terms
get_annotations_for_bioentity(
    "UniProtKB:P53762",
    go_terms="GO:0005634,GO:0005737"
)

# Get only molecular function annotations
get_annotations_for_bioentity(
    "UniProtKB:P53762",
    aspect="F"
)
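A minimal sketch of consuming the documented return shape. The annotation values below are illustrative placeholders, not real AmiGO results, and the set of experimental evidence codes is an assumption based on standard GO practice:

```python
# Illustrative response matching the documented return structure;
# the annotation values are made up, not real query results.
response = {
    "bioentity_id": "UniProtKB:P53762",
    "annotations": [
        {"go_term": "GO:0005634", "aspect": "C", "evidence_type": "IDA"},
        {"go_term": "GO:0003700", "aspect": "F", "evidence_type": "IMP"},
    ],
    "summary": {
        "total": 2,
        "by_aspect": {"C": 1, "F": 1},
        "by_evidence_type": {"IDA": 1, "IMP": 1},
    },
}

# The tool returns an error dictionary on failure, so check for it first
if "error" in response:
    raise RuntimeError(f"{response['error']}: {response['message']}")

# Keep only annotations backed by experimental evidence codes
experimental_codes = {"IDA", "IPI", "IMP", "IGI", "IEP", "EXP"}
experimental = [a for a in response["annotations"]
                if a["evidence_type"] in experimental_codes]
print(len(experimental), "of", response["summary"]["total"])  # 2 of 2
```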

Input Schema

Name            Required  Default
bioentity_id    Yes
go_terms        No
evidence_types  No
aspect          No
limit           No        100
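The table above implies a JSON Schema roughly like the following. The types, the enum on aspect, and the limit default are inferred from the Args documentation and are assumptions, not the server's actual schema:

```python
import json

# Hedged reconstruction of the input schema implied by the table above.
input_schema = json.loads("""
{
  "type": "object",
  "properties": {
    "bioentity_id": {"type": "string"},
    "go_terms": {"type": "string"},
    "evidence_types": {"type": "string"},
    "aspect": {"type": "string", "enum": ["C", "F", "P"]},
    "limit": {"type": "integer", "default": 100}
  },
  "required": ["bioentity_id"]
}
""")
print(input_schema["required"])  # ['bioentity_id']
```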

Output Schema

No output fields are defined in the schema.

Implementation Reference

  • The primary handler function for the MCP tool 'get_annotations_for_bioentity'. It parses input parameters, queries the AmigoClient for GO annotations associated with a specific bioentity (with optional filters), computes summary statistics, and formats the response.
    from typing import Any, Dict, Optional

    # AmigoClient is assumed to come from the server's own package;
    # the exact import path is not shown in this excerpt.
    async def get_annotations_for_bioentity(
        bioentity_id: str,
        go_terms: Optional[str] = None,
        evidence_types: Optional[str] = None,
        aspect: Optional[str] = None,
        limit: int = 100
    ) -> Dict[str, Any]:
        """
        Get all GO annotations (evidence) for a specific bioentity.
    
        Args:
            bioentity_id: The bioentity ID (e.g., "UniProtKB:P12345")
            go_terms: Comma-separated GO terms to filter (includes child terms)
            evidence_types: Comma-separated evidence codes to filter (e.g., "IDA,IPI")
            aspect: GO aspect filter - "C", "F", or "P"
            limit: Maximum number of results (default: 100)
    
        Returns:
            Dictionary containing:
            - bioentity_id: The queried bioentity
            - annotations: List of annotation results
            - summary: Count by aspect and evidence type
    
        Examples:
            # Get all annotations for a protein
            get_annotations_for_bioentity("UniProtKB:P53762")
    
            # Get only experimental evidence
            get_annotations_for_bioentity(
                "UniProtKB:P53762",
                evidence_types="IDA,IPI,IMP"
            )
    
            # Get annotations for specific GO terms
            get_annotations_for_bioentity(
                "UniProtKB:P53762",
                go_terms="GO:0005634,GO:0005737"
            )
    
            # Get only molecular function annotations
            get_annotations_for_bioentity(
                "UniProtKB:P53762",
                aspect="F"
            )
        """
        # Parse comma-separated lists
        go_terms_list = None
        if go_terms:
            go_terms_list = [t.strip() for t in go_terms.split(",")]
    
        evidence_list = None
        if evidence_types:
            evidence_list = [e.strip() for e in evidence_types.split(",")]
    
        try:
            with AmigoClient() as client:
                results = client.get_annotations_for_bioentity(
                    bioentity_id=bioentity_id,
                    go_terms_closure=go_terms_list,
                    evidence_types=evidence_list,
                    aspect=aspect,
                    limit=limit
                )
    
                # Calculate summary statistics
                aspect_counts: Dict[str, int] = {}
                evidence_counts: Dict[str, int] = {}
                for r in results:
                    aspect_counts[r.aspect] = aspect_counts.get(r.aspect, 0) + 1
                    evidence_counts[r.evidence_type] = evidence_counts.get(r.evidence_type, 0) + 1
    
                return {
                    "bioentity_id": bioentity_id,
                    "annotations": [
                        {
                            "go_term": r.annotation_class,
                            "go_term_label": r.annotation_class_label,
                            "aspect": r.aspect,
                            "evidence_type": r.evidence_type,
                            "evidence": r.evidence,
                            "evidence_label": r.evidence_label,
                            "reference": r.reference,
                            "assigned_by": r.assigned_by,
                            "date": r.date,
                            "qualifier": r.qualifier,
                            "annotation_extension": r.annotation_extension
                        }
                        for r in results
                    ],
                    "summary": {
                        "total": len(results),
                        "by_aspect": aspect_counts,
                        "by_evidence_type": evidence_counts
                    }
                }
    
        except Exception as e:
            return {
                "error": "Failed to get annotations",
                "message": str(e)
            }
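The hand-rolled summary counts in the handler above could equally be written with collections.Counter; a sketch using hypothetical minimal records that carry only the two fields the summary step reads:

```python
from collections import Counter

# Hypothetical minimal records; real results carry the full set of
# annotation fields shown in the handler above.
results = [
    {"aspect": "F", "evidence_type": "IDA"},
    {"aspect": "F", "evidence_type": "IMP"},
    {"aspect": "C", "evidence_type": "IDA"},
]

summary = {
    "total": len(results),
    "by_aspect": dict(Counter(r["aspect"] for r in results)),
    "by_evidence_type": dict(Counter(r["evidence_type"] for r in results)),
}
print(summary["by_aspect"])  # {'F': 2, 'C': 1}
```

Counter folds the increment-or-initialize pattern into a single pass, matching the dict shape the handler returns once converted back with dict().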
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It does well by specifying the return format (dictionary structure) and default values (limit: 100), but doesn't mention potential limitations like rate limits, authentication requirements, or what happens when no results are found. The examples help illustrate usage but don't fully cover behavioral aspects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (Args, Returns, Examples) and front-loads the core purpose. While comprehensive, some sentences in the parameter explanations could be more concise, but overall the structure helps the agent quickly understand the tool's functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 5 parameters, 0% schema description coverage, no annotations, but with output schema present, the description provides complete coverage. It explains all parameters thoroughly, documents the return structure, includes multiple usage examples, and gives enough context for the agent to use the tool effectively despite the lack of structured metadata.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Given 0% schema description coverage, the description compensates excellently by providing detailed parameter documentation in the Args section, including examples, format specifications (e.g., 'C', 'F', or 'P' for aspect), and default values. Each of the 5 parameters is clearly explained with practical examples that go beyond basic type information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get all GO annotations') and resource ('for a specific bioentity'), distinguishing it from sibling tools like search_annotations or search_bioentities which appear to have broader search capabilities. The verb 'Get' combined with the specific resource scope makes the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool (to retrieve annotations for a specific bioentity) and includes examples showing different filtering scenarios. However, it doesn't explicitly state when NOT to use it or directly compare it to alternatives like search_annotations, which might be better for broader searches.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
