
Noctua MCP Server

Official
by geneontology

model_summary

Generate a summary for GO-CAM biological models to analyze structure, count components, and review predicate distributions for research insights.

Instructions

Get a summary of a GO-CAM model including counts and key information.

Args:
    model_id: The GO-CAM model identifier

Returns:
    Summary with individual count, fact count, and predicate distribution

Examples:

```python
# Get summary of a model
result = model_summary("gomodel:5fce9b7300001215")
# Returns:
# {
#   "model_id": "gomodel:5fce9b7300001215",
#   "state": "production",
#   "individual_count": 42,
#   "fact_count": 67,
#   "predicate_distribution": {
#     "RO:0002333": 15,   # enabled_by (note: not in vetted list)
#     "RO:0002411": 8,    # causally upstream of
#     "BFO:0000066": 12,  # occurs_in
#     "BFO:0000050": 5    # part_of
#   }
# }

# Check if a model is empty
result = model_summary("gomodel:new_empty_model")
if result["individual_count"] == 0:
    print("Model is empty")

# Analyze model complexity
result = model_summary("gomodel:12345")
causal_edges = result["predicate_distribution"].get("RO:0002411", 0)
causal_edges += result["predicate_distribution"].get("RO:0002413", 0)  # provides input for
causal_edges += result["predicate_distribution"].get("RO:0002629", 0)  # directly positively regulates
causal_edges += result["predicate_distribution"].get("RO:0002630", 0)  # directly negatively regulates
print(f"Model has {causal_edges} causal relationships")
```
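The examples above only cover successful responses. Since the handler can also return an error dictionary (`success: False` with `error` and `reason` keys, per the implementation below), a caller may want to branch on that shape. A minimal sketch, assuming the return shapes shown in this page; `describe_summary` is a hypothetical helper, not part of the server:

```python
# Hypothetical helper: turn a model_summary result dict into a one-line report.
# Assumes the success/error dict shapes returned by the model_summary handler.

def describe_summary(result: dict) -> str:
    """Summarize a model_summary result, handling the error shape defensively."""
    if not result.get("success", True):
        # Error shape: {"success": False, "error": ..., "reason": ..., "model_id": ...}
        return f"Error for {result['model_id']}: {result['error']} ({result['reason']})"
    dist = result["predicate_distribution"]
    top = max(dist, key=dist.get) if dist else "n/a"
    return (f"{result['model_id']} [{result['state']}]: "
            f"{result['individual_count']} individuals, "
            f"{result['fact_count']} facts, most common predicate {top}")
```

Branching on `success` first means the same helper works for both shapes without try/except.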

Input Schema

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| model_id | Yes | | |

Output Schema

No arguments.

Implementation Reference

  • The handler function for the 'model_summary' tool. It fetches the GO-CAM model using BaristaClient.get_model(), then computes summary statistics: number of individuals, facts, model state, and a distribution of predicates used in facts.
```python
@mcp.tool()
async def model_summary(model_id: str) -> Dict[str, Any]:
    """
    Get a summary of a GO-CAM model including counts and key information.

    Args:
        model_id: The GO-CAM model identifier

    Returns:
        Summary with individual count, fact count, and predicate distribution

    Examples:
        # Get summary of a model
        result = model_summary("gomodel:5fce9b7300001215")
        # Returns:
        # {
        #   "model_id": "gomodel:5fce9b7300001215",
        #   "state": "production",
        #   "individual_count": 42,
        #   "fact_count": 67,
        #   "predicate_distribution": {
        #     "RO:0002333": 15,   # enabled_by (note: not in vetted list)
        #     "RO:0002411": 8,    # causally upstream of
        #     "BFO:0000066": 12,  # occurs_in
        #     "BFO:0000050": 5    # part_of
        #   }
        # }

        # Check if a model is empty
        result = model_summary("gomodel:new_empty_model")
        if result["individual_count"] == 0:
            print("Model is empty")

        # Analyze model complexity
        result = model_summary("gomodel:12345")
        causal_edges = result["predicate_distribution"].get("RO:0002411", 0)
        causal_edges += result["predicate_distribution"].get("RO:0002413", 0)  # provides input for
        causal_edges += result["predicate_distribution"].get("RO:0002629", 0)  # directly positively regulates
        causal_edges += result["predicate_distribution"].get("RO:0002630", 0)  # directly negatively regulates
        print(f"Model has {causal_edges} causal relationships")
    """
    client = get_client()
    resp = client.get_model(model_id)

    if resp.validation_failed:
        return {
            "success": False,
            "error": "Validation failed",
            "reason": resp.validation_reason,
            "model_id": model_id
        }

    if resp.error:
        return {
            "success": False,
            "error": "Failed to retrieve model",
            "reason": resp.error,
            "model_id": model_id
        }

    # Extract summary information
    individuals = resp.individuals
    facts = resp.facts

    # Count predicates
    predicate_counts: Dict[str, int] = {}
    for fact in facts:
        # fact is now a Pydantic Fact object, not a dict
        pred = fact.property if hasattr(fact, 'property') else "unknown"
        predicate_counts[pred] = predicate_counts.get(pred, 0) + 1

    # Get model state if available
    model_state = resp.model_state

    return {
        "success": True,
        "model_id": model_id,
        "state": model_state,
        "individual_count": len(individuals),
        "fact_count": len(facts),
        "predicate_distribution": predicate_counts,
    }
```
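The predicate-count loop in the handler builds the distribution by hand; the same result can be expressed with `collections.Counter`. A sketch using `SimpleNamespace` objects as stand-ins for the Pydantic Fact instances (an assumption; the real `Fact` model is not shown on this page):

```python
from collections import Counter
from types import SimpleNamespace

# Stand-ins for Pydantic Fact objects; assumes a `property` attribute as in the handler.
facts = [
    SimpleNamespace(property="RO:0002411"),
    SimpleNamespace(property="BFO:0000066"),
    SimpleNamespace(property="RO:0002411"),
]

# Equivalent to the handler's manual dict-counting loop.
predicate_counts = Counter(getattr(f, "property", "unknown") for f in facts)
```

`getattr` with a default mirrors the handler's `hasattr` fallback to `"unknown"`.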
  • Helper function to lazily initialize and return the shared BaristaClient instance used by model_summary and other tools.
```python
def get_client() -> BaristaClient:
    """Get or create the Barista client instance."""
    global _client
    if _client is None:
        _client = BaristaClient()
    return _client
```
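The lazy-singleton pattern above can be demonstrated in isolation. A minimal sketch with a stub class standing in for `BaristaClient` (whose constructor and configuration are not shown here):

```python
from typing import Optional

class StubClient:
    """Stand-in for BaristaClient; counts how many times it is constructed."""
    instances = 0

    def __init__(self) -> None:
        StubClient.instances += 1

_client: Optional[StubClient] = None

def get_client() -> StubClient:
    """Get or create the shared client instance (lazy singleton)."""
    global _client
    if _client is None:
        _client = StubClient()
    return _client
```

Repeated calls return the same object, so the (potentially expensive) client is constructed at most once per process.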
  • FastMCP tool registration decorator for the model_summary handler.
```python
@mcp.tool()
```
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes what the tool returns (summary with counts and predicate distribution) and includes examples showing output structure and potential use cases. However, it lacks details on error handling, rate limits, authentication needs, or whether it's a read-only operation (implied by 'Get' but not explicit). The examples add some behavioral context but leave gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with a clear purpose statement, but it's overly long due to extensive examples (over half the text). While examples are helpful, they could be more concise or separated. The structure includes sections (Args, Returns, Examples), which is good, but the verbosity reduces efficiency. Some sentences in examples don't directly aid tool selection.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 1 parameter, no annotations, and an output schema exists (implied by context signals), the description is reasonably complete. It explains the purpose, parameter, return values, and provides usage examples. However, it lacks guidance on tool selection among siblings and some behavioral details (e.g., error cases). The output schema likely covers return structure, reducing the description's burden.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 1 parameter with 0% description coverage, so the description must compensate. It provides a clear definition: 'model_id: The GO-CAM model identifier' and includes examples with specific values (e.g., 'gomodel:5fce9b7300001215'), adding meaningful context beyond the bare schema. This adequately covers the single parameter, though it doesn't explain format constraints beyond the examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
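Since the description documents the `model_id` format only through examples, a caller could add a client-side sanity check. A hypothetical sketch assuming the `gomodel:` prefix pattern seen in the examples; this is not an official format rule for GO-CAM identifiers:

```python
import re

# Assumed pattern based on the page's examples ("gomodel:" + alphanumeric/underscore id).
GOMODEL_RE = re.compile(r"^gomodel:[A-Za-z0-9_]+$")

def looks_like_model_id(value: str) -> bool:
    """Heuristic check that a string resembles the model_id values shown in the examples."""
    return bool(GOMODEL_RE.match(value))
```

A check like this can catch obvious mistakes (e.g. passing a bare numeric id) before a round trip to the server.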

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get a summary of a GO-CAM model including counts and key information.' It specifies the verb ('Get a summary') and resource ('GO-CAM model'), but doesn't explicitly differentiate it from sibling tools like 'get_model' or 'get_model_variables', which might also retrieve model information. The purpose is specific but lacks sibling comparison.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It includes examples of usage scenarios (e.g., checking if a model is empty, analyzing complexity), but these are post-invocation applications, not guidelines for selection. There's no mention of when to choose this over siblings like 'get_model' or 'get_model_variables', leaving the agent without selection criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
