codebrain_explain

Explain code snippets by answering specific questions about them, using a local model to avoid token consumption and preserve Claude's context for complex reasoning.

Instructions

Ask the local model to explain a snippet of code (read-only, no generation).

Useful for getting quick, token-free explanations without consuming Claude's context budget on understanding-only tasks.

Args:
    code: The code snippet to explain.
    question: The specific question to answer about the code.

Input Schema

Name      Required  Description  Default
code      Yes       -            -
question  No        -            "What does this do?"

Output Schema

Name    Required  Description  Default
result  Yes       -            -
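
A call to this tool passes arguments matching the input schema above; for illustration, with a hypothetical snippet and question:

    # Hypothetical arguments an MCP client could send for this tool,
    # expressed as a Python dict matching the input schema:
    arguments = {
        "code": "def add(a, b):\n    return a + b",
        "question": "What is the return type?",
    }
    # On success the tool returns a single string, surfaced as the
    # required "result" field of the output schema.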

Implementation Reference

  • The tool handler for 'codebrain_explain'. It is decorated with @mcp.tool(), takes a 'code' snippet and an optional 'question', sends them to the Ollama backend via the chat() function, and returns the explanation.
    @mcp.tool()
    async def codebrain_explain(code: str, question: str = "What does this do?") -> str:
        """Ask the local model to explain a snippet of code (read-only, no generation).
    
        Useful for getting quick, token-free explanations without consuming
        Claude's context budget on understanding-only tasks.
    
        Args:
            code: The code snippet to explain.
            question: The specific question to answer about the code.
        """
        system = (
            "You explain code clearly and briefly. No fluff, no disclaimers. "
            "Answer the question directly."
        )
        prompt = f"{question}\n\n```\n{code}\n```"
        try:
            return await chat(prompt, system=system)
        except BackendError as exc:
            return f"[codebrain error] {exc}"
  • The @mcp.tool() decorator registers 'codebrain_explain' as an MCP tool on the FastMCP server instance.
    @mcp.tool()
  • The function signature defines the input schema: 'code' (required string) and 'question' (optional string with default). The return type is str.
    async def codebrain_explain(code: str, question: str = "What does this do?") -> str:
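  • For illustration, the JSON input schema FastMCP derives from that signature would look roughly like this (string types from the annotations, the default from the keyword value; the exact output may vary by SDK version):
    {
        "type": "object",
        "properties": {
            "code": {"type": "string"},
            "question": {"type": "string", "default": "What does this do?"},
        },
        "required": ["code"],
    }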
  • Module-level context: imports 'chat' and 'BackendError' from the backend helper module (used by codebrain_explain to call Ollama), creates the FastMCP server instance, and defines the .brain context helpers alongside the sibling codebrain_generate tool.
    # Supporting imports shown for completeness:
    from pathlib import Path
    from mcp.server.fastmcp import FastMCP

    from .backend import BackendError, chat, list_models
    
    mcp = FastMCP("codebrain")
    
    BRAIN_CONTEXT_PATH = Path(".brain") / "context.md"
    
    
    def _load_brain_context() -> str:
        """Read `.brain/context.md` from cwd if present, else empty string."""
        try:
            return BRAIN_CONTEXT_PATH.read_text(encoding="utf-8").strip()
        except OSError:  # FileNotFoundError is an OSError subclass
            return ""
    
    
    def _compose_system(system: str, use_brain: bool) -> str:
        """Prepend project .brain context to the user-provided system prompt."""
        if not use_brain:
            return system
        brain = _load_brain_context()
        if not brain:
            return system
        header = "Project context (from .brain/context.md):\n" + brain
        return f"{header}\n\n{system}" if system else header
    
    
    @mcp.tool()
    async def codebrain_generate(prompt: str, system: str = "", use_brain: bool = True) -> str:
        """Delegate a generation task to the local Qwen-Coder model via Ollama.
    
        Use this for bulk or routine work where a 14B local model is good enough:
        generating event templates, headlines, company descriptions, UI polish
        drafts, boilerplate, or repetitive transformations. The response is
        returned as raw text — review before applying.
    
        Args:
            prompt: The task description or content request.
            system: Optional system message to steer tone / format / constraints.
            use_brain: If true, prepend `.brain/context.md` from cwd to the system prompt.
        """
        try:
            return await chat(prompt, system=_compose_system(system, use_brain))
        except BackendError as exc:
            return f"[codebrain error] {exc}"
  • The 'chat' helper function that sends the prompt to Ollama's API and returns the model's response. Used by codebrain_explain to get the explanation.
    import httpx

    # Note: DEFAULT_MODEL, OLLAMA_URL, REQUEST_TIMEOUT, and BackendError are
    # module-level definitions in this backend, not shown in this excerpt.
    async def chat(
        prompt: str,
        system: str = "",
        model: str | None = None,
        temperature: float = 0.2,
    ) -> str:
        """Send a single-turn chat request to Ollama and return the assistant message."""
        model = model or DEFAULT_MODEL
        messages: list[dict[str, str]] = []
        if system:
            messages.append({"role": "system", "content": system})
        messages.append({"role": "user", "content": prompt})
    
        payload = {
            "model": model,
            "messages": messages,
            "stream": False,
            "options": {"temperature": temperature},
        }
    
        try:
            async with httpx.AsyncClient(timeout=REQUEST_TIMEOUT) as client:
                response = await client.post(f"{OLLAMA_URL}/api/chat", json=payload)
                response.raise_for_status()
                data = response.json()
        except httpx.ConnectError as exc:
            raise BackendError(
                f"Cannot reach Ollama at {OLLAMA_URL} — is `ollama serve` running?"
            ) from exc
        except httpx.HTTPStatusError as exc:
            raise BackendError(
                f"Ollama returned {exc.response.status_code}: {exc.response.text}"
            ) from exc
    
        try:
            return data["message"]["content"]
        except (KeyError, TypeError) as exc:
            raise BackendError(f"Unexpected Ollama response shape: {data!r}") from exc
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses 'read-only, no generation' and mentions 'local model', but gives no detail on failure modes, required permissions, or other behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is compact: two sentences plus an Args block. Every line adds value with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 params, output schema exists), the description covers purpose and usage adequately. Parameter details are minimal but sufficient for basic use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so the description must compensate. It provides basic descriptions for 'code' and 'question', but no formats, constraints, or examples, so it adds only minimal value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'explain a snippet of code' and distinguishes itself from siblings with 'read-only, no generation', directly contrasting with generation tools like codebrain_generate.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It clearly indicates when to use: 'for getting quick, token-free explanations without consuming Claude's context budget on understanding-only tasks.' It implies alternatives by stating 'no generation', but does not explicitly name siblings or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

