
Server Configuration

Describes the environment variables used to configure the server (all are optional).

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| CODEBRAIN_MODEL | No | Switch to any model you've pulled | qwen2.5-coder:14b |
| CODEBRAIN_TIMEOUT | No | Seconds to wait for a single generation | 300 |
| CODEBRAIN_OLLAMA_URL | No | Point at a remote Ollama (e.g., an inference box on your LAN) | http://localhost:11434 |
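
As a rough illustration, assuming the server reads these with Python's os.environ (the defaults below mirror the table; the reading logic itself is a sketch, not the server's actual source):

```python
import os

# Resolve configuration from the environment, falling back to the
# documented defaults when a variable is unset.
MODEL = os.environ.get("CODEBRAIN_MODEL", "qwen2.5-coder:14b")
TIMEOUT = int(os.environ.get("CODEBRAIN_TIMEOUT", "300"))
OLLAMA_URL = os.environ.get("CODEBRAIN_OLLAMA_URL", "http://localhost:11434")
```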

Capabilities

Features and capabilities supported by this server

| Capability | Details |
| --- | --- |
| tools | { "listChanged": false } |
| prompts | { "listChanged": false } |
| resources | { "subscribe": false, "listChanged": false } |
| experimental | {} |

Tools

Functions exposed to the LLM to take actions

codebrain_generate

Delegate a generation task to the local Qwen-Coder model via Ollama.

Use this for bulk or routine work where a 14B local model is good enough: generating event templates, headlines, company descriptions, UI polish drafts, boilerplate, or repetitive transformations. The response is returned as raw text — review before applying.

Args:

  • prompt: The task description or content request.

  • system: Optional system message to steer tone / format / constraints.

  • use_brain: If true, prepend .brain/context.md from cwd to the system prompt.
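
A minimal sketch of what the delegation likely looks like against Ollama's real /api/generate endpoint (the helper name and the wiring to the CODEBRAIN_* variables are assumptions about this server):

```python
import os
import requests

def generate(prompt: str, system: str = "") -> str:
    """One non-streaming generation against the local Ollama server."""
    resp = requests.post(
        os.environ.get("CODEBRAIN_OLLAMA_URL", "http://localhost:11434")
        + "/api/generate",
        json={
            "model": os.environ.get("CODEBRAIN_MODEL", "qwen2.5-coder:14b"),
            "prompt": prompt,
            "system": system,
            "stream": False,  # single JSON response instead of a token stream
        },
        timeout=int(os.environ.get("CODEBRAIN_TIMEOUT", "300")),
    )
    resp.raise_for_status()
    return resp.json()["response"]
```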

codebrain_explain

Ask the local model to explain a snippet of code (read-only, no generation).

Useful for getting quick explanations without spending Claude's context budget on understanding-only tasks.

Args:

  • code: The code snippet to explain.

  • question: The specific question to answer about the code.
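
Building on the generate() sketch above, an explain call is plausibly just prompt composition; the exact wording here is invented:

```python
def explain(code: str, question: str) -> str:
    # Wrap the snippet and the question into one read-only prompt.
    prompt = (
        "Explain the following code. Do not rewrite or extend it.\n\n"
        f"{code}\n\nQuestion: {question}"
    )
    return generate(prompt, system="You are a precise code explainer.")
```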

codebrain_batch_generate

Run several generation prompts in sequence and return all results.

One shared system prompt applies to every item. Prompts are processed serially (Ollama serialises on a single GPU anyway). A failure on one prompt is captured inline as [codebrain error] ... at that index, so the whole batch never aborts.

Returns a single string with per-item delimiters:

--- [0] ---
<result for prompts[0]>

--- [1] ---
<result for prompts[1]>

Args:

  • prompts: List of prompts to run with the same system message.

  • system: Optional shared system message.

  • use_brain: If true, prepend .brain/context.md from cwd to the system prompt.
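
The serial loop and inline error capture described above could look like this sketch (reusing the generate() helper from the codebrain_generate sketch):

```python
def batch_generate(prompts: list[str], system: str = "") -> str:
    parts = []
    for i, p in enumerate(prompts):
        try:
            result = generate(p, system)
        except Exception as exc:
            # Captured inline at this index; the batch keeps going.
            result = f"[codebrain error] {exc}"
        parts.append(f"--- [{i}] ---\n{result}")
    return "\n\n".join(parts)
```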

codebrain_polish

Apply a targeted transform to existing text — do not regenerate from scratch.

Use this when you have a draft and want it tightened, shortened, rephrased, made more formal, translated, or similar. The system prompt forces the model into transform-mode: it must preserve meaning and structure and only apply the requested change.

Args:

  • text: The existing text to polish.

  • instructions: What transformation to apply (e.g. "shorten to 2 lines", "make tone more formal", "translate to German").

  • use_brain: If true, prepend .brain/context.md from cwd to the system prompt.
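
A sketch of the transform-mode wiring; the system prompt text below is an assumption, not the server's actual prompt:

```python
POLISH_SYSTEM = (
    "You are a text editor. Apply ONLY the requested transformation. "
    "Preserve the meaning and structure of the input. "
    "Return the transformed text and nothing else."
)

def polish(text: str, instructions: str) -> str:
    # Reuses the generate() helper from the codebrain_generate sketch.
    return generate(
        f"Transformation: {instructions}\n\nText:\n{text}",
        system=POLISH_SYSTEM,
    )
```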

codebrain_scan_file

Generate or refresh the <path>.brain summary file for a source file.

Reads the source at path, computes its SHA256, and compares to the existing .brain file's source_hash frontmatter. If they match and force is false, generation is skipped. Otherwise Qwen produces a new brain file (Purpose / Key exports / Collaborators / Gotchas / Conventions), the output is validated against the format spec, and on validation failure one retry with a sharper instruction is attempted before giving up. No partial or broken brain files are ever written.

Format spec: .spec/brain-file-format.md.

Args:

  • path: Path to the source file to summarise.

  • force: If true, regenerate even when the hash matches.
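
The hash gate could be implemented roughly like this sketch; the source_hash key comes from the description above, but the exact frontmatter layout assumed here is illustrative:

```python
import hashlib
import re
from pathlib import Path

def needs_rescan(path: str, force: bool = False) -> bool:
    # SHA256 of the current source, compared to the hash recorded
    # in the <path>.brain file's frontmatter.
    current = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    brain = Path(path + ".brain")
    if force or not brain.exists():
        return True
    match = re.search(r"^source_hash:\s*([0-9a-f]+)",
                      brain.read_text(), re.MULTILINE)
    return match is None or match.group(1) != current
```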

codebrain_consensus_generate

Generate N candidates, let Qwen pick the best, return the winner.

Runs prompt N times (serial — Ollama serialises on single GPU anyway), then does one additional call where Qwen is shown all candidates and asked to return the best one verbatim. Useful for high-variance tasks where a single shot drifts but majority-vote style sampling tightens quality at the cost of N+1 inference calls.

Args:

  • prompt: The task description or content request.

  • system: Optional system message to steer tone / format / constraints.

  • n: Number of candidates to generate (default 3, clamped to [2, 5]).

  • use_brain: If true, prepend .brain/context.md to the system prompt.
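
A sketch of the N+1 call pattern (clamping, serial sampling, one picker call); the picker prompt wording is an assumption, and generate() is the helper from the earlier sketch:

```python
def consensus_generate(prompt: str, system: str = "", n: int = 3) -> str:
    n = max(2, min(n, 5))  # clamp to [2, 5] as documented
    candidates = [generate(prompt, system) for _ in range(n)]  # serial
    ballot = "\n\n".join(f"[{i}]\n{c}" for i, c in enumerate(candidates))
    picker = (
        f"Task: {prompt}\n\nCandidates:\n{ballot}\n\n"
        "Return the single best candidate verbatim, with no commentary."
    )
    return generate(picker, system)  # the one extra inference call
```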

codebrain_generate_verified

Generate with verifier loop — enforces word limits and regex schemas.

Runs codebrain_generate, then checks the output against the requested constraints. On failure, retries with a tightened instruction that names the specific problem. Gives up after max_retries attempts and returns the last output with a [codebrain warning] ... prefix.

Args:

  • prompt: The task description or content request.

  • system: Optional system message to steer tone / format / constraints.

  • min_words: Minimum output word count (None = unbounded).

  • max_words: Maximum output word count (None = unbounded).

  • must_match: Regex pattern the output must match (re.search semantics).

  • max_retries: Max retry attempts on verification failure (default 2).

  • use_brain: If true, prepend .brain/context.md to the system prompt.
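
A sketch of the verifier loop under those semantics; whitespace-split word counting and the retry wording are assumptions:

```python
import re

def generate_verified(prompt: str, system: str = "",
                      min_words: int | None = None,
                      max_words: int | None = None,
                      must_match: str | None = None,
                      max_retries: int = 2) -> str:
    out = generate(prompt, system)  # helper from the earlier sketch
    for attempt in range(max_retries + 1):
        problems = []
        n = len(out.split())
        if min_words is not None and n < min_words:
            problems.append(f"output has {n} words, need at least {min_words}")
        if max_words is not None and n > max_words:
            problems.append(f"output has {n} words, need at most {max_words}")
        if must_match is not None and not re.search(must_match, out):
            problems.append(f"output must match the pattern {must_match!r}")
        if not problems:
            return out
        if attempt == max_retries:
            break  # out of retries; fall through with a warning
        out = generate(
            f"{prompt}\n\nYour previous answer failed verification: "
            f"{'; '.join(problems)}. Fix exactly that and answer again.",
            system,
        )
    return "[codebrain warning] verification failed after retries.\n" + out
```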

codebrain_init

Seed .brain/context.md for a repo — one-time setup before scanning.

Detects the stack (python / js / ts / rust / go / java) from marker files, counts source-file extensions, asks Qwen for a short overview, and writes .brain/context.md with a pre-populated template. The user is expected to edit the ## Notes for Claude section afterwards. Idempotent: existing context.md is not overwritten unless force=True.

Args:

  • root: Directory to initialise.

  • force: If true, overwrite an existing .brain/context.md.
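
Marker-file stack detection might look like this sketch; the specific marker-to-stack mapping below is assumed, not taken from the server:

```python
from pathlib import Path

STACK_MARKERS = {
    "pyproject.toml": "python", "requirements.txt": "python",
    "package.json": "js", "tsconfig.json": "ts",
    "Cargo.toml": "rust", "go.mod": "go", "pom.xml": "java",
}

def detect_stack(root: str) -> str | None:
    # First marker file found at the repo root wins.
    for marker, stack in STACK_MARKERS.items():
        if (Path(root) / marker).exists():
            return stack
    return None
```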

codebrain_scan_repo

Scan every source file under root and generate/refresh its .brain file.

Walks the directory tree, filters by file extension, prunes excluded directories, and runs codebrain_scan_file on each match. Hash-gated: unchanged files skip the model call. Per-file failures do not abort the batch — they are reported at the end.

Defaults:

  • extensions: .py .js .ts .tsx .jsx .java .go .rs

  • exclude_dirs: .git .venv venv node_modules __pycache__ dist build target

Args:

  • root: Directory to scan recursively.

  • force: If true, regenerate every brain file even when source hash matches.

  • extensions: Override default source extensions (e.g. [".py", ".rb"]).

  • exclude_dirs: Override default directory-name exclusion list.
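
A sketch of the walk with those defaults; pruning dirnames in place so os.walk never descends into excluded directories is the standard idiom:

```python
import os

EXTENSIONS = {".py", ".js", ".ts", ".tsx", ".jsx", ".java", ".go", ".rs"}
EXCLUDE_DIRS = {".git", ".venv", "venv", "node_modules", "__pycache__",
                "dist", "build", "target"}

def iter_source_files(root: str):
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune in place: os.walk respects mutations of dirnames.
        dirnames[:] = [d for d in dirnames if d not in EXCLUDE_DIRS]
        for name in filenames:
            if os.path.splitext(name)[1] in EXTENSIONS:
                yield os.path.join(dirpath, name)
```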

codebrain_status

Report which Ollama models are available locally.

Call this to verify the local backend is reachable and discover which models the user has pulled.
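
Ollama lists pulled models at its real GET /api/tags endpoint, so a status check could be as simple as this sketch (the function name and timeout are assumptions):

```python
import os
import requests

def list_local_models() -> list[str]:
    base = os.environ.get("CODEBRAIN_OLLAMA_URL", "http://localhost:11434")
    resp = requests.get(f"{base}/api/tags", timeout=10)
    resp.raise_for_status()
    # Each entry carries a "name" like "qwen2.5-coder:14b".
    return [m["name"] for m in resp.json().get("models", [])]
```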

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources
