# CodeBrain

## Server Configuration

Environment variables that configure the server (all optional, with sensible defaults).
| Name | Required | Description | Default |
|---|---|---|---|
| `CODEBRAIN_MODEL` | No | Switch to any model you've pulled | `qwen2.5-coder:14b` |
| `CODEBRAIN_TIMEOUT` | No | Seconds to wait for a single generation | `300` |
| `CODEBRAIN_OLLAMA_URL` | No | Point at a remote Ollama (e.g. an inference box on your LAN) | `http://localhost:11434` |
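To illustrate how these variables and their defaults interact, here is a minimal sketch of how a server might resolve them. This is an assumption for illustration, not CodeBrain's actual source; `load_config` is a hypothetical helper.

```python
import os

def load_config(env=os.environ):
    """Resolve CodeBrain settings, falling back to the documented defaults."""
    return {
        "model": env.get("CODEBRAIN_MODEL", "qwen2.5-coder:14b"),
        "timeout": int(env.get("CODEBRAIN_TIMEOUT", "300")),  # seconds per generation
        "ollama_url": env.get("CODEBRAIN_OLLAMA_URL", "http://localhost:11434"),
    }
```

With no variables set, this yields the defaults from the table above; exporting, say, `CODEBRAIN_OLLAMA_URL=http://192.168.1.20:11434` before launch would redirect all generations to a LAN inference box.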
## Capabilities

Features and capabilities supported by this server.

| Capability | Details |
|---|---|
| tools | `{"listChanged": false}` |
| prompts | `{"listChanged": false}` |
| resources | `{"subscribe": false, "listChanged": false}` |
| experimental | `{}` |
## Tools

Functions exposed to the LLM to take actions.

| Name | Description |
|---|---|
| `codebrain_generate` | Delegate a generation task to the local Qwen-Coder model via Ollama. Use this for bulk or routine work where a 14B local model is good enough: generating event templates, headlines, company descriptions, UI polish drafts, boilerplate, or repetitive transformations. The response is returned as raw text; review it before applying.<br>**Args:**<br>`prompt`: the task description or content request.<br>`system`: optional system message to steer tone / format / constraints.<br>`use_brain`: if true, prepend … |
| `codebrain_explain` | Ask the local model to explain a snippet of code (read-only, no generation). Useful for getting quick, token-free explanations without consuming Claude's context budget on understanding-only tasks.<br>**Args:**<br>`code`: the code snippet to explain.<br>`question`: the specific question to answer about the code. |
| `codebrain_batch_generate` | Run several generation prompts in sequence and return all results. One shared system prompt applies to every item. Prompts are processed serially (Ollama serialises on a single GPU anyway); a failure on one prompt is captured inline as … Returns a single string with per-item delimiters.<br>**Args:**<br>`prompts`: list of prompts to run with the same system message.<br>`system`: optional shared system message.<br>`use_brain`: if true, prepend … |
| `codebrain_polish` | Apply a targeted transform to existing text; do not regenerate from scratch. Use this when you have a draft and want it tightened, shortened, rephrased, made more formal, translated, or similar. The system prompt forces the model into transform mode: it must preserve meaning and structure and only apply the requested change.<br>**Args:**<br>`text`: the existing text to polish.<br>`instructions`: what transformation to apply (e.g. "shorten to 2 lines", "make tone more formal", "translate to German").<br>`use_brain`: if true, prepend … |
| `codebrain_scan_file` | Generate or refresh the … Reads the source at … Format spec: …<br>**Args:**<br>`path`: path to the source file to summarise.<br>`force`: if true, regenerate even when the hash matches. |
| `codebrain_consensus_generate` | Generate N candidates, let Qwen pick the best, and return the winner. Runs …<br>**Args:**<br>`prompt`: the task description or content request.<br>`system`: optional system message to steer tone / format / constraints.<br>`n`: number of candidates to generate (default 3, clamped to [2, 5]).<br>`use_brain`: if true, prepend … |
| `codebrain_generate_verified` | Generate with a verifier loop; enforces word limits and regex schemas. Runs …<br>**Args:**<br>`prompt`: the task description or content request.<br>`system`: optional system message to steer tone / format / constraints.<br>`min_words`: minimum output word count (None = unbounded).<br>`max_words`: maximum output word count (None = unbounded).<br>`must_match`: regex pattern the output must match (… |
| `codebrain_init` | Seed … Detects the stack (python / js / ts / rust / go / java) from marker files, counts source-file extensions, asks Qwen for a short overview, and writes …<br>**Args:**<br>`root`: directory to initialise.<br>`force`: if true, overwrite an existing … |
| `codebrain_scan_repo` | Scan every source file under … Walks the directory tree, filters by file extension, prunes excluded directories, and runs … Defaults: …<br>**Args:**<br>`root`: directory to scan recursively.<br>`force`: if true, regenerate every brain file even when the source hash matches.<br>`extensions`: override default source extensions (e.g. `[".py", ".rb"]`).<br>`exclude_dirs`: override default directory-name exclusion list. |
| `codebrain_status` | Report which Ollama models are available locally. Call this to verify the local backend is reachable and discover which models the user has pulled. |
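Since the server's configuration points `codebrain_generate` at an Ollama endpoint, the delegation step presumably amounts to a POST against Ollama's public `/api/generate` endpoint. The sketch below follows Ollama's documented request shape (`model`, `prompt`, optional `system`, `stream`), not CodeBrain's actual implementation; the function names and the module-level `OLLAMA_URL` constant are assumptions for illustration.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # i.e. CODEBRAIN_OLLAMA_URL

def build_payload(prompt, system=None, model="qwen2.5-coder:14b"):
    """Assemble a non-streaming request body for Ollama's /api/generate."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    if system:
        payload["system"] = system
    return payload

def generate(prompt, system=None, model="qwen2.5-coder:14b", timeout=300):
    """POST the payload to Ollama and return the generated text."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps(build_payload(prompt, system, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        # Non-streaming responses carry the full completion in "response".
        return json.loads(resp.read())["response"]
```

The `timeout` parameter mirrors `CODEBRAIN_TIMEOUT`: with a single local GPU, a long generation blocks the queue, so a per-call deadline keeps a stuck request from stalling batch work.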
## Prompts

Interactive templates invoked by user choice.

| Name | Description |
|---|---|
| No prompts | |
## Resources

Contextual data attached and managed by the client.

| Name | Description |
|---|---|
| No resources | |
## MCP directory API

We provide all the information about MCP servers via our MCP API.

```
curl -X GET 'https://glama.ai/api/mcp/v1/servers/Tschonsen/CodeBrain'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.