Hermes Brain
Multi-agent orchestration for Hermes Agent
Spawn parallel Hermes agents. Give them a shared brain. Ship in one command. Backed by SQLite, coordinated by Python, zero tokens spent on coordination.
Install · Quick Start · How It Works · CLI · Tools · Memory Bank · Development
Install
Option 1 — bootstrap from source (recommended for Hermes)
curl -fsSL https://raw.githubusercontent.com/DevvGwardo/brain-mcp/main/install.sh | bash

The installer:

- Builds the Node.js MCP server (brain-mcp)
- Installs the Python orchestration package (hermes-brain)
- Registers the brain as an MCP server in Hermes
Option 2 — manual install
git clone https://github.com/DevvGwardo/brain-mcp.git
cd brain-mcp
npm install
npm run build
pip install -e .
hermes mcp add brain --command node --args "$PWD/dist/index.js"

Note: the npm package is not published yet, so the repository install path is the supported path for now.
Verify:
hermes mcp list | grep brain
hermes mcp test brain
hermes-brain --help

Prerequisites: Python 3.10+, Node.js 18+, Hermes Agent
Sharing with friends? Each person's brain is its own isolated SQLite DB — no network config needed. Same one-liner works anywhere.
Docker users: Spawn agents with layout: "headless" since tmux panes can't render in a headless container:
brain_wake({ task: "...", layout: "headless" })

Quick Start
One command to orchestrate a fleet of Hermes agents:
hermes-brain "Build a REST API with auth, users, and posts" \
--agents api-routes auth-layer db-models tests

What happens:

- Python conductor spawns 4 background Hermes agents (hermes -q)
- Each agent claims its files, publishes contracts, writes code, pulses heartbeats
- Conductor runs an integration gate — compiles the project, routes errors back to responsible agents via DM
- Agents self-correct; the gate retries until clean
- Summary printed: agents, contracts, memories, metrics, done
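The spawn-wait-gate-retry loop above can be sketched in a few lines of Python. This is an assumed shape, not the real hermes-brain internals; the `spawn`, `run_gate`, and `route_errors` callables here are hypothetical stand-ins:

```python
def run_fleet(spawn, run_gate, route_errors, max_retries=3):
    """Minimal conductor-loop sketch: spawn the fleet, then run the
    integration gate in a retry loop, routing errors back to agents
    until the gate comes up clean."""
    agents = spawn()                      # start the fleet (hermes -q per agent)
    for attempt in range(1, max_retries + 1):
        errors = run_gate()               # e.g. collected tsc / mypy diagnostics
        if not errors:
            return {"clean": True, "attempts": attempt, "agents": agents}
        route_errors(errors)              # DM each error to the owning agent
    return {"clean": False, "attempts": max_retries, "agents": agents}

# Toy run: the gate fails once, then passes after errors are routed.
pending = ["src/api.ts:3: type error"]
result = run_fleet(
    spawn=lambda: ["api-routes", "auth-layer"],
    run_gate=lambda: list(pending),
    route_errors=lambda errs: pending.clear(),
)
```

The real conductor does the same thing with background processes and SQLite instead of lambdas.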
More ways to run it:
# Auto-named agents
hermes-brain "Add error handling to the whole codebase"
# Mix models per task
hermes-brain "Build a game" --agents engine ui store --model claude-sonnet-4-5
# Cheap model for boilerplate
hermes-brain "Generate 10 test files" --model claude-haiku-4-5
# JSON pipeline with multiple phases
hermes-brain --config pipeline.json

Or from inside Hermes (interactive):

hermes> Use register, then wake to spawn 3 agents
        that each refactor a different module.

How It Works
graph TB
subgraph "Python Conductor"
CLI["hermes-brain CLI"]
ORCH["Orchestrator<br/><small>spawn · wait · gate · retry</small>"]
end
subgraph "Hermes Agents"
direction LR
H1["Agent 1<br/><small>hermes -q</small>"]
H2["Agent 2<br/><small>hermes -q</small>"]
H3["Agent 3<br/><small>hermes -q</small>"]
end
CLI --> ORCH
ORCH -->|spawn| H1
ORCH -->|spawn| H2
ORCH -->|spawn| H3
subgraph "Brain (shared SQLite)"
DB[("brain.db")]
PULSE["Heartbeats"]
MX["Mutex Locks"]
KV["Shared State"]
CON["Contracts"]
MEM["Memory"]
PLAN["Task DAG"]
end
ORCH <--> DB
H1 <--> DB
H2 <--> DB
H3 <--> DB
subgraph "Integration Gate"
GATE["tsc · mypy · cargo · go vet"]
ROUTE["DM errors → agents"]
end
ORCH --> GATE
GATE --> ROUTE
ROUTE -.->|DM| H1
ROUTE -.->|DM| H2
style CLI fill:#9333EA,stroke:#7C3AED,color:#fff
style ORCH fill:#9333EA,stroke:#7C3AED,color:#fff
style H1 fill:#3B82F6,stroke:#2563EB,color:#fff
style H2 fill:#10B981,stroke:#059669,color:#fff
style H3 fill:#F59E0B,stroke:#D97706,color:#000
style DB fill:#1E293B,stroke:#334155,color:#fff
style GATE fill:#EF4444,stroke:#DC2626,color:#fff

Architecture
This diagram shows the internal architecture of brain-mcp and how its components interact:
graph TB
subgraph "External Clients"
HERMES["Hermes CLI"]
CLAUDE["Claude Code"]
ANY["Any MCP Client"]
end
subgraph "brain-mcp (Node.js)"
SERVER["src/index.ts<br/>MCP Request Router"]
CONDUCTOR["brain-conductor<br/>Zero-token Orchestration CLI"]
GATE["src/gate.ts<br/>Integration Gate"]
end
subgraph "pi-agent-core Runtime"
PI_CORE["src/pi-core-agent.ts<br/>In-process Agent Runner"]
PI_CORE_TOOLS["src/pi-core-tools.ts<br/>14 Brain Tools as AgentTools"]
PI_AGENT["pi-agent-core Agent<br/>model + tools + events"]
end
subgraph "BrainDB (SQLite)"
DB[("brain.db<br/>sessions, state, messages,<br/>claims, contracts, memory")]
end
HERMES & CLAUDE & ANY --> SERVER
SERVER <--> DB
SERVER --> CONDUCTOR
CONDUCTOR --> PI_CORE
PI_CORE --> PI_CORE_TOOLS
PI_CORE --> PI_AGENT
PI_CORE_TOOLS --> DB
PI_AGENT -->|beforeToolCall<br/>pulse| DB
CONDUCTOR --> GATE
GATE -->|DM errors| CONDUCTOR
style HERMES fill:#FF6B6B,stroke:#DC2626,color:#fff
style CLAUDE fill:#3B82F6,stroke:#2563EB,color:#fff
style ANY fill:#7C3AED,stroke:#6D28D9,color:#fff
style SERVER fill:#1E293B,stroke:#334155,color:#fff
style CONDUCTOR fill:#9333EA,stroke:#7C3AED,color:#fff
style GATE fill:#EF4444,stroke:#DC2626,color:#fff
style PI_CORE fill:#10B981,stroke:#059669,color:#fff
style PI_CORE_TOOLS fill:#059669,stroke:#047857,color:#fff
style PI_AGENT fill:#06B6D4,stroke:#0891B2,color:#fff
style DB fill:#1E293B,stroke:#334155,color:#fff

pi-agent-core is the LLM agent runtime — it handles the model interaction loop, tool execution, and event subscription. brain-mcp provides the coordination layer (state, messaging, heartbeats, locks, contracts) as tools that pi agents call. The conductor ties it all together with phases, gates, and tmux layout.
Zero-token coordination. The conductor is pure Python — LLM tokens are only spent on the actual work. Heartbeats, claims, contracts, gates, retries all run locally.
No server to manage. Each agent opens its own stdio connection to the brain. SQLite WAL mode handles concurrent access safely.
Same brain, any CLI. Hermes, Claude Code, MiniMax — all clients hit the same SQLite DB. A mixed fleet of Hermes + Claude agents can coordinate on the same task.
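The "SQLite WAL mode handles concurrent access safely" claim boils down to two pragmas plus a busy timeout. A minimal Python illustration (the real server sets these up in TypeScript; this only demonstrates the SQLite settings themselves):

```python
import os
import sqlite3
import tempfile

def open_brain(path):
    """Open a shared brain DB the way concurrent agents need it:
    WAL mode lets a writer proceed alongside readers, and the busy
    timeout makes competing writers wait ~5s instead of erroring."""
    conn = sqlite3.connect(path, timeout=5.0)
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("PRAGMA busy_timeout=5000")
    return conn

db_path = os.path.join(tempfile.mkdtemp(), "brain.db")
a = open_brain(db_path)                  # "agent A"
b = open_brain(db_path)                  # "agent B", same file, own connection
a.execute("CREATE TABLE IF NOT EXISTS state (key TEXT PRIMARY KEY, value TEXT)")
a.execute("INSERT INTO state VALUES ('task', 'refactor auth')")
a.commit()
value = b.execute("SELECT value FROM state WHERE key='task'").fetchone()[0]
```

Each process opens its own connection; committed writes from one are immediately visible to the others.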
The hermes-brain CLI
hermes-brain <task> [options]

| Flag | Default | What it does |
| --- | --- | --- |
| --agents | auto-named | Agent names to spawn in parallel |
| --model | | Model passed to each agent |
| | off | Skip integration gate |
| --retries | | Max gate retry attempts |
| | | Per-agent timeout |
| --config | | Load a multi-phase pipeline |
| | | Custom brain DB |
Pipeline config file
{
"task": "Build a todo app",
"model": "claude-sonnet-4-5",
"gate": true,
"max_gate_retries": 3,
"phases": [
{
"name": "foundation",
"parallel": true,
"agents": [
{ "name": "types", "files": ["src/types/"], "task": "Define all TS types" },
{ "name": "db", "files": ["src/db/"], "task": "Set up Prisma schema" }
]
},
{
"name": "feature",
"parallel": true,
"agents": [
{ "name": "api", "files": ["src/api/"], "task": "REST endpoints" },
{ "name": "ui", "files": ["src/ui/"], "task": "React components" }
]
},
{
"name": "quality",
"parallel": true,
"agents": [
{ "name": "tests", "task": "Write unit + integration tests" }
]
}
]
}

Phases run sequentially. Agents within a phase run in parallel. The integration gate runs between phases.
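That phase-by-phase interpretation of the config can be sketched as follows. The shape is assumed from the JSON above; `spawn_agent` and `run_gate` are hypothetical callables standing in for the conductor's real spawn and gate steps:

```python
import json

def run_pipeline(config, spawn_agent, run_gate):
    """Sketch of a pipeline runner: phases execute in order, agents
    within a phase are launched together (truly parallel in the real
    conductor), and the gate runs after each phase when enabled."""
    launched = []
    for phase in config["phases"]:
        for agent in phase["agents"]:
            spawn_agent(agent["name"], agent["task"])
            launched.append(agent["name"])
        if config.get("gate"):
            run_gate()                    # must pass before the next phase
    return launched

config = json.loads("""{
  "gate": true,
  "phases": [
    {"name": "foundation", "agents": [{"name": "types", "task": "define types"}]},
    {"name": "feature", "agents": [{"name": "api", "task": "endpoints"},
                                   {"name": "ui", "task": "components"}]}
  ]
}""")
gates = []
order = run_pipeline(config, spawn_agent=lambda n, t: None,
                     run_gate=lambda: gates.append("gate"))
```

Here `order` records the launch sequence and `gates` shows one gate run per phase.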
Brain Tools
35+ tools across 12 categories. All available to Hermes, Claude Code, and any MCP-compatible agent.
Identity & Health
| Tool | What it does |
| --- | --- |
| brain_register | Name this session |
| brain_sessions | List active sessions |
| | Show session info + room |
| brain_pulse | Heartbeat with status + progress (returns pending DMs) |
| brain_status | Live health of all agents (status, heartbeat age, claims) |
Messaging
| Tool | What it does |
| --- | --- |
| | Post to a channel |
| | Read from a channel |
| | Direct message another agent |
| | Read your DMs |
Shared State & Memory
| Tool | What it does |
| --- | --- |
| brain_set / brain_get | Ephemeral key-value store |
| brain_keys | List / remove keys |
| | Store persistent knowledge (survives restarts) |
| brain_recall | Search memories from previous sessions |
| | Remove outdated memories |
File Locking
| Tool | What it does |
| --- | --- |
| brain_claim | Lock a file/resource (TTL-based mutex) |
| brain_release | Unlock |
| brain_claims | List active locks |
Contracts (prevents integration bugs)
| Tool | What it does |
| --- | --- |
| brain_contract_set | Publish what your module provides / expects |
| brain_contract_get | Read other agents' contracts before coding |
| brain_contract_check | Validate all contracts — catches param mismatches, missing functions |
Integration Gate
| Tool | What it does |
| --- | --- |
| | Run compile + contract check, DM errors to responsible agents |
| auto_gate | Run gate in a loop, wait for fixes, retry until clean |
Task Planning (DAG)
| Tool | What it does |
| --- | --- |
| | Create a task DAG with dependencies |
| | Get tasks whose dependencies are satisfied |
| | Mark task done/failed (auto-promotes dependents) |
| | Overall progress |
| | Turn one natural-language goal into phases, agents, file scopes, and conductor config |
| | Persist the compiled workflow into brain state + a task DAG, optionally write conductor JSON |
Orchestration
| Tool | What it does |
| --- | --- |
| brain_wake | Spawn a new agent (hermes, claude, or headless) |
| brain_swarm | Spawn multiple agents in one call |
| | Replace a failed agent with recovery context |
| brain_metrics | Success rates, duration, error counts per agent |
Context Ledger (prevents losing track)
| Tool | What it does |
| --- | --- |
| | Log action/discovery/decision/error |
| | Read the ledger |
| | Condensed view for context recovery |
| | Save full working state |
| | Recover after context compression |
Heartbeat & Contract Protocol
Every spawned agent follows two protocols that the orchestrator enforces:
Heartbeat — agents call brain_pulse every 2-3 tool calls with their status and a short progress note. The conductor uses this to:

- Show live status in the terminal (● working — editing src/api/routes.ts)
- Detect stalled agents (no pulse in 60s → stale)
- Deliver pending DMs as pulse return values (no extra round-trip)
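The pulse protocol's DM piggyback can be sketched with a toy in-memory brain. This is an assumed shape for illustration (the real store is the SQLite brain DB, and `Brain` here is a hypothetical stand-in):

```python
import time

class Brain:
    """Toy in-memory sketch of the pulse protocol: a pulse records
    status + progress + heartbeat time, and returns any DMs queued
    for the agent, so delivery costs no extra round-trip."""
    def __init__(self):
        self.sessions = {}
        self.dms = {}

    def dm(self, to, content):
        self.dms.setdefault(to, []).append(content)

    def pulse(self, agent, status, progress):
        self.sessions[agent] = {"status": status, "progress": progress,
                                "last_heartbeat": time.time()}
        return self.dms.pop(agent, [])    # hand over pending DMs, then clear

brain = Brain()
brain.dm("worker-1", "Fix type error in src/api/routes.ts:12")
inbox = brain.pulse("worker-1", "working", "editing routes.ts")
empty = brain.pulse("worker-1", "working", "fixing DM'd error")
```

The first pulse drains the queue; the second returns an empty list because nothing new arrived.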
Contracts — before agents write code, they call brain_contract_get to see what other agents export. After writing, they publish their own contract with brain_contract_set. Before marking done, brain_contract_check validates the whole fleet — catches:

- Function signature mismatches (expected 2 args, got 3)
- Missing exports (agent A imports getUser but agent B never exported it)
- Type drift (expected User, got {name, email})
This is the key to matching single-agent integration quality with a parallel fleet.
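The core of a fleet-wide contract check is a cross-reference of every module's expects against every module's provides. A sketch under an assumed data shape (functions mapped to argument counts; the real contracts also carry types):

```python
def check_contracts(contracts):
    """Cross-check contracts: every function a module *expects* must be
    *provided* somewhere, with a matching argument count."""
    provided = {}
    for c in contracts:
        for fn, argc in c.get("provides", {}).items():
            provided[fn] = argc
    mismatches = []
    for c in contracts:
        for fn, argc in c.get("expects", {}).items():
            if fn not in provided:
                mismatches.append(f"{c['module']}: missing export {fn}")
            elif provided[fn] != argc:
                mismatches.append(
                    f"{c['module']}: {fn} expected {argc} args, got {provided[fn]}")
    return mismatches

issues = check_contracts([
    {"module": "auth", "provides": {"getUser": 1}, "expects": {}},
    {"module": "api", "provides": {}, "expects": {"getUser": 2, "getPost": 1}},
])
```

Here the api module both disagrees with auth on getUser's arity and expects an export nobody publishes, which is exactly the class of bug the gate DMs back to agents.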
Integration Gate
sequenceDiagram
participant O as Orchestrator
participant C as Compiler
participant DB as Brain DB
participant A as Agent
O->>C: Run tsc / mypy / cargo / go vet
C-->>O: Errors with file:line:message
O->>DB: Query: who claimed this file?
DB-->>O: Agent X owned src/api/routes.ts
O->>A: DM: "Fix these errors in your files"
Note over A: Agent reads DM on next pulse
Note over A: Fixes code, pulses done
O->>C: Re-run compiler
C-->>O: Clean
O->>DB: Record metricsThe gate auto-detects the project language and runs the appropriate checker:
| Language | Checker |
| --- | --- |
| TypeScript | tsc |
| Python | mypy |
| Rust | cargo |
| Go | go vet |
Errors are parsed, matched to the agent that claimed the failing file, and routed as a DM. Agents pick up their errors on the next pulse and self-correct. The loop retries up to --retries times before giving up.
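Parsing diagnostics and matching them to claim owners can be sketched like this. The file:line:message format and the claims mapping are assumptions for illustration, not the exact formats the gate uses:

```python
import re

def route_errors(compiler_output, claims):
    """Sketch of gate error routing: parse file:line:message lines,
    find the agent whose claim covers each file, and bucket the
    errors into per-agent DM payloads."""
    dms = {}
    for line in compiler_output.splitlines():
        m = re.match(r"(?P<file>[^:]+):(?P<line>\d+):\s*(?P<msg>.+)", line)
        if not m:
            continue                      # not a diagnostic line
        owner = next((agent for agent, files in claims.items()
                      if m["file"] in files), None)
        if owner:
            dms.setdefault(owner, []).append(
                f"{m['file']}:{m['line']}: {m['msg']}")
    return dms

dms = route_errors(
    "src/api/routes.ts:12: TS2345 argument mismatch\n"
    "src/ui/app.tsx:3: TS2304 cannot find name",
    claims={"api": ["src/api/routes.ts"], "ui": ["src/ui/app.tsx"]},
)
```

Each agent then receives only the errors for files it claimed, which keeps the fix loop parallel.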
Mixed Fleets
The brain DB is shared across all MCP clients. A single project can have:
graph LR
subgraph "Fleet"
direction TB
HA["Hermes Agent<br/><small>fast local inference</small>"]
CC["Claude Code<br/><small>deep reasoning</small>"]
MM["MiniMax<br/><small>cheap boilerplate</small>"]
end
subgraph "Brain"
DB[("brain.db")]
end
HA <--> DB
CC <--> DB
MM <--> DB
style HA fill:#F59E0B,stroke:#D97706,color:#000
style CC fill:#9333EA,stroke:#7C3AED,color:#fff
style MM fill:#3B82F6,stroke:#2563EB,color:#fff
style DB fill:#1E293B,stroke:#334155,color:#fff

Route by task type. Use Hermes for routine work, Claude for architectural decisions, cheaper models for boilerplate — all coordinating through the same brain, sharing contracts, gates, memory.
From Claude Code:
brain_wake({ task: "...", cli: "hermes", layout: "headless" })
brain_wake({ task: "...", cli: "claude", layout: "horizontal" })

Advanced
Everything below covers the full technical depth.
Performance
Run the benchmarks yourself:
node benchmark.mjs # SQLite direct layer (1000 iterations)
node benchmark-mcp.mjs # MCP tool layer (30 iterations per tool)

SQLite Direct Layer (2026-04-06, M4 Pro, WAL mode)
| Operation | avg | p50 | p95 | p99 | throughput |
| --- | --- | --- | --- | --- | --- |
| session_register | 0.021ms | 0.011ms | 0.027ms | 0.039ms | ~47K/s |
| message_post (1 msg) | 0.014ms | 0.011ms | 0.019ms | 0.031ms | ~70K/s |
| message_read (50 msgs) | 0.042ms | 0.042ms | 0.045ms | 0.066ms | ~24K/s |
| state_get | 0.002ms | 0.002ms | 0.002ms | 0.003ms | ~570K/s |
| claim_query (all) | 0.001ms | 0.001ms | 0.002ms | 0.002ms | ~670K/s |
| heartbeat_pulse (update) | 0.002ms | 0.002ms | 0.002ms | 0.003ms | ~464K/s |
| session_query (by id) | 0.002ms | 0.002ms | 0.002ms | 0.003ms | ~455K/s |
Direct SQLite: every core coordination operation is sub-millisecond. The KV store (state_get) sustains ~570K reads/s. High-frequency coordination (heartbeats, claims, state) stays well under 1ms.
MCP Tool Layer (2026-04-06, stdio JSON-RPC, 30 calls each)
| Tool | avg | p50 | p95 | min | max |
| --- | --- | --- | --- | --- | --- |
| brain_status | 12.2ms | 12.0ms | 15.6ms | 8.8ms | 21.2ms |
| brain_sessions | 1.9ms | 1.7ms | 3.6ms | 0.9ms | 4.7ms |
| brain_keys | 1.6ms | 1.6ms | 2.6ms | 0.8ms | 4.5ms |
| brain_claims | 2.0ms | 1.8ms | 3.4ms | 1.2ms | 4.9ms |
| brain_metrics | 2.0ms | 1.9ms | 4.0ms | 1.1ms | 4.4ms |
MCP tool calls include JSON-RPC framing, stdio IPC, TypeScript tool dispatch, and SQLite query. Most tools respond in 1-2ms once the server is warm. brain_status is slower (12ms) because it aggregates session data from all rooms — 3000+ sessions were present during the benchmark.
What this means in practice
- High-frequency coordination (heartbeats every 2-3 agent turns, claim/release, state get/set): always goes through Python hermes.db.BrainDB directly, not the MCP layer. Sub-millisecond, no stdio overhead.
- Agent-level operations (spawn, gate, contract check, swarm): use MCP tools. 1-5ms per call is fine — these happen once per agent, not per turn.
- Zero-token coordination overhead: the entire coordination layer (messaging, locking, state, heartbeats) adds no LLM token cost. Tokens are only spent on actual work.
Architecture Deep Dive
graph TB
subgraph "MCP Clients"
HA["hermes sessions"]
CC["claude sessions"]
PY["Python orchestrator"]
end
subgraph "MCP Layer"
M1["brain-mcp<br/><small>stdio server</small>"]
end
subgraph "Python API"
PYDB["hermes.db.BrainDB<br/><small>direct SQLite access</small>"]
end
subgraph "Storage"
DB[("~/.claude/brain/brain.db<br/><small>SQLite WAL</small>")]
end
HA --> M1
CC --> M1
PY --> PYDB
M1 --> DB
PYDB --> DB
subgraph "Tables"
T1["sessions · messages · dms"]
T2["state · claims · contracts"]
T3["memory · plans · metrics"]
T4["context_ledger · checkpoints"]
end
DB --- T1
DB --- T2
DB --- T3
DB --- T4
style HA fill:#F59E0B,stroke:#D97706,color:#000
style CC fill:#9333EA,stroke:#7C3AED,color:#fff
style PY fill:#3776AB,stroke:#2C5F8D,color:#fff
style DB fill:#10B981,stroke:#059669,color:#fff

Design decisions:
- Dual access paths — Agents use MCP (stdio) via brain-mcp. The Python orchestrator uses hermes.db.BrainDB for direct, fast access to the same SQLite file.
- One process per session — No long-running daemon. Each agent opens its own stdio connection.
- WAL mode + 5s busy timeout — Multiple writers serialize safely.
- Heartbeat-based liveness — No pulse for 60s = stale; no pulse for 5 minutes = cleaned up.
- Room scoping — Working directory is the default room. Override with BRAIN_ROOM.
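The heartbeat-based liveness rule reduces to comparing heartbeat age against two thresholds. A sketch using the 60s/5m values from the design notes (the function name and data shape are illustrative):

```python
import time

def classify_agents(last_pulses, now=None, stale_after=60, dead_after=300):
    """Classify each agent by heartbeat age: under 60s is alive,
    60s+ is stale (candidate for replacement), 5m+ is cleaned up."""
    now = time.time() if now is None else now
    states = {}
    for name, last in last_pulses.items():
        age = now - last
        if age >= dead_after:
            states[name] = "cleanup"
        elif age >= stale_after:
            states[name] = "stale"
        else:
            states[name] = "alive"
    return states

now = 1_000_000.0
states = classify_agents(
    {"api": now - 5, "ui": now - 90, "tests": now - 400}, now=now)
```

A conductor would run this on every poll tick and trigger replacement for stale agents.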
Spawned Agent Lifecycle (Hermes Headless)
stateDiagram-v2
[*] --> Spawned: hermes -q &
Spawned --> Initializing: MCP connected
Initializing --> Registered: brain_register
Registered --> ReadingContext: brain_get / brain_recall
ReadingContext --> CheckingContracts: brain_contract_get
state "Working Loop" as Loop {
CheckingContracts --> Claiming: brain_claim files
Claiming --> Editing: make changes
Editing --> Pulsing: brain_pulse (every 2-3 calls)
Pulsing --> ReadingDMs: DMs returned in pulse
ReadingDMs --> Editing: fix errors if any
Editing --> Publishing: brain_contract_set
}
Publishing --> FinalCheck: brain_contract_check
FinalCheck --> Publishing: mismatches found
FinalCheck --> Done: clean
Done --> Releasing: brain_release all files
Releasing --> Reporting: brain_pulse status=done
Reporting --> Exited: process ends
Exited --> [*]

Auto-Recovery
If an agent crashes or goes stale, the orchestrator spawns a replacement with full context:
sequenceDiagram
participant O as Orchestrator
participant DB as Brain DB
participant R as Replacement
Note over O,DB: Agent X went stale (no pulse 60s+)
O->>DB: Get X's progress, claims, messages
DB-->>O: "was editing src/api, claimed 3 files"
O->>DB: Release X's claims
O->>DB: Record failure metric
O->>R: Spawn "X-r4521" with recovery prompt:
Note over R: "You're replacing X.<br/>Last progress: 'editing routes.ts'.<br/>Pick up where they left off."
R->>DB: brain_register, brain_claim, continue

The replacement inherits the original task, knows what files the failed agent touched, and has context about their last known progress.
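Assembling the recovery context is mostly string work. A sketch of what such a prompt builder could look like (the wording and function name are assumptions; the real templates live in hermes/prompt.py):

```python
def recovery_prompt(failed, progress, claimed_files, task):
    """Build a replacement-agent prompt from the failed agent's
    recorded task, last progress note, and released file claims."""
    return (
        f"You are replacing agent '{failed}', which went stale.\n"
        f"Original task: {task}\n"
        f"Their last progress: {progress}\n"
        f"Files they had claimed (now released): {', '.join(claimed_files)}\n"
        "Pick up where they left off."
    )

prompt = recovery_prompt(
    failed="X",
    progress="editing routes.ts",
    claimed_files=["src/api/routes.ts"],
    task="Build REST endpoints",
)
```

The conductor would pass this as the spawned replacement's initial task.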
Database Schema
erDiagram
sessions ||--o{ messages : sends
sessions ||--o{ direct_messages : sends
sessions ||--o{ claims : owns
sessions ||--o{ contracts : publishes
sessions ||--o{ pulses : heartbeats
sessions ||--o{ context_ledger : logs
sessions ||--o{ checkpoints : saves
sessions ||--o{ metrics : records
sessions {
  text id PK
  text name
  text room
  text status
  text progress
  text last_heartbeat
}
messages {
  int id PK
  text channel
  text room
  text sender
  text content
  text created_at
}
direct_messages {
  int id PK
  text from_id
  text to_id
  text content
  bool read
}
state {
  text key PK
  text scope
  text value
  text updated_by
}
claims {
  text resource PK
  text owner_id
  text expires_at
}
contracts {
  text module PK
  text agent_id
  json provides
  json expects
}
memory {
  text id PK
  text room
  text topic
  text content
  text tags
}
plans {
  text id PK
  text room
  json tasks
  json dependencies
}
metrics {
  int id PK
  text agent_name
  text outcome
  int duration_ms
}
context_ledger {
  int id PK
  text agent_id
  text entry_type
  text content
  text file_path
}
checkpoints {
  text id PK
  text agent_id
  json working_state
  text summary
}

Database location: ~/.claude/brain/brain.db
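The claims table above is the TTL mutex. A runnable sketch of how a claim/steal cycle could work against that schema (the `claim` helper is illustrative, not the real brain-mcp code):

```python
import sqlite3

def make_db():
    """In-memory claims table matching the schema sketch above."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE claims (resource TEXT PRIMARY KEY,"
               " owner_id TEXT, expires_at REAL)")
    return db

def claim(db, resource, owner, ttl=120, now=0.0):
    """TTL-based mutex: a claim succeeds only if the resource is free,
    already ours, or the previous claim has expired."""
    row = db.execute("SELECT owner_id, expires_at FROM claims"
                     " WHERE resource=?", (resource,)).fetchone()
    if row and row[1] > now and row[0] != owner:
        return False                      # someone else holds a live claim
    db.execute("INSERT OR REPLACE INTO claims VALUES (?,?,?)",
               (resource, owner, now + ttl))
    return True

db = make_db()
first = claim(db, "src/api.ts", "api", now=100.0)    # free, so claimed
blocked = claim(db, "src/api.ts", "ui", now=150.0)   # live claim, refused
stolen = claim(db, "src/api.ts", "ui", now=300.0)    # expired, so claimed
```

Expiry means a crashed agent's locks free themselves without any cleanup daemon.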
Configuration Reference
| Variable | Default | Description |
| --- | --- | --- |
| | | Pre-set session name |
| | uuid | Pre-set session id (used by orchestrator) |
| BRAIN_ROOM | Working directory | Override room grouping |
| | | Custom database path |
| | | Default CLI for brain_wake |
| | | Model passed to spawned hermes agents |
Using Brain Tools Directly From Hermes
If you don't want the Python CLI, you can orchestrate directly from inside a Hermes session:
hermes> register with name "lead"
hermes> set key="task" value="refactor auth" scope="room"
hermes> wake name="worker-1" task="..." cli="hermes" layout="headless"
hermes> wake name="worker-2" task="..." cli="hermes" layout="headless"
hermes> agents # monitor health
hermes> auto_gate # run gate loop until clean

If Hermes shows namespaced picker entries such as mcp_brain_wake, use the exact picker name. Do not prepend brain_ yourself.
The tools work identically in interactive mode, headless mode, and across mixed fleets.
Claude Code (Visible tmux Panes)
Brain also supports spawning Claude Code sessions in tmux split panes for visual orchestration:
graph TB
subgraph "Your terminal"
direction LR
L["LEAD<br/><small>purple border</small>"]
W1["worker 1<br/><small>blue</small>"]
W2["worker 2<br/><small>emerald</small>"]
W3["worker 3<br/><small>amber</small>"]
end
L -->|brain_wake| W1
L -->|brain_wake| W2
L -->|brain_wake| W3
style L fill:#0d0a1a,stroke:#9333EA,color:#fff,stroke-width:3px
style W1 fill:#0F172A,stroke:#3B82F6,color:#fff
style W2 fill:#0F172A,stroke:#10B981,color:#fff
style W3 fill:#0F172A,stroke:#F59E0B,color:#fff

From Claude Code, say "Refactor the API with 3 agents" — the lead splits the work, spawns 3 Claude sessions in tmux panes, each with a unique colored border, and coordinates through the brain.
Layouts: headless (Hermes default), horizontal, vertical, tiled, window
Memory Bank (Persistent Context)
brain-mcp handles coordination between agents — but it doesn't hold context between waves. Subagents are spawned, do their work, post results, and exit. The orchestrator collects everything.
The problem: If you run 5 waves of agents, each new wave starts with zero memory of what happened before. The brain KV store is ephemeral.
The solution: GSD-inspired memory bank pattern. One file, one source of truth, orchestrator as memory bank.
Orchestrator
│
│ MAINTAINS: ~/.hermes/.brain/STATE.md
│
│ PER WAVE:
│ brain-export-context() → brain_set("task_context", $SLICE)
│ brain_wake(agent, goal + context)
│
│ AFTER RESULTS:
│ brain-read-results() → update STATE.md
│ brain-record-done() / brain-record-decision()
│
└── Subagents: read context, do work, post results, exit

Quick Start
# 1. Source the helper script
source ~/brain-mcp/scripts/brain-memory.sh
# 2. Initialize a session
brain-init "my-project" "session-123"
# 3. Before each wave — get context slice
CTX=$(brain-export-context "auth" "fix login bug")
brain_set "task_context" "$CTX"
brain_wake "agent-1" "fix auth bug"
# 4. After results — update state
brain-record-done 1 "agent-1" "Fixed race condition in token refresh"
brain-complete-agent "agent-1"
# 5. Dump state anytime
brain-dump

State File Structure
## Session → Project, session ID, status
## Current Phase → init | planning | executing | reviewing | complete
## Orchestrator Memory → Accumulated context (the "memory")
## Agent Context → Per-agent status and work tracking
## Files Under Work → Who is editing what (claim/release)
## Session Log → Wave-by-wave history for resume

Key Principles
| Principle | Why |
| --- | --- |
| One file, not KV | STATE.md is the source of truth. brain KV is transport only. |
| Orchestrator writes | Subagents read + propose. Orchestrator updates state. |
| Slices, not dumps | Each agent gets only what it needs. Keep it lean. |
| Git-diffable | STATE.md is human-readable, git-tracked, resumable. |
| Persistent | Survives agent restarts. Brain KV does not. |
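The "orchestrator writes" principle is a small append under the Session Log heading. A sketch of that update step in Python, assuming the STATE.md layout shown above (the helper name mirrors the bash `brain-record-done` but is illustrative):

```python
def record_done(state_text, wave, agent, summary):
    """Append a wave entry under '## Session Log' in STATE.md so later
    waves can read what already happened."""
    entry = f"- wave {wave}: {agent}: {summary}"
    lines = state_text.splitlines()
    if "## Session Log" in lines:
        lines.append(entry)               # log section exists, append to it
    else:
        lines.extend(["## Session Log", entry])
    return "\n".join(lines)

state = "## Session\nproject: my-project\n## Session Log"
state = record_done(state, 1, "agent-1",
                    "Fixed race condition in token refresh")
```

The orchestrator would write the returned text back to ~/.hermes/.brain/STATE.md after each wave.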
What's Included
skills/brain-memory-bank/ # Full skill documentation
├── SKILL.md # Pattern guide + examples
scripts/
├── brain-memory.sh # Bash helpers (source this in your workflow)
│
.brain/ # (created at runtime)
└── STATE.md # Persistent session state

Before vs After
| Without Memory Bank | With Memory Bank |
| --- | --- |
| Wave 3 agent asks "what did wave 1 do?" | Reads STATE.md — knows exactly |
| Orchestrator forgets blocker from wave 2 | Blockers persist in STATE.md |
| No shared context between waves | Context accumulated across waves |
| Agents start cold every wake | Agents get relevant context slice |
Development
# Node.js MCP server
npm run dev # watch mode
npm run build # compile TypeScript
npm start # run server
# Python orchestrator
pip install -e . # install hermes-brain
python -m hermes.cli "task" --agents a b c

Repo layout:
brain-mcp/
├── src/ # TypeScript MCP server (brain-mcp)
│ ├── index.ts # Tool definitions (30+ tools)
│ ├── db.ts # SQLite layer
│ ├── conductor.ts # brain_wake / brain_swarm logic
│ └── gate.ts # Integration gate
├── hermes/ # Python orchestration (hermes-brain)
│ ├── cli.py # hermes-brain CLI entry point
│ ├── orchestrator.py # Conductor — spawn, wait, gate, retry
│ ├── db.py # Direct SQLite access (shares brain.db)
│ ├── gate.py # Compiler + contract checks
│ └── prompt.py # Agent prompt templates
├── skills/
│ └── brain-memory-bank/ # GSD-style persistent context skill
│ └── SKILL.md # Memory bank pattern documentation
├── scripts/
│ └── brain-memory.sh # Bash helpers for orchestrator workflows
├── benchmark.mjs # SQLite layer benchmark (1000 iterations)
├── benchmark-mcp.mjs # MCP tool layer benchmark (30 calls per tool)
├── setup-hermes.sh # Full installer
└── pyproject.toml # Python package config

Python 3.10+ · Node.js 18+ · Hermes Agent · MCP Protocol