# kje-mcp
Claude.ai-compatible MCP server that wraps Jim Brain (persistent memory + empire state + vault) and a headless Claude Code dispatcher on the RackNerd VPS, so any Claude.ai web session can:

- read the empire's current state, projects, and recent memories,
- search semantic memory and the credentials vault,
- write fast logs and semantic memories back to Brain,
- dispatch `claude -p` build sessions on the VPS.
The server speaks MCP Streamable HTTP (the transport Claude.ai's "Custom Connector" UI expects). It uses stateless / JSON-response mode, so it can run anywhere a normal HTTP API can — Railway, in this case.
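Stateless JSON-response mode means each POST to `/mcp/` carries one complete JSON-RPC message and gets one JSON body back: no session state, no SSE stream to hold open. A toy stdlib-only sketch of that request/response shape (not the actual FastMCP internals; the result fields follow the MCP `initialize` schema):

```python
import json

def handle_mcp_post(raw: bytes) -> bytes:
    """Toy stateless handler: one JSON-RPC request in, one JSON body out."""
    req = json.loads(raw)
    if req.get("method") == "initialize":
        result = {
            "protocolVersion": "2025-03-26",
            "capabilities": {"tools": {}},
            "serverInfo": {"name": "kje-mcp", "version": "1.0.0"},
        }
        return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result}).encode()
    # anything else: standard JSON-RPC "method not found"
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req.get("id"),
        "error": {"code": -32601, "message": "Method not found"},
    }).encode()
```

Because every request is self-contained like this, the server can run behind any plain HTTP host (Railway included) with no sticky sessions.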
## Tools exposed

| Tool | What it does |
| --- | --- |
| `brain_status` | Current empire state from Brain |
| `brain_search` | Semantic search over Qdrant memories |
| `brain_get_project` | Project context at a chosen depth |
| `brain_vault_search` | Natural-language credential lookup |
| `brain_log` | Fast Supabase log write |
| `brain_memory` | Semantic memory write |
| `cc_dispatch` | Spawn headless `claude -p` build sessions on the VPS |
All tools auth to Brain with the lowercase `x-brain-key` header (the only header Brain accepts — burned in 2026-04-27 BridgeDeck debugging). The MCP layer itself accepts a Bearer token in the `Authorization` header — that's what Claude.ai's connector UI sends.
## File layout

```text
kje-mcp/
├── main.py                     # FastAPI + FastMCP server (deployed to Railway)
├── requirements.txt
├── railway.toml
├── .env.example
├── README.md
└── vps/
    ├── cc_dispatch_server.py   # Companion dispatcher — runs on the RackNerd VPS
    ├── requirements.txt
    └── kje-cc-dispatch.service # systemd unit
```

## 1. Deploy kje-mcp to Railway
### 1a. Push the repo to GitHub

```shell
cd C:\Users\Jim\Documents\GitHub\kje-mcp
git add .
git commit -m "feat: kje-mcp v1.0.0 — Claude.ai MCP wrapper for Jim Brain + CC dispatch"
gh repo create jharriGH/kje-mcp --public --source=. --remote=origin --push
```

### 1b. Create the Railway service
```shell
# from inside the kje-mcp dir
railway init   # name it: kje-mcp
railway link   # link to the new project
railway up     # builds + deploys
```

Or via the dashboard: New Project → Deploy from GitHub Repo → jharriGH/kje-mcp. Railway auto-detects Nixpacks Python and uses railway.toml's startCommand.
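The railway.toml itself isn't reproduced here; a plausible minimal version in Railway's config-as-code format would look like this (the startCommand is an assumption, so defer to whatever the repo's actual file says):

```toml
[build]
builder = "NIXPACKS"

[deploy]
startCommand = "uvicorn main:app --host 0.0.0.0 --port $PORT"
```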
### 1c. Set environment variables

```shell
railway variables --set BRAIN_KEY=jim-brain-kje-2026-kingjames
railway variables --set BRAIN_URL=https://jim-brain-production.up.railway.app
railway variables --set MCP_AUTH_KEY=jim-brain-kje-2026-kingjames
# OPTIONAL — only set these once the VPS dispatcher is up (step 3):
# railway variables --set VPS_DISPATCH_URL=https://cc.kj.empire/dispatch
# railway variables --set VPS_DISPATCH_KEY=<long-random-string>
```

Per ENV VAR AUTOMATION RULE, Claude Code drives this — Jim never clicks through Railway dashboards.
### 1d. Verify the deploy

```shell
# replace <host> with the Railway-assigned URL (railway domain or your custom one)
curl -s https://<host>/health | jq
# expect: {"status":"ok","service":"kje-mcp",...,"vps_dispatch_configured":false,...}

# initialize handshake (Claude.ai will send the same)
curl -s -X POST https://<host>/mcp/ \
  -H "Authorization: Bearer jim-brain-kje-2026-kingjames" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl","version":"1"}}}' | jq

# list tools
curl -s -X POST https://<host>/mcp/ \
  -H "Authorization: Bearer jim-brain-kje-2026-kingjames" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/list"}' | jq '.result.tools[].name'
# expect 7 names: brain_status, brain_search, brain_get_project, brain_vault_search,
# brain_log, brain_memory, cc_dispatch
```

## 2. Connect from Claude.ai web
1. Open claude.ai → Settings → Connectors → Add custom connector.
2. Name: `KJE MCP`.
3. URL: `https://<your-railway-host>/mcp/` (note the trailing slash and the `/mcp/` path).
4. Auth: API key / Bearer token → paste `jim-brain-kje-2026-kingjames` (or whatever you set `MCP_AUTH_KEY` to).
5. Save. Claude.ai performs the MCP `initialize` + `tools/list` handshake; if the green check appears next to "KJE MCP", the 7 tools are available in chat.
In any new Claude.ai conversation:

1. Open the conversation's tool/connector menu and toggle KJE MCP on.
2. Start your prompt with the equivalent of `brain_session_start`: "Use KJE MCP. Call `brain_status` and `brain_get_project` for `kj_codedeck` (depth=standard) before answering. Then [task]."
3. When you wrap up, Claude.ai will run the closing ritual via the same connector — `brain_memory` for the summary, `brain_log` for progress, and `cc_dispatch` if any work needs to keep building on the VPS.
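Under the hood, each of those ritual steps is an MCP `tools/call` request over the same `/mcp/` endpoint exercised in step 1d. A raw `brain_log` call, for example, would be shaped roughly like this (the `arguments` keys are illustrative; check the schemas returned by `tools/list` for the real ones):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "brain_log",
    "arguments": {"event": "progress", "detail": "closed out CodeDeck session"}
  }
}
```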
## 3. (Optional) Stand up the VPS dispatcher for `cc_dispatch`
Without this, `cc_dispatch` returns a clean `cc_dispatch_not_configured` error and the other six tools work fine. Set it up when you actually want Claude.ai to launch CC sessions on the VPS.
### 3a. Sync the vps/ folder onto 104.223.120.21

```shell
# from your laptop, in the kje-mcp repo
scp -r vps/ jim@104.223.120.21:/home/jim/kje-mcp/
```

### 3b. Create a venv and install deps on the VPS
```shell
ssh jim@104.223.120.21
cd /home/jim/kje-mcp
python3 -m venv .venv
.venv/bin/pip install -r vps/requirements.txt
```

### 3c. Configure .env
```shell
cat > /home/jim/kje-mcp/vps/.env <<'EOF'
VPS_DISPATCH_KEY=<long-random-string-paste-same-on-railway>
BRAIN_URL=https://jim-brain-production.up.railway.app
BRAIN_KEY=jim-brain-kje-2026-kingjames
PROJECT_REPOS_BASE=/home/jim/repos
# Optional overrides for slugs whose repo dir name differs from the slug:
# PROJECT_REPOS_OVERRIDES={"kj_autonomous":"/home/jim/n8n-canvas/kj-autonomous"}
CLAUDE_BIN=/usr/local/bin/claude
DISPATCH_LOG_DIR=/var/log/kje-cc-sessions
EOF

sudo mkdir -p /var/log/kje-cc-sessions
sudo chown jim:jim /var/log/kje-cc-sessions
```

### 3d. Install + start the systemd unit
```shell
sudo cp /home/jim/kje-mcp/vps/kje-cc-dispatch.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now kje-cc-dispatch
sudo systemctl status kje-cc-dispatch
curl -s http://127.0.0.1:8088/health
```

### 3e. Expose with TLS (nginx + Let's Encrypt)
```nginx
# /etc/nginx/sites-available/cc.kj.empire
server {
    listen 443 ssl http2;
    server_name cc.kj.empire;
    ssl_certificate     /etc/letsencrypt/live/cc.kj.empire/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/cc.kj.empire/privkey.pem;
    client_max_body_size 4m;
    proxy_read_timeout 65s;

    location /dispatch {
        proxy_pass http://127.0.0.1:8088/dispatch;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Dispatch-Key $http_x_dispatch_key;
    }

    location /health {
        proxy_pass http://127.0.0.1:8088/health;
    }
}
```

```shell
sudo ln -s /etc/nginx/sites-available/cc.kj.empire /etc/nginx/sites-enabled/
sudo certbot --nginx -d cc.kj.empire
sudo nginx -t && sudo systemctl reload nginx
```

### 3f. Wire the kje-mcp service to it
```shell
# from the kje-mcp repo on your laptop
railway variables --set VPS_DISPATCH_URL=https://cc.kj.empire/dispatch
railway variables --set VPS_DISPATCH_KEY=<same-random-string>
railway redeploy
```

`/health` on the kje-mcp service should now show `"vps_dispatch_configured": true`.
## How a Claude.ai session uses this end-to-end

1. **Session start.** Connector toggled on. Claude.ai calls `brain_status` → empire context, then `brain_get_project("kj_codedeck", "standard")` → injection prompt.
2. **Working.** Claude.ai answers from in-chat reasoning. Mid-stream, it calls `brain_search` for prior decisions or `brain_vault_search` for credentials it needs to discuss.
3. **Heavy lifting.** When the work needs a real CC build session, Claude.ai calls `cc_dispatch(project="kj_codedeck", prompt="<full build prompt>")`. The VPS dispatcher acks immediately with a `session_id`. The CC session runs in the background.
4. **Hand-off.** When the CC session finishes, the VPS dispatcher posts a hand-off to Brain `/codedeck/handoff` (fast log + project next_action update + build card if 3+ files touched + queued semantic memory). The next Claude.ai session sees it via `brain_search`.
5. **Closing ritual.** Claude.ai calls `brain_memory` (full summary), `brain_log` (progress event), and the build-card save happens automatically server-side via the hand-off. No asking Jim "which option?" — that's the rule.
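The hand-off in step 4 can be pictured as a small payload builder. The field names below are illustrative guesses, not the real `/codedeck/handoff` schema; only the 3-file build-card threshold comes from the text above:

```python
def build_handoff(session_id: str, project: str, summary: str, files_touched: int) -> dict:
    """Illustrative hand-off payload: include a build card only when 3+ files were touched."""
    return {
        "session_id": session_id,
        "project": project,
        "summary": summary,
        "build_card": files_touched >= 3,  # the 3-file threshold from step 4
    }
```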