Open Brain MCP Server
A personal semantic knowledge base exposed as MCP tools. Store, search, and retrieve memories using natural language across Cursor, Claude Desktop, or any MCP-compatible client.
Tools
| Tool | Description |
| --- | --- |
| `search_brain` | Semantic similarity search across all memories |
| — | Embed and store a new piece of knowledge |
| — | Filtered list retrieval by source, tags, or date — no embedding needed |
| — | Delete a memory by UUID |
| — | Counts and breakdown by source |
| `discover_tools` | Semantic search across the tool registry (Toolshed) |
| `index_cursor_chats` | Index Cursor agent transcripts as searchable work history |
| `search_work_history` | Keyword search across raw Cursor transcript files |
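`search_brain` ranks stored memories by vector similarity. As a rough illustration of what that ranking means — not the server's actual code, since pgvector computes similarity on the database side — cosine similarity between two embedding vectors looks like this:

```typescript
// Cosine similarity between two equal-length embedding vectors.
// Illustrative only; in this server the comparison happens inside
// Postgres via the match_memories SQL function.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```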
Setup
```
cd mcp-server
npm install
cp .env.example .env
# edit .env with your credentials
```

Configuration
All configuration is via environment variables in .env.
Required (always)
| Variable | Description |
| --- | --- |
| `OPENROUTER_API_KEY` | Used to generate embeddings via OpenRouter |
Database backend
The server supports two database backends. Set DB_BACKEND to choose (default: supabase).
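A minimal sketch of how such a switch could be resolved — a hypothetical helper for illustration, not the server's actual configuration code:

```typescript
type DbBackend = "supabase" | "postgres";

// Resolve the backend from an env-like record, defaulting to "supabase"
// as the README describes. Hypothetical helper; the real logic lives in src/.
function resolveBackend(env: Record<string, string | undefined>): DbBackend {
  const value = env.DB_BACKEND ?? "supabase";
  if (value !== "supabase" && value !== "postgres") {
    throw new Error(`Unsupported DB_BACKEND: ${value}`);
  }
  return value;
}
```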
Supabase (default)
```
DB_BACKEND=supabase
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_SERVICE_ROLE_KEY=your_service_role_key
```

Raw Postgres
Point the server at any Postgres instance with the pgvector extension and the brain_memories schema applied.
```
DB_BACKEND=postgres
DATABASE_URL=postgresql://user:password@host:5432/dbname
```

Both backends use the same schema and the same `match_memories` SQL function. See Database Schema below.
Optional
| Variable | Default | Description |
| --- | --- | --- |
| — | — | OpenRouter embedding model |
| — | — | Must match the model output and the schema |
| — | — | Port for the HTTP/SSE transport |
| `CURSOR_TRANSCRIPTS_DIR` | — | Path to Cursor agent-transcripts directory; enables `index_cursor_chats` and `search_work_history` |
Running
stdio transport (Cursor / Claude Desktop)
```
npm run dev:stdio    # development (tsx)
npm run start:stdio  # production (compiled JS)
```

Add to `.cursor/mcp.json`:
```json
{
  "mcpServers": {
    "open-brain": {
      "command": "npx",
      "args": ["tsx", "/path/to/mcp-server/src/stdio.ts"],
      "env": {
        "DB_BACKEND": "supabase",
        "SUPABASE_URL": "...",
        "SUPABASE_SERVICE_ROLE_KEY": "...",
        "OPENROUTER_API_KEY": "..."
      }
    }
  }
}
```

To use raw Postgres instead, swap the `env` block:
```json
{
  "env": {
    "DB_BACKEND": "postgres",
    "DATABASE_URL": "postgresql://user:pass@host:5432/dbname",
    "OPENROUTER_API_KEY": "..."
  }
}
```

HTTP / SSE transport (network-accessible)
```
npm run dev:http    # development
npm run start:http  # production
```

Endpoints:
| Endpoint | Description |
| --- | --- |
| — | SSE stream (MCP SSE transport) |
| — | MCP message handling |
| — | Health check |
Database Schema
Both backends require the following on the Postgres instance:
- `pgvector` extension (for the `halfvec` type)
- `brain_memories` table
- `match_memories` SQL function
- `brain_stats` view
Schema is managed via the migrations in supabase/migrations/. For a raw Postgres instance, run the migration files in order against your database:
```
001_initial_schema.sql
002_open_brain.sql
003_brain_rls.sql
004_vector_halfvec.sql
005_uuid_default.sql
006_storage_fillfactor.sql
007_column_reorder.sql
```

brain_memories table
```sql
CREATE TABLE brain_memories (
  id uuid NOT NULL DEFAULT gen_random_uuid(),
  created_at timestamptz DEFAULT NOW(),
  updated_at timestamptz DEFAULT NOW(),
  source text NOT NULL DEFAULT 'manual',
  content text NOT NULL,
  tags text[] DEFAULT '{}',
  source_metadata jsonb DEFAULT '{}',
  embedding halfvec(1536)
);
```

Valid `source` values: `manual`, `telegram`, `cursor`, `api`, `conversations`, `knowledge`, `work_history`, `toolshed`.
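On the application side, a row from this table might be modeled as follows — an illustrative typing derived from the schema above, not part of the server's published API:

```typescript
// The eight valid source values from the schema documentation.
const VALID_SOURCES = [
  "manual", "telegram", "cursor", "api",
  "conversations", "knowledge", "work_history", "toolshed",
] as const;
type MemorySource = (typeof VALID_SOURCES)[number];

// Illustrative row shape mirroring the brain_memories columns.
interface BrainMemory {
  id: string;                               // uuid
  created_at: string;                       // timestamptz
  updated_at: string;                       // timestamptz
  source: MemorySource;
  content: string;
  tags: string[];
  source_metadata: Record<string, unknown>;
  embedding: number[] | null;               // halfvec(1536) on the Postgres side
}

// Guard for validating a source value before insert.
function isValidSource(value: string): value is MemorySource {
  return (VALID_SOURCES as readonly string[]).includes(value);
}
```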
Toolshed
The Toolshed (`discover_tools`) solves the "tool explosion" problem. Instead of injecting hundreds of MCP tool schemas into the agent context, the agent calls `discover_tools` with a natural language query and gets back only the tools relevant to the current task.

Tool descriptions are loaded from `tool-registry.json` and embedded into `brain_memories` (source `toolshed`) at startup. Indexing is idempotent.
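Idempotent indexing can be as simple as skipping registry entries that are already stored. A hypothetical sketch — the registry shape and helper name are assumptions, not the server's actual code:

```typescript
// Assumed shape of one tool-registry.json entry.
interface ToolEntry {
  name: string;
  description: string;
}

// Return only the entries not yet present in the store, so re-running
// the indexer never creates duplicate toolshed memories.
function toolsToIndex(registry: ToolEntry[], indexedNames: Set<string>): ToolEntry[] {
  return registry.filter((tool) => !indexedNames.has(tool.name));
}
```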
Work History Indexing
When CURSOR_TRANSCRIPTS_DIR is set, two additional tools are enabled:
- `index_cursor_chats` — reads JSONL transcript files from the directory, embeds each session summary, and stores it as a `work_history` memory. Re-running is idempotent (already-indexed sessions are skipped).
- `search_work_history` — keyword search across raw transcript files for exact phrase matching. Complements the semantic `search_brain`.
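The keyword path does plain substring matching over transcript text rather than embedding lookups. Conceptually — an illustration only, the server's matching logic may differ:

```typescript
// Case-insensitive phrase match over transcript lines.
// Illustrates keyword search as a complement to semantic search.
function keywordSearch(lines: string[], phrase: string): string[] {
  const needle = phrase.toLowerCase();
  return lines.filter((line) => line.toLowerCase().includes(needle));
}
```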
```
CURSOR_TRANSCRIPTS_DIR=/Users/you/.cursor/projects/.../agent-transcripts
```

Development
```
npm run build      # compile TypeScript to dist/
npm run dev:stdio  # run stdio server with tsx (hot reload)
npm run dev:http   # run HTTP server with tsx (hot reload)
```