🧠 Digital Brain MCP

A Second Brain powered by Model Context Protocol (MCP), Google Gemini Embedding 2, and Supabase pgvector, deployed on Vercel.

Connect any MCP-compatible AI client (Claude, Cursor, OpenCode, Copilot, etc.) and give it persistent long-term memory. Store notes, code, research, decisions, and any other knowledge, then recall it instantly with semantic search.
Architecture

AI Client (Claude / Cursor / OpenCode / Copilot)
        │
        ▼  MCP Protocol (Streamable HTTP + SSE)
           Authorization: Bearer <api-key>
┌───────────────────────────────┐
│       Vercel (Next.js)        │
│     /api/mcp/[transport]      │
│                               │
│  ┌── Auth Middleware ───┐     │
│  │  Bearer token check  │     │
│  └──────────────────────┘     │
│                               │
│  Tools:                       │
│   • store_memory              │
│   • search_memory             │
│   • list_memories             │
│   • update_memory             │
│   • delete_memory             │
│   • get_stats                 │
└──────────────┬────────────────┘
               │
        ┌──────┴──────┐
        ▼             ▼
  ┌───────────┐  ┌──────────────┐
  │  Gemini   │  │   Supabase   │
  │  Embed 2  │  │  PostgreSQL  │
  │    API    │  │  + pgvector  │
  └───────────┘  │  vector(768) │
                 └──────────────┘

How It Works
1. You say (in Claude/Cursor/etc.): "Remember that the EBR system uses Azure Functions for the API layer"
2. The MCP client calls your Digital Brain's `store_memory` tool
3. Gemini Embedding 2 converts the text into a 768-dimension vector
4. Supabase stores the text + vector in PostgreSQL with pgvector
5. Later, you ask: "What tech does the EBR system use?"
6. `search_memory` embeds your query, runs a cosine similarity search, and returns the matching memory
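The similarity step can be sketched in TypeScript. This is an illustrative stand-alone function, not the server's actual code (in production, pgvector computes cosine distance inside PostgreSQL using the HNSW index):

```typescript
// Cosine similarity between two embedding vectors -- the same measure pgvector
// uses (as cosine distance) when search_memory matches a query against stored
// memories. Toy 3-d vectors here; the real embeddings are 768-d.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

const query = [0.1, 0.9, 0.2];
const memory = [0.12, 0.85, 0.25];
console.log(cosineSimilarity(query, memory).toFixed(3)); // → "0.997"
```

A score near 1 means the memory is semantically close to the query; the server's default threshold of 0.4 filters out weak matches.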
Security Model
The server uses Bearer token authentication on every request:
- Fail-closed: if no API keys are configured, ALL requests are rejected
- Multi-key support: set multiple comma-separated keys in `DIGITAL_BRAIN_API_KEYS` so each client gets its own key (and you can rotate keys independently)
- Row Level Security (RLS): enabled on the Supabase `memories` table; only `service_role` can access data, and the anon key has zero access
- Service Role Key: stored only server-side in Vercel env vars, never exposed to clients
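A minimal sketch of the fail-closed check described above. The function name and signature are assumptions for illustration (the real logic lives in `src/lib/auth.ts`):

```typescript
// Hypothetical sketch of the fail-closed Bearer token check.
// Assumption: keys arrive as the comma-separated DIGITAL_BRAIN_API_KEYS value.
function isAuthorized(
  authHeader: string | null,
  configuredKeys: string | undefined
): boolean {
  const keys = (configuredKeys ?? "")
    .split(",")
    .map((k) => k.trim())
    .filter((k) => k.length > 0);
  if (keys.length === 0) return false; // fail-closed: no keys configured -> reject everything
  if (!authHeader || !authHeader.startsWith("Bearer ")) return false;
  return keys.includes(authHeader.slice("Bearer ".length).trim());
}
```

The important property is the early `return false` when the key list is empty: a misconfigured deployment denies everyone rather than allowing anyone.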
Generating API Keys
# Generate a strong 256-bit key
openssl rand -hex 32

Tech Stack
| Component | Technology | Purpose |
| --- | --- | --- |
| Embeddings | Gemini Embedding 2 | Multimodal embeddings: text, images, audio, video, and PDF all in one vector space |
| Vector DB | Supabase + pgvector | PostgreSQL with vector similarity search (HNSW index, cosine distance) |
| MCP Server | Next.js | Exposes tools via the MCP protocol with SSE transport |
| Hosting | Vercel | Serverless deployment, auto-scaling, scale-to-zero |
| Session Store | Upstash Redis (via Vercel KV) | Redis-backed SSE session management |
| Auth | Bearer token middleware | API key validation on every request |
Why 768 dimensions?
Gemini Embedding 2 outputs 3072 dimensions by default but supports Matryoshka Representation Learning (MRL): you can truncate to 768 dimensions with minimal quality loss. This saves ~75% storage and makes queries significantly faster, which matters far more for a personal knowledge base than that last fraction of accuracy.
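In code, MRL truncation is just slicing the vector and renormalizing. A sketch (the L2-renormalization step is an assumption based on common practice for truncated Matryoshka embeddings, not code quoted from this repo):

```typescript
// Truncate a Matryoshka embedding to its first `dims` dimensions, then
// L2-normalize so cosine similarity still behaves sensibly.
function truncateEmbedding(vec: number[], dims = 768): number[] {
  const head = vec.slice(0, dims);
  const norm = Math.sqrt(head.reduce((sum, x) => sum + x * x, 0));
  return head.map((x) => x / norm);
}

// Toy example: a 4-d "embedding" truncated to 2 dims.
console.log(truncateEmbedding([3, 4, 0.1, 0.2], 2)); // → [0.6, 0.8]
```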
MCP Tools Reference
store_memory
Save a new piece of knowledge to the Digital Brain.
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
|  | string | ✅ | The text content to store |
|  | string |  | Where it came from |
|  | string[] |  | Tags for categorization |
|  | enum |  |  |
|  | object |  | Arbitrary structured metadata |
search_memory
Semantic search across everything stored. Your query is embedded and matched by cosine similarity.
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
|  | string | ✅ | Natural language search query |
|  | number |  | Max results (default 10, max 50) |
|  | number |  | Minimum similarity 0–1 (default 0.4) |
|  | string[] |  | Only return memories with at least one matching tag |
list_memories
Browse memories with optional filters (no embedding needed).
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
|  | string |  | Filter by type |
|  | string[] |  | Filter by tags |
|  | number |  | Max results (default 20, max 100) |
|  | number |  | Pagination offset |
update_memory
Modify an existing memory. If content changes, a new embedding is generated automatically.
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
|  | number | ✅ | Memory ID (from search/list results) |
|  | string |  | New content (re-embeds automatically) |
|  | string[] |  | Replace tags |
|  | string |  | Update source |
|  | object |  | Replace metadata |
delete_memory
Permanently remove a memory by ID.
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
|  | number | ✅ | Memory ID to delete |
get_stats
Get brain statistics: total count, breakdown by type, and top tags.
No parameters.
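To illustrate the shape of what `get_stats` reports, here is a pure-TypeScript sketch of the aggregation. The real counting happens in SQL helpers created by the migration; the row type and field names here are assumptions for the sketch:

```typescript
// Illustrative aggregation matching what get_stats reports: total count,
// breakdown by type, and top tags. Field names are assumed, not from the repo.
type MemoryRow = { content_type: string; tags: string[] };

function getStats(rows: MemoryRow[]) {
  const byType: Record<string, number> = {};
  const tagCounts: Record<string, number> = {};
  for (const row of rows) {
    byType[row.content_type] = (byType[row.content_type] ?? 0) + 1;
    for (const tag of row.tags) tagCounts[tag] = (tagCounts[tag] ?? 0) + 1;
  }
  // Sort tags by frequency (descending) and keep the five most common.
  const topTags = Object.entries(tagCounts)
    .sort((a, b) => b[1] - a[1])
    .slice(0, 5)
    .map(([tag]) => tag);
  return { total: rows.length, byType, topTags };
}
```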
Setup Guide
Prerequisites
Node.js 18+
A Supabase account (free tier works)
A Google AI Studio API key (free tier)
A Vercel account (free Hobby plan works)
Step 1: Clone the Repo
git clone https://github.com/YOUR_USERNAME/digital-brain-mcp.git
cd digital-brain-mcp
npm install

Step 2: Set Up Supabase
1. Create a new Supabase project (or use an existing one)
2. Go to the SQL Editor in the Supabase dashboard
3. Copy the contents of `supabase/migrations/001_create_memories.sql`, then paste and run the entire SQL script
   This creates the `memories` table, the pgvector extension, the HNSW index, the search functions, the RLS policies, and the stat helpers
4. Get your credentials from Supabase → Settings → API:
   - `SUPABASE_URL`: the Project URL
   - `SUPABASE_SERVICE_ROLE_KEY`: the `service_role` secret (NOT the anon key)
Step 3: Get a Gemini API Key
1. Go to Google AI Studio
2. Create a new API key
3. Save it as `GEMINI_API_KEY`
Step 4: Generate Your MCP API Key
openssl rand -hex 32

Save the output as `DIGITAL_BRAIN_API_KEYS`.
Step 5: Local Development
# Create .env.local with your keys
cp .env.example .env.local
# Edit .env.local with your actual values
# Start the dev server
npm run dev

The MCP endpoint will be at http://localhost:3000/api/mcp/sse.
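For reference, a hypothetical `.env.local` with placeholder values. The variable names match those listed in this guide; the values are illustrative only:

```
DIGITAL_BRAIN_API_KEYS=key-for-claude,key-for-cursor
GEMINI_API_KEY=your-google-ai-studio-key
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_SERVICE_ROLE_KEY=your-service-role-secret
```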
Step 6: Deploy to Vercel
1. Push the repo to GitHub
2. Import the project in Vercel
3. Set environment variables in the Vercel dashboard:
   - `DIGITAL_BRAIN_API_KEYS`: your generated key(s)
   - `GEMINI_API_KEY`: your Google AI key
   - `SUPABASE_URL`: your Supabase project URL
   - `SUPABASE_SERVICE_ROLE_KEY`: your Supabase service role key
4. Create a KV (Redis) store: Vercel dashboard → Storage → Create KV Database (this auto-sets `REDIS_URL`)
5. Set a firewall bypass for MCP: Settings → Security → Firewall → Add rule:
   - Condition: "Request path contains `/api/mcp`"
   - Action: "Bypass"
6. Deploy!
Your production MCP endpoint: https://digital-brain-mcp.vercel.app/api/mcp/sse
Connecting AI Clients
Claude Desktop / Claude Code
Add to your Claude MCP config (~/.claude/claude_desktop_config.json or project .mcp.json):
{
"mcpServers": {
"digital-brain": {
"type": "stdio",
"command": "npx",
"args": [
"-y",
"mcp-remote",
"https://digital-brain-mcp.vercel.app/api/mcp/sse",
"--header",
"Authorization:Bearer YOUR_API_KEY_HERE"
]
}
}
}

Cursor
Go to Settings → Cursor Settings → Tools & MCP → Add Server:

- Type: SSE
- URL: https://digital-brain-mcp.vercel.app/api/mcp/sse
- Headers: Authorization: Bearer YOUR_API_KEY_HERE
OpenCode / Any MCP Client
Use the SSE endpoint https://digital-brain-mcp.vercel.app/api/mcp/sse with an Authorization: Bearer <key> header.
Perplexity / Computer
Connect via the MCP config pattern above, or access the Supabase database directly through an existing connector.
Project Structure
digital-brain-mcp/
├── src/
│   ├── app/
│   │   ├── api/
│   │   │   └── mcp/
│   │   │       └── [transport]/
│   │   │           └── route.ts   ← MCP endpoint (tools + auth)
│   │   ├── layout.tsx             ← Root layout
│   │   └── page.tsx               ← Landing page
│   └── lib/
│       ├── auth.ts                ← Bearer token authentication
│       ├── embeddings.ts          ← Gemini Embedding 2 client
│       └── supabase.ts            ← Supabase client + data helpers
├── supabase/
│   └── migrations/
│       └── 001_create_memories.sql  ← Full database schema
├── .env.example                   ← Template for environment variables
├── .mcp.json                      ← MCP client connection config
├── package.json
├── tsconfig.json
├── next.config.js
└── README.md                      ← This file

Example Usage
Once connected, you can say things like:
"Remember that the Revvity Signals API uses OAuth 2.0 client credentials flow" β Calls
store_memorywith appropriate tags"What do I know about authentication patterns?" β Calls
search_memory, finds semantically related memories"Show me all my code snippets" β Calls
list_memorieswithcontent_type: "code""How many memories do I have?" β Calls
get_stats
Cost Estimate
| Service | Free Tier | Paid Threshold |
| --- | --- | --- |
| Supabase | 500 MB database, 1 GB storage | ~650K memories at 768d before hitting the limit |
| Vercel | Hobby plan (100 GB bandwidth) | Heavy team usage |
| Gemini API | Generous free quota | Thousands of embeddings/day |
| Upstash Redis | 10K commands/day | Heavy concurrent sessions |
For personal second-brain use, everything stays well within free tiers.
Future Enhancements
- Multimodal storage: store images/PDFs directly (Gemini Embedding 2 supports them natively)
- Auto-tagging: use an LLM to suggest tags for new memories
- Bulk import: a CLI tool to import from Obsidian, Notion, or markdown files
- Scheduled embedding refresh: re-embed old memories when the model improves
- Multi-user support: add a user_id column and JWT auth for shared deployments
License
MIT