🧠 Digital Brain MCP
A Second Brain powered by Model Context Protocol (MCP), Google Gemini Embedding 2, and Supabase pgvector, deployed on Vercel.
Connect any MCP-compatible AI client (Claude, Cursor, OpenCode, Copilot, etc.) and give it persistent long-term memory. Store text, images, PDFs, audio, and video, all embedded in a unified vector space for cross-modal semantic search.
Architecture
```
AI Client (Claude / Cursor / OpenCode / Copilot)
        │   MCP Protocol (Streamable HTTP + SSE)
        │   Authorization: Bearer <api-key>
        ▼
┌────────────────────────────────────────────┐
│  Vercel (Next.js)                          │
│  /api/mcp/[transport]                      │
│                                            │
│  ┌── Auth Middleware ──┐                   │
│  │ Bearer token check  │                   │
│  └─────────────────────┘                   │
│                                            │
│  Tools:                                    │
│   • store_memory (text)                    │
│   • store_file (base64 upload)             │
│   • store_file_from_url (URL fetch)        │
│   • search_memory (cross-modal)            │
│   • get_file_url (signed download)         │
│   • list_memories                          │
│   • update_memory                          │
│   • delete_memory                          │
│   • get_stats                              │
│                                            │
│  REST Endpoint:                            │
│   • POST /api/upload (direct file)         │
└──────────┬─────────────┬───────────────────┘
           │             │
     ┌─────┴─────┐   ┌───┴───────────┐
     ▼           ▼   ▼               ▼
┌─────────┐  ┌──────────────┐  ┌───────────┐
│ Gemini  │  │  Supabase    │  │ Supabase  │
│ Embed 2 │  │  PostgreSQL  │  │ Storage   │
│  API    │  │  + pgvector  │  │ (files)   │
│         │  │  vector(768) │  │           │
└─────────┘  └──────────────┘  └───────────┘
```

Multimodal Embedding
Gemini Embedding 2 maps all modalities into the same 768-dimension vector space. This means:
- A text query like "architecture diagram" can find a stored PNG image
- Searching for "meeting notes" can return an audio recording of a meeting
- A PDF of a research paper and a text summary live side by side in the same search space
Supported File Types
| Modality | MIME Types | Limits |
| --- | --- | --- |
| Image |  | Up to 6 per request |
| PDF |  | Up to 6 pages |
| Audio |  | — |
| Video |  | Up to 120 seconds |
Interleaved Embedding
When you provide a description alongside a file, the system creates an interleaved embedding: a single vector that captures both the visual/audio content AND your text description. This produces significantly richer search results than embedding the file alone.
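A minimal sketch of how such an interleaved input might be assembled, assuming a parts-style request shape. The `Part` type and `buildInterleavedParts` helper are illustrative only, not the exact Gemini SDK schema:

```typescript
// Illustrative sketch: interleaving a text description with file bytes
// so both contribute to one embedding. NOT the exact Gemini API schema.
type Part =
  | { text: string }
  | { inlineData: { mimeType: string; data: string } }; // base64 payload

function buildInterleavedParts(
  description: string | undefined,
  base64Data: string,
  mimeType: string
): Part[] {
  const parts: Part[] = [];
  // A leading text description lets the model fuse semantics from both
  // modalities into a single vector instead of embedding the file alone.
  if (description) parts.push({ text: description });
  parts.push({ inlineData: { mimeType, data: base64Data } });
  return parts;
}

const parts = buildInterleavedParts(
  "System architecture diagram",
  "iVBORw0KGgo...", // truncated base64 PNG
  "image/png"
);
console.log(parts.length); // 2
```

When no description is given, the file is embedded on its own, which is why the tool docs below recommend always supplying one.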
How It Works
1. You say (in Claude/Cursor/etc.): "Remember that the EBR system uses Azure Functions for the API layer"
2. The MCP client calls your Digital Brain's `store_memory` tool
3. Gemini Embedding 2 converts the text into a 768-dimension vector
4. Supabase stores the text and its vector in PostgreSQL with pgvector
5. Later, you ask: "What tech does the EBR system use?"
6. `search_memory` embeds your query, runs a cosine similarity search, and returns the matching memory
For files, the flow is the same, except that the file bytes are sent to Gemini for multimodal embedding and the raw file is stored in Supabase Storage, with a signed download URL generated on retrieval.
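The ranking step can be illustrated with the underlying math. In production, pgvector computes cosine distance inside the database (its `<=>` operator returns 1 minus this similarity); the TypeScript version below exists purely to show how matches are scored:

```typescript
// Cosine similarity: 1 for identical directions, 0 for orthogonal ones.
// pgvector evaluates the equivalent expression server-side over vector(768).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [2, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 5])); // 0
```

A `min_similarity` threshold of 0.4 (the search default below) simply discards results whose score falls under that value.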
Security Model
The server uses Bearer token authentication on every request:
- Fail-closed: If no API keys are configured, ALL requests are rejected
- Multi-key support: Set multiple comma-separated keys in `DIGITAL_BRAIN_API_KEYS` so each client gets its own key (and you can rotate them independently)
- Row Level Security (RLS): Enabled on the Supabase `memories` table; only `service_role` can access data. The anon key has zero access.
- Service Role Key: Only stored server-side in Vercel env vars, never exposed to clients
- Private Storage: The `brain-files` bucket is private; files are only accessible via time-limited signed URLs (1 hour expiry)
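A sketch of what the fail-closed bearer check can look like. This is an assumed implementation for illustration; the repo's actual middleware may differ:

```typescript
// Fail-closed bearer-token check against comma-separated API keys
// (illustrative; the repo's real middleware may be structured differently).
function isAuthorized(
  authHeader: string | null,
  keysEnv: string | undefined // e.g. process.env.DIGITAL_BRAIN_API_KEYS
): boolean {
  const keys = (keysEnv ?? "")
    .split(",")
    .map((k) => k.trim())
    .filter((k) => k.length > 0);
  // Fail closed: if no keys are configured, nobody gets in.
  if (keys.length === 0) return false;
  if (!authHeader?.startsWith("Bearer ")) return false;
  const token = authHeader.slice("Bearer ".length).trim();
  return keys.includes(token);
}

console.log(isAuthorized("Bearer abc", "abc,def")); // true
console.log(isAuthorized("Bearer abc", undefined)); // false (fail-closed)
```
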
Generating API Keys
```
# Generate a strong 256-bit key
openssl rand -hex 32
```

Tech Stack
| Component | Technology | Purpose |
| --- | --- | --- |
| Embeddings | Gemini Embedding 2 | Multimodal embeddings: text, images, audio, video, PDF all in one vector space |
| Vector DB | Supabase + pgvector | PostgreSQL with vector similarity search (HNSW index, cosine distance) |
| File Storage | Supabase Storage | Private bucket for images, PDFs, audio, video with signed URL access |
| MCP Server | Next.js | Exposes tools via MCP protocol with SSE transport |
| Hosting | Vercel | Serverless deployment, auto-scaling, scale-to-zero |
| Session Store | Upstash Redis (via Vercel KV) | Redis-backed SSE session management |
| Auth | Bearer token middleware | API key validation on every request |
Why 768 dimensions?
Gemini Embedding 2 outputs 3072 dimensions by default but supports Matryoshka Representation Learning (MRL), so you can truncate to 768 dimensions with minimal quality loss. This saves ~75% of the storage and makes queries significantly faster, which matters far more for a personal knowledge base than that last fraction of accuracy.
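The truncation itself is simple: keep the leading dimensions and re-normalize so cosine similarity stays well-behaved. A sketch, assuming the truncation happens in application code (the Gemini API may also support requesting a lower output dimensionality directly):

```typescript
// Matryoshka-style truncation sketch: keep the first `dims` values,
// then L2-renormalize so cosine comparisons remain meaningful.
function truncateEmbedding(vec: number[], dims: number): number[] {
  const head = vec.slice(0, dims);
  const norm = Math.sqrt(head.reduce((sum, x) => sum + x * x, 0));
  return norm === 0 ? head : head.map((x) => x / norm);
}

// 3072-dim model output -> 768-dim stored vector
const full = Array.from({ length: 3072 }, () => Math.random() - 0.5);
const small = truncateEmbedding(full, 768);
console.log(small.length); // 768
```
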
MCP Tools Reference
store_memory
Save text-based knowledge to the Digital Brain.
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
|  | string | ✓ | The text content to store |
|  | string |  | Where it came from |
|  | string[] |  | Tags for categorization |
|  | enum |  |  |
|  | object |  | Arbitrary structured metadata |
store_file
Store an image, PDF, audio, or video file via base64-encoded data. The file is embedded with Gemini Embedding 2 in the same vector space as text memories.
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
|  | string | ✓ | Base64-encoded file content |
|  | string | ✓ | Original filename with extension |
|  | string | ✓ | MIME type (see Supported File Types above) |
|  | string |  | Text description; creates a richer interleaved embedding. Highly recommended. |
|  | string |  | Source attribution |
|  | string[] |  | Tags for categorization |
|  | object |  | Arbitrary structured metadata |
store_file_from_url
Fetch a file from a URL and store it with a multimodal embedding. Downloads the file, embeds it, and saves to Supabase Storage.
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
|  | string | ✓ | URL of the file to download |
|  | string |  | Text description for interleaved embedding |
|  | string |  | Override filename (derived from URL if omitted) |
|  | string |  | Source attribution (defaults to the URL) |
|  | string[] |  | Tags for categorization |
|  | object |  | Arbitrary structured metadata |
search_memory
Semantic search across ALL modalities ā text, images, PDFs, audio, video. Your text query is embedded and matched against everything in the brain.
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
|  | string | ✓ | Natural language search query |
|  | number |  | Max results (default 10, max 50) |
|  | number |  | Minimum similarity 0–1 (default 0.4) |
|  | string[] |  | Only return memories with at least one matching tag |
|  | enum |  | Filter by type |
File-based results include file_name, file_mime_type, file_size_bytes, and a signed file_url for download.
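For illustration, a file-backed result might look like the following. The file-attribute field names come from the sentence above, while `id`, `content`, and `similarity` are assumed names for the remaining fields; the exact payload may differ:

```typescript
// Assumed shape of a file-backed search result (illustrative only;
// only the file_* field names are documented above).
interface FileSearchResult {
  id: number;              // memory ID, usable with get_file_url
  content: string;         // stored description / text
  similarity: number;      // cosine similarity, 0–1
  file_name: string;
  file_mime_type: string;
  file_size_bytes: number;
  file_url: string;        // signed URL, valid ~1 hour
}

const example: FileSearchResult = {
  id: 42,
  content: "System architecture diagram",
  similarity: 0.83,
  file_name: "diagram.png",
  file_mime_type: "image/png",
  file_size_bytes: 204800,
  file_url: "https://example.com/signed-file-url",
};
console.log(example.file_mime_type); // "image/png"
```
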
get_file_url
Get a temporary signed download URL for a stored file (valid 1 hour).
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
|  | number | ✓ | The memory ID that has a file attached |
list_memories
Browse memories with optional filters. Includes both text and file-based memories.
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
|  | enum |  | Filter by type |
|  | string[] |  | Filter by tags |
|  | number |  | Max results (default 20, max 100) |
|  | number |  | Pagination offset |
update_memory
Modify an existing memory. If content changes, a new embedding is generated automatically.
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
|  | number | ✓ | Memory ID (from search/list results) |
|  | string |  | New content (re-embeds automatically) |
|  | string[] |  | Replace tags |
|  | string |  | Update source |
|  | object |  | Replace metadata |
delete_memory
Permanently remove a memory by ID. If it has a file, the file is also deleted from Supabase Storage.
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
|  | number | ✓ | Memory ID to delete |
get_stats
Get brain statistics: total count, breakdown by content type (including file types), and top tags.
No parameters.
Setup Guide
Prerequisites
- Node.js 18+
- A Supabase account (free tier works)
- A Google AI Studio API key (free tier)
- A Vercel account (free Hobby plan works)
Step 1: Clone the Repo
```
git clone https://github.com/dswillden/digital-brain-mcp.git
cd digital-brain-mcp
npm install
```

Step 2: Set Up Supabase
1. Create a new Supabase project (or use an existing one)
2. Go to SQL Editor in the Supabase dashboard
3. Run `supabase/migrations/001_create_memories.sql` to create the base schema
4. Run `supabase/migrations/002_multimodal_upgrade.sql` to add the file columns and update the search functions
Create the Storage Bucket:

1. Go to Storage in the Supabase dashboard
2. Click New bucket
3. Name: `brain-files`
4. Public bucket: OFF (keep it private)
5. File size limit: 50 MB (adjust as needed)
6. Click Create bucket
Get your credentials from Supabase → Settings → API:

- `SUPABASE_URL`: the Project URL
- `SUPABASE_SERVICE_ROLE_KEY`: the `service_role` secret (NOT the anon key)
Step 3: Get a Gemini API Key
1. Go to Google AI Studio
2. Create a new API key
3. Save it as `GEMINI_API_KEY`
Step 4: Generate Your MCP API Key
```
openssl rand -hex 32
```

Save the output as `DIGITAL_BRAIN_API_KEYS`.
Step 5: Local Development
```
# Create .env.local with your keys
cp .env.example .env.local
# Edit .env.local with your actual values

# Start the dev server
npm run dev
```

The MCP endpoint will be at http://localhost:3000/api/mcp/sse.
Step 6: Deploy to Vercel
1. Push the repo to GitHub
2. Import the project in Vercel
3. Set environment variables in the Vercel dashboard:
   - `DIGITAL_BRAIN_API_KEYS`: your generated key(s)
   - `GEMINI_API_KEY`: your Google AI key
   - `SUPABASE_URL`: your Supabase project URL
   - `SUPABASE_SERVICE_ROLE_KEY`: your Supabase service role key
4. Create a KV (Redis) store: Vercel dashboard → Storage → Create KV Database (this auto-sets `REDIS_URL`)
5. Deploy!
Your production MCP endpoint: https://digital-brain-mcp.vercel.app/api/mcp/sse
Connecting AI Clients
Claude Desktop / Claude Code
Add to your Claude MCP config (~/.claude/claude_desktop_config.json or project .mcp.json):
```json
{
  "mcpServers": {
    "digital-brain": {
      "type": "stdio",
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://digital-brain-mcp.vercel.app/api/mcp/sse",
        "--header",
        "Authorization:Bearer YOUR_API_KEY_HERE"
      ]
    }
  }
}
```

Cursor
Go to Settings → Cursor Settings → Tools & MCP → Add Server:

- Type: SSE
- URL: https://digital-brain-mcp.vercel.app/api/mcp/sse
- Headers: `Authorization: Bearer YOUR_API_KEY_HERE`
OpenCode
Add to your OpenCode MCP config (.opencode/config.json or equivalent):
```json
{
  "mcp": {
    "servers": {
      "digital-brain": {
        "type": "remote",
        "url": "https://digital-brain-mcp.vercel.app/api/mcp/mcp",
        "headers": {
          "Authorization": "Bearer YOUR_API_KEY_HERE"
        }
      }
    }
  }
}
```

Any Other MCP Client
Use the SSE endpoint https://digital-brain-mcp.vercel.app/api/mcp/sse with an Authorization: Bearer <key> header.
Project Structure
```
digital-brain-mcp/
├── src/
│   ├── app/
│   │   ├── api/
│   │   │   ├── mcp/
│   │   │   │   └── [transport]/
│   │   │   │       └── route.ts       ← MCP endpoint (9 tools + auth)
│   │   │   └── upload/
│   │   │       └── route.ts           ← Direct file upload endpoint (POST /api/upload)
│   │   ├── layout.tsx                 ← Root layout
│   │   └── page.tsx                   ← Landing page
│   └── lib/
│       ├── embeddings.ts              ← Gemini Embedding 2 multimodal client
│       └── supabase.ts                ← Supabase client + data helpers + file storage
├── docs/
│   ├── setup-guide.md                 ← Step-by-step setup instructions
│   ├── technical-spec.md              ← Detailed spec for AI agents to understand/recreate
│   └── explainer.md                   ← Beginner-friendly guide with diagrams
├── supabase/
│   └── migrations/
│       ├── 001_create_memories.sql    ← Base schema (text only)
│       └── 002_multimodal_upgrade.sql ← File columns + updated functions
├── .env.example                       ← Template for environment variables
├── .mcp.json                          ← MCP client connection config
├── package.json
├── tsconfig.json
├── next.config.js
└── README.md                          ← This file
```

Example Usage
Once connected, you can say things like:
- "Remember that the EBR system uses Azure Functions for the API layer" → calls `store_memory` with appropriate tags
- "Store this screenshot of the dashboard" (with image attached) → calls `store_file` with the image, creates a multimodal embedding
- "Save this PDF from https://example.com/report.pdf" → calls `store_file_from_url`, downloads and embeds the PDF
- Upload a local file directly (from terminal):

  ```
  curl -X POST https://digital-brain-mcp.vercel.app/api/upload \
    -H "Authorization: Bearer YOUR_API_KEY" \
    -F "file=@./diagram.png" \
    -F "description=System architecture diagram" \
    -F "tags=work,architecture"
  ```

- "What do I know about authentication patterns?" → calls `search_memory`, finds text AND image/PDF results across modalities
- "Show me all my stored images" → calls `list_memories` with `content_type: "image"`
- "Get the download link for memory #42" → calls `get_file_url`, returns a signed URL valid for 1 hour
- "How many memories do I have?" → calls `get_stats`, shows breakdown by type including file counts
Cost Estimate
| Service | Free Tier | Paid Threshold |
| --- | --- | --- |
| Supabase | 500 MB database, 1 GB storage | ~650K text memories or ~1K large files before hitting the limit |
| Vercel | Hobby plan (100 GB bandwidth) | Heavy team usage |
| Gemini API | Generous free quota | Thousands of embeddings/day |
| Upstash Redis | 10K commands/day | Heavy concurrent sessions |
For personal second-brain use, everything stays well within free tiers.
Direct File Upload (REST API)
In addition to the MCP tools, there's a simple REST endpoint for uploading files directly from your terminal or any HTTP client, with no base64 encoding needed:
```
curl -X POST https://digital-brain-mcp.vercel.app/api/upload \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "file=@/path/to/photo.jpg" \
  -F "description=Team photo from Q1 offsite" \
  -F "tags=team,photos" \
  -F "source=manual-upload"
```

| Field | Required | Description |
| --- | --- | --- |
| file | Yes | The file to upload (multipart form) |
| description | No | Text description; improves search quality significantly |
| tags | No | Comma-separated tags |
| source | No | Where it came from (defaults to "file-upload") |
|  | No | JSON string for extra structured data |
Your AI clients (Claude Code, Cursor, OpenCode) can also run this curl command on your behalf when you ask them to upload a local file.
Documentation
Detailed docs are in the docs/ folder:
| Document | Audience | Description |
| --- | --- | --- |
| setup-guide.md | You | Step-by-step setup with full SQL, Vercel deploy, and client configs |
| technical-spec.md | AI agents | Exhaustive specification: enough for an AI to understand, maintain, or recreate the system |
| explainer.md | Beginners | What embeddings, vectors, MCP, and Supabase are, with diagrams and analogies |
Future Enhancements
- Auto-tagging: Use an LLM to suggest tags for new memories
- Bulk import: CLI tool to import from Obsidian, Notion, or markdown files
- Scheduled embedding refresh: Re-embed old memories when the model improves
- Multi-user support: Add a user_id column and JWT auth for shared deployments
- OCR fallback: Extract text from images/PDFs for enhanced text search
License
MIT