
🧠 Digital Brain MCP

A Second Brain powered by Model Context Protocol (MCP), Google Gemini Embedding 2, and Supabase pgvector — deployed on Vercel.

Connect any MCP-compatible AI client (Claude, Cursor, OpenCode, Copilot, etc.) and give it persistent long-term memory. Store notes, code, research, decisions, and any knowledge — then recall it instantly with semantic search.


Architecture

AI Client (Claude / Cursor / OpenCode / Copilot)
        │
        ▼  MCP Protocol (Streamable HTTP + SSE)
        │  Authorization: Bearer <api-key>
┌──────────────────────────────┐
│   Vercel (Next.js)           │
│   /api/mcp/[transport]       │
│                              │
│   ┌── Auth Middleware ──┐    │
│   │  Bearer token check │    │
│   └─────────────────────┘    │
│                              │
│   Tools:                     │
│    • store_memory            │
│    • search_memory           │
│    • list_memories           │
│    • update_memory           │
│    • delete_memory           │
│    • get_stats               │
└──────────┬───────────────────┘
           │
     ┌─────┴─────┐
     ▼           ▼
┌─────────┐  ┌──────────────┐
│ Gemini  │  │  Supabase    │
│ Embed 2 │  │  PostgreSQL  │
│  API    │  │  + pgvector  │
└─────────┘  │  vector(768) │
             └──────────────┘

How It Works

  1. You say (in Claude/Cursor/etc): "Remember that the EBR system uses Azure Functions for the API layer"

  2. MCP client calls your Digital Brain's store_memory tool

  3. Gemini Embedding 2 converts the text into a 768-dimension vector

  4. Supabase stores the text + vector in PostgreSQL with pgvector

  5. Later, you ask: "What tech does the EBR system use?"

  6. search_memory embeds your query, runs cosine similarity search, returns the matching memory
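The recall step boils down to cosine similarity between the query embedding and each stored embedding. A minimal TypeScript sketch of that comparison (illustrative only — in this project the search actually runs inside Postgres via pgvector, and these function names are made up):

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored memories against a query embedding, dropping weak matches.
// Mirrors the default threshold (0.4) used by search_memory.
function rankMemories(
  query: number[],
  memories: { id: number; embedding: number[] }[],
  threshold = 0.4
): { id: number; similarity: number }[] {
  return memories
    .map((m) => ({ id: m.id, similarity: cosineSimilarity(query, m.embedding) }))
    .filter((m) => m.similarity >= threshold)
    .sort((x, y) => y.similarity - x.similarity);
}
```

pgvector does the same math with an HNSW index so it scales past what a linear scan like this could handle.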


Security Model

The server uses Bearer token authentication on every request:

  • Fail-closed: If no API keys are configured, ALL requests are rejected

  • Multi-key support: Set multiple comma-separated keys in DIGITAL_BRAIN_API_KEYS so each client gets its own key (and you can rotate independently)

  • Row Level Security (RLS): Enabled on the Supabase memories table — only service_role can access data. The anon key has zero access.

  • Service Role Key: Only stored server-side in Vercel env vars, never exposed to clients
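The fail-closed, multi-key check can be sketched as a pure function. This is an illustrative approximation, not the actual src/lib/auth.ts code — the env var name matches the README, but the function name and shape are hypothetical:

```typescript
// Fail-closed Bearer token check: no configured keys means every
// request is rejected, regardless of what the client sends.
function isAuthorized(
  authHeader: string | null,
  keysEnv: string | undefined // e.g. process.env.DIGITAL_BRAIN_API_KEYS
): boolean {
  // Parse the comma-separated key list, ignoring empty entries.
  const keys = (keysEnv ?? "")
    .split(",")
    .map((k) => k.trim())
    .filter((k) => k.length > 0);
  if (keys.length === 0) return false; // fail closed

  if (!authHeader?.startsWith("Bearer ")) return false;
  const token = authHeader.slice("Bearer ".length).trim();
  return keys.includes(token);
}
```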

Generating API Keys

# Generate a strong 256-bit key
openssl rand -hex 32

Tech Stack

| Component | Technology | Purpose |
|---|---|---|
| Embeddings | Gemini Embedding 2 (gemini-embedding-2-preview) | Multimodal embeddings — text, images, audio, video, PDF all in one vector space |
| Vector DB | Supabase + pgvector | PostgreSQL with vector similarity search (HNSW index, cosine distance) |
| MCP Server | Next.js + mcp-handler | Exposes tools via MCP protocol with SSE transport |
| Hosting | Vercel | Serverless deployment, auto-scaling, scale-to-zero |
| Session Store | Upstash Redis (via Vercel KV) | Redis-backed SSE session management |
| Auth | Bearer token middleware | API key validation on every request |

Why 768 dimensions?

Gemini Embedding 2 outputs 3072 dimensions by default but supports Matryoshka Representation Learning (MRL) — you can truncate to 768 with minimal quality loss. This saves ~75% storage and makes queries significantly faster, which matters a lot more for a personal knowledge base than that last fraction of accuracy.
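What MRL truncation amounts to: keep the first 768 components and re-normalize to unit length so cosine similarity stays meaningful. A sketch (illustrative only — in practice you would request 768 dimensions directly via the embedding API's output-dimensionality option rather than truncating client-side):

```typescript
// Truncate a full-size embedding to the leading `dims` components
// and re-normalize, as MRL-trained models permit.
function truncateEmbedding(full: number[], dims = 768): number[] {
  const sliced = full.slice(0, dims);
  const norm = Math.sqrt(sliced.reduce((s, x) => s + x * x, 0));
  // A zero vector cannot be normalized; return it unchanged.
  return norm === 0 ? sliced : sliced.map((x) => x / norm);
}
```

At 4 bytes per float, 768 dims is ~3 KB per memory versus ~12 KB at 3072 — the ~75% saving cited above.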


MCP Tools Reference

store_memory

Save a new piece of knowledge to the Digital Brain.

| Parameter | Type | Required | Description |
|---|---|---|---|
| content | string | ✅ | The text content to store |
| source | string | | Where it came from (e.g. "conversation", "web-research", a URL) |
| tags | string[] | | Tags for categorization (e.g. ["work", "azure", "ebr"]) |
| content_type | enum | | text, note, code, conversation, research, decision, reference |
| metadata | object | | Arbitrary structured metadata |
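As a concrete illustration, a store_memory tool call using these parameters might look like this (all values are made up for the example):

```json
{
  "name": "store_memory",
  "arguments": {
    "content": "The EBR system uses Azure Functions for the API layer",
    "source": "conversation",
    "tags": ["work", "azure", "ebr"],
    "content_type": "note"
  }
}
```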

search_memory

Semantic search across everything stored. Your query is embedded and matched by cosine similarity.

| Parameter | Type | Required | Description |
|---|---|---|---|
| query | string | ✅ | Natural language search query |
| limit | number | | Max results (default 10, max 50) |
| threshold | number | | Minimum similarity 0–1 (default 0.4) |
| filter_tags | string[] | | Only return memories with at least one matching tag |
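For example, the "What tech does the EBR system use?" question from the walkthrough would translate into a call like this (limit and tag values are illustrative):

```json
{
  "name": "search_memory",
  "arguments": {
    "query": "What tech does the EBR system use?",
    "limit": 5,
    "threshold": 0.4,
    "filter_tags": ["work"]
  }
}
```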

list_memories

Browse memories with optional filters (no embedding needed).

| Parameter | Type | Required | Description |
|---|---|---|---|
| content_type | string | | Filter by type |
| tags | string[] | | Filter by tags |
| limit | number | | Max results (default 20, max 100) |
| offset | number | | Pagination offset |

update_memory

Modify an existing memory. If content changes, a new embedding is generated automatically.

| Parameter | Type | Required | Description |
|---|---|---|---|
| id | number | ✅ | Memory ID (from search/list results) |
| content | string | | New content (re-embeds automatically) |
| tags | string[] | | Replace tags |
| source | string | | Update source |
| metadata | object | | Replace metadata |

delete_memory

Permanently remove a memory by ID.

| Parameter | Type | Required | Description |
|---|---|---|---|
| id | number | ✅ | Memory ID to delete |

get_stats

Get brain statistics: total count, breakdown by type, and top tags.

No parameters.


Setup Guide

Prerequisites

  • Node.js 18+ and npm

  • A Supabase account (free tier works)

  • A Google AI Studio account (for the Gemini API key)

  • A GitHub and Vercel account (for deployment)

Step 1: Clone the Repo

git clone https://github.com/YOUR_USERNAME/digital-brain-mcp.git
cd digital-brain-mcp
npm install

Step 2: Set Up Supabase

  1. Create a new Supabase project (or use an existing one)

  2. Go to SQL Editor in the Supabase dashboard

  3. Copy the contents of supabase/migrations/001_create_memories.sql

  4. Paste and run the entire SQL script

  5. This creates: the memories table, pgvector extension, HNSW index, search functions, RLS policies, and stat helpers

Get your credentials from Supabase → Settings → API:

  • SUPABASE_URL — the Project URL

  • SUPABASE_SERVICE_ROLE_KEY — the service_role secret (NOT the anon key)

Step 3: Get a Gemini API Key

  1. Go to Google AI Studio

  2. Create a new API key

  3. Save it as GEMINI_API_KEY

Step 4: Generate Your MCP API Key

openssl rand -hex 32

Save the output as DIGITAL_BRAIN_API_KEYS.

Step 5: Local Development

# Create .env.local with your keys
cp .env.example .env.local
# Edit .env.local with your actual values

# Start the dev server
npm run dev

The MCP endpoint will be at http://localhost:3000/api/mcp/sse.

Step 6: Deploy to Vercel

  1. Push the repo to GitHub

  2. Import the project in Vercel

  3. Set environment variables in Vercel dashboard:

    • DIGITAL_BRAIN_API_KEYS — your generated key(s)

    • GEMINI_API_KEY — your Google AI key

    • SUPABASE_URL — your Supabase project URL

    • SUPABASE_SERVICE_ROLE_KEY — your Supabase service role key

  4. Create a KV (Redis) store: Vercel dashboard → Storage → Create KV Database

    • This auto-sets REDIS_URL

  5. Set a firewall bypass for MCP: Settings → Security → Firewall → Add rule:

    • Condition: "Request path contains /api/mcp"

    • Action: "Bypass"

  6. Deploy!

Your production MCP endpoint: https://digital-brain-mcp.vercel.app/api/mcp/sse


Connecting AI Clients

Claude Desktop / Claude Code

Add to your Claude MCP config (~/.claude/claude_desktop_config.json or project .mcp.json):

{
  "mcpServers": {
    "digital-brain": {
      "type": "stdio",
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://digital-brain-mcp.vercel.app/api/mcp/sse",
        "--header",
        "Authorization:Bearer YOUR_API_KEY_HERE"
      ]
    }
  }
}

Cursor

Go to Settings → Cursor Settings → Tools & MCP → Add Server:

  • Type: SSE

  • URL: https://digital-brain-mcp.vercel.app/api/mcp/sse

  • Headers: Authorization: Bearer YOUR_API_KEY_HERE

OpenCode / Any MCP Client

Use the SSE endpoint https://digital-brain-mcp.vercel.app/api/mcp/sse with an Authorization: Bearer <key> header.

Perplexity / Computer

Connect via the MCP config pattern above, or access the Supabase database directly through an existing connector.


Project Structure

digital-brain-mcp/
├── src/
│   ├── app/
│   │   ├── api/
│   │   │   └── mcp/
│   │   │       └── [transport]/
│   │   │           └── route.ts    ← MCP endpoint (tools + auth)
│   │   ├── layout.tsx              ← Root layout
│   │   └── page.tsx                ← Landing page
│   └── lib/
│       ├── auth.ts                 ← Bearer token authentication
│       ├── embeddings.ts           ← Gemini Embedding 2 client
│       └── supabase.ts             ← Supabase client + data helpers
├── supabase/
│   └── migrations/
│       └── 001_create_memories.sql ← Full database schema
├── .env.example                    ← Template for environment variables
├── .mcp.json                       ← MCP client connection config
├── package.json
├── tsconfig.json
├── next.config.js
└── README.md                       ← This file

Example Usage

Once connected, you can say things like:

  • "Remember that the Revvity Signals API uses OAuth 2.0 client credentials flow" → Calls store_memory with appropriate tags

  • "What do I know about authentication patterns?" → Calls search_memory, finds semantically related memories

  • "Show me all my code snippets" → Calls list_memories with content_type: "code"

  • "How many memories do I have?" → Calls get_stats


Cost Estimate

| Service | Free Tier | Paid Threshold |
|---|---|---|
| Supabase | 500 MB database, 1 GB storage | ~650K memories at 768d before hitting limit |
| Vercel | Hobby plan (100 GB bandwidth) | Heavy team usage |
| Gemini API | Generous free quota | Thousands of embeddings/day |
| Upstash Redis | 10K commands/day | Heavy concurrent sessions |

For personal second-brain use, everything stays well within free tiers.


Future Enhancements

  • Multimodal storage: Store images/PDFs directly (Gemini Embedding 2 supports them natively)

  • Auto-tagging: Use an LLM to suggest tags for new memories

  • Bulk import: CLI tool to import from Obsidian, Notion, or markdown files

  • Scheduled embedding refresh: Re-embed old memories when the model improves

  • Multi-user support: Add user_id column and JWT auth for shared deployments


License

MIT
