@remember-md/mcp

by remember-md

Local MCP server for the Remember.md second brain. Run via npx, point any MCP client at it, query your markdown brain semantically.

Status: v0.0.1 — skeleton. Not yet functional. Active development.

What it does

Exposes your local markdown brain (a folder of .md files organised PARA-style by the Remember.md plugin) as a set of MCP tools any MCP client can call — Claude Code, OpenClaw, Cursor, Codex CLI, Claude.ai web, ChatGPT custom GPTs, anything that speaks the Model Context Protocol.

Planned tools (a registration sketch follows the list):

  • search_brain(query, top_k) — semantic + BM25 hybrid + wikilink-expand

  • get_file(path) — read a brain file

  • list_recent(period, kind?) — recent journal / notes / decisions

  • query_persona() — current Persona.md content

  • dashboard_snapshot() — counts + top beliefs + active projects

  • propose_belief(claim, evidence) — write candidate to Inbox/
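
For a feel of the tool surface, here is a minimal sketch of registering the first of these with the official @modelcontextprotocol/sdk; hybridSearch is a hypothetical stand-in for the retrieval pipeline described under "How it works", not the actual implementation:

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical stand-in for the hybrid retriever described below.
async function hybridSearch(query: string, topK: number): Promise<unknown[]> {
  return [];
}

const server = new McpServer({ name: "remember", version: "0.0.1" });

// search_brain: validate arguments with zod, run the retriever,
// and return the hits as MCP text content.
server.tool(
  "search_brain",
  { query: z.string(), top_k: z.number().int().positive().default(8) },
  async ({ query, top_k }) => ({
    content: [
      {
        type: "text",
        text: JSON.stringify(await hybridSearch(query, top_k), null, 2),
      },
    ],
  })
);

// MCP clients launch the server over stdio (see Install).
await server.connect(new StdioServerTransport());
```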

How it works

  • Storage: node:sqlite (Node 22.5+ stdlib) + sqlite-vec extension for vector search + FTS5 for BM25 — no server, no native compilation, no toolchain (sketched after this list).

  • Embeddings: @huggingface/transformers running quantized Xenova/bge-micro-v2 (384d, ~17 MB) locally — no cloud calls.

  • Sync: on-demand mtime + content-hash incremental reindex at query time. The brain (markdown) is the source of truth; the index in .remember/index.db is rebuildable.

  • Graceful degradation: if the sqlite-vec extension fails to load, search falls back to FTS5-only; if FTS5 also fails, it falls back to ripgrep.
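
A minimal TypeScript sketch of that tier selection, with assumptions flagged: DatabaseSync's allowExtension option and loadExtension() method shipped in Node releases newer than 22.5, getLoadablePath() is the sqlite-vec npm package's locator for its prebuilt extension, and the table shapes are illustrative only:

```ts
import { DatabaseSync } from "node:sqlite";
import * as sqliteVec from "sqlite-vec";

type Tier = "vec" | "fts5" | "ripgrep";

// Open the index and settle on the best available search tier.
function openIndex(path: string): { db?: DatabaseSync; tier: Tier } {
  let db: DatabaseSync;
  try {
    db = new DatabaseSync(path, { allowExtension: true });
  } catch {
    return { tier: "ripgrep" }; // no usable SQLite: shell out to rg
  }
  try {
    // sqlite-vec ships a prebuilt loadable extension; nothing to compile.
    db.loadExtension(sqliteVec.getLoadablePath());
    db.exec(
      "CREATE VIRTUAL TABLE IF NOT EXISTS chunks_vec USING vec0(embedding float[384])"
    );
    return { db, tier: "vec" };
  } catch {
    try {
      // Vector search unavailable: BM25 over FTS5 still works.
      db.exec(
        "CREATE VIRTUAL TABLE IF NOT EXISTS chunks_fts USING fts5(path, body)"
      );
      return { db, tier: "fts5" };
    } catch {
      return { tier: "ripgrep" };
    }
  }
}
```

And a sketch of the local embedding step with @huggingface/transformers, where dtype: "q8" requests the quantized ONNX weights and pooling/normalization yield the 384-dimension vector that sqlite-vec stores:

```ts
import { pipeline } from "@huggingface/transformers";

// First call downloads the model (~17 MB); everything after runs in-process.
const embed = await pipeline("feature-extraction", "Xenova/bge-micro-v2", {
  dtype: "q8",
});

// Mean-pooled, L2-normalised embedding of a query.
const out = await embed("what did I decide about the search stack?", {
  pooling: "mean",
  normalize: true,
});
const vector = Array.from(out.data as Float32Array); // length 384
```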

Install

You don't install it. Point your MCP client at it via npx:

Claude Code (via the Remember.md plugin's /remember:init)

The Remember.md plugin automatically configures Claude Code's MCP layer to launch this server. Just run /remember:init.
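
If you prefer to register it by hand instead of using the plugin, Claude Code's standard claude mcp add command takes the same npx invocation (the brain path below is a placeholder):

```bash
claude mcp add remember -e REMEMBER_BRAIN_PATH=/absolute/path/to/your/brain -- npx -y @remember-md/mcp
```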

Cursor / Codex / other MCP clients

Add to your MCP config:

```json
{
  "mcpServers": {
    "remember": {
      "command": "npx",
      "args": ["-y", "@remember-md/mcp"],
      "env": {
        "REMEMBER_BRAIN_PATH": "/absolute/path/to/your/brain"
      }
    }
  }
}
```

First run downloads the package (~15–30s) and the embedding model (~17 MB, one-time). After that, queries are sub-second.

Configuration

| Env var | Default | Purpose |
| --- | --- | --- |
| REMEMBER_BRAIN_PATH | ~/remember | Brain root directory (folder of markdown files) |
| REMEMBER_INDEX_DIR | ${brain}/.remember | Where the SQLite index lives |
| REMEMBER_EMBEDDING_MODEL | Xenova/bge-micro-v2 | Hugging Face model id |
| REMEMBER_TIER | auto | auto / vec / fts5 / ripgrep (force a fallback tier) |
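
For a one-off run outside an MCP client, for example to check that the FTS5 fallback tier behaves, the same variables can be set on the command line (the brain path is a placeholder):

```bash
REMEMBER_BRAIN_PATH=/absolute/path/to/your/brain REMEMBER_TIER=fts5 npx -y @remember-md/mcp
```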

Privacy

Local-only. No cloud calls. No telemetry. The brain folder + index never leave your machine. Embedding model runs in-process via ONNX Runtime.

License

MIT — see LICENSE.
