@ideadesignmedia/memory-mcp
SQLite-backed memory for MCP agents. Ships a CLI and programmatic API.
Highlights
- Uses `sqlite3` (async) for broad prebuilt support; no brittle native build steps.
- Optional FTS5 indexing for better search; falls back to `LIKE` when unavailable.
- Input validation and sane limits to guard against oversized payloads.
- Auto-generates semantic embeddings via OpenAI when a key is provided; otherwise falls back to text-only scoring.
Install / Run
Quick run (no install):
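A quick-run invocation might look like the following; the database path and `--topk` value are placeholders, mirroring the flags used in the pnpm/yarn equivalents:

```shell
npx -y @ideadesignmedia/memory-mcp --db=./memory.db --topk=6
```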
Install locally (dev dependency) and run:
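A sketch of the local-install route; the database path is a placeholder:

```shell
# Add as a dev dependency, then run the locally installed binary
npm install -D @ideadesignmedia/memory-mcp
npx @ideadesignmedia/memory-mcp --db=./memory.db --topk=6
```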
Other ecosystem equivalents:

- pnpm: `pnpm dlx @ideadesignmedia/memory-mcp --db=... --topk=6`
- yarn (v2+): `yarn dlx @ideadesignmedia/memory-mcp --db=... --topk=6`
CLI usage
You can invoke it directly (if globally installed) or via npx as shown above.
Optional flags:

- `--embed-key=sk-...` supplies the embedding API key (same as `MEMORY_EMBEDDING_KEY`).
- `--embed-model=text-embedding-3-small` overrides the embedding model (same as `MEMORY_EMBED_MODEL`).
Codex config example
Using npx so no global install is required. Add to ~/.codex/config.toml:
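A minimal sketch, assuming Codex's standard `mcp_servers` table; the server name, database path, and `--topk` value are placeholders:

```toml
[mcp_servers.memory]
command = "npx"
args = ["-y", "@ideadesignmedia/memory-mcp", "--db=/path/to/memory.db", "--topk=6"]
```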
Programmatic API
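A hypothetical usage sketch based on the `createMemoryMcpServer` factory and the options documented under Embeddings; any option names beyond `embeddingProvider` and `embeddingApiKey` are assumptions, not confirmed by this README:

```javascript
// Hypothetical sketch -- only embeddingProvider / embeddingApiKey are
// documented options; consult the package's types for the full surface.
const { createMemoryMcpServer } = require('@ideadesignmedia/memory-mcp')

const server = createMemoryMcpServer({
  embeddingApiKey: process.env.MEMORY_EMBEDDING_KEY, // enable semantic embeddings
  // embeddingProvider: null,                        // or disable the built-in generator
})
```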
Tools
All tools are safe for STDIO. The server writes logs to stderr only.
memory-remember
Create a concise memory for an owner. Provide `ownerId`, `type` (slot), a short `subject`, and `content`. Optionally set `importance` (0–1), `ttlDays`, `pinned`, `consent`, `sensitivity` (tags), and `embedding`. The response is minimal for LLMs (no embeddings or extra metadata):

```json
{
  "id": "mem_...",
  "item": { "id": "mem_...", "type": "preference", "subject": "favorite color", "content": "blue" },
  "content": [
    { "type": "text", "text": "{\"id\":\"mem_...\",\"type\":\"preference\",\"subject\":\"favorite color\",\"content\":\"blue\"}" }
  ]
}
```
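For illustration, a request with the parameters described above might look like this; the `ownerId` and `importance` values are hypothetical:

```json
{
  "ownerId": "user-123",
  "type": "preference",
  "subject": "favorite color",
  "content": "blue",
  "importance": 0.6,
  "pinned": false
}
```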
memory-recall
Retrieve up to `k` relevant memories for an owner via text/semantic search. Accepts an optional natural-language `query`, an optional `embedding`, and an optional `slot` (type). The response is minimal per item: `{ id, type, subject, content }`.
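An illustrative recall request; the `ownerId` is hypothetical, and treating `k` as a literal parameter name follows the description above but is an assumption:

```json
{
  "ownerId": "user-123",
  "query": "what is the user's favorite color",
  "k": 3,
  "slot": "preference"
}
```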
memory-list
List recent memories for an owner, optionally filtered by `slot` (type). The response is minimal per item: `{ id, type, subject, content }`.
memory-forget
Delete a memory by `id`. Consider recalling or listing first if you need to verify the item.
memory-export
Export all memories for an owner as a JSON array. Useful for backup/migration.
Response items are minimal: `{ id, type, subject, content }`.
memory-import
Bulk import memories for an owner. Each item mirrors the memory schema (`type`, `subject`, `content`, metadata, optional `embedding`). Max 1000 items per call.
Embeddings
Embeddings are optional—without a key the server relies on text search and recency heuristics.
Set `MEMORY_EMBEDDING_KEY` (or pass `--embed-key=...` to the CLI) to automatically create embeddings when remembering/importing memories and to embed recall queries. The default model is `text-embedding-3-small`; override it with `MEMORY_EMBED_MODEL` or `--embed-model`. To disable the built-in generator when using the programmatic API, pass `embeddingProvider: null` to `createMemoryMcpServer`. To specify a key programmatically, pass `embeddingApiKey: "sk-..."`.
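For example, enabling embeddings at launch via the environment; the key and database path are placeholders:

```shell
MEMORY_EMBEDDING_KEY=sk-yourkey MEMORY_EMBED_MODEL=text-embedding-3-small \
  npx @ideadesignmedia/memory-mcp --db=./memory.db --topk=6
```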
Limits and validation
- memory-remember: `subject` max 160 chars, `content` max 1000, `sensitivity` up to 32 tags.
- memory-recall: optional `query` max 1000 chars; if omitted, listing is capped internally.
- memory-import: up to 1000 items per call; each item has the same field limits as remember.
Note: this is a local-only server; it depends on local resources and can only run on the client's machine.