
memem

Persistent, self-evolving memory for Claude Code. Stop re-explaining your project every session.


For LLM/AI tool discovery, see llms.txt.

  ███╗   ███╗███████╗███╗   ███╗███████╗███╗   ███╗
  ████╗ ████║██╔════╝████╗ ████║██╔════╝████╗ ████║
  ██╔████╔██║█████╗  ██╔████╔██║█████╗  ██╔████╔██║
  ██║╚██╔╝██║██╔══╝  ██║╚██╔╝██║██╔══╝  ██║╚██╔╝██║
  ██║ ╚═╝ ██║███████╗██║ ╚═╝ ██║███████╗██║ ╚═╝ ██║
  ╚═╝     ╚═╝╚══════╝╚═╝     ╚═╝╚══════╝╚═╝     ╚═╝
  persistent memory for Claude Code

What is memem?

memem is a Claude Code plugin that gives Claude persistent memory across sessions. A background miner extracts durable lessons (decisions, conventions, bug fixes, preferences) from your completed sessions, stores them as markdown in an Obsidian vault, and automatically surfaces relevant ones as an Active Memory Slice working state. An explicit narrative assembly path still exists, but the default runtime context is slice-first.

It's local-first: no cloud services, no API keys required, no vendor lock-in. Everything lives in ~/obsidian-brain/memem/memories/ as human-readable markdown.

What's new in v1.1

  • Layered memory becomes real end-to-end. Every memory now lives in one of four layers (L0/L1/L2/L3) at save time, not just at mining time. memory_save accepts an optional layer param (Claude can override) and auto-classifies otherwise. The slice engine pins L0 (project identity) on every prompt and gates L3 (rare archival) behind explicit search.

  • Slice as universal recall format. memory_search, memory_get, memory_timeline, memory_recall, and context_assemble all return slice-formatted output via a single render_slice_markdown dispatcher. context_assemble composes via active_memory_slice rather than rolling its own briefing.

What's new in v1.0 (miner hardening)

A 16-module refactor closed the entire spawn-storm class of bugs that had previously taken down hosts. The miner now uses start_new_session=True + os.killpg for process-group cleanup on timeout, an inverted TransientError/PermanentError taxonomy with PermanentError as default, persisted attempt counters with DLQ at MAX_FAILURES, a SIGTERM-drained graceful shutdown, SQLite WAL state storage, a hand-rolled circuit breaker, structured JSON logs with RotatingFileHandler, and a 5-in-60s wrapper crash guard.
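The process-group cleanup pattern mentioned above can be sketched as follows (a generic POSIX recipe, not memem's actual code): launching the child with `start_new_session=True` puts it in a fresh process group, so a timeout can kill the whole group, including any grandchildren the worker spawned.

```python
import os
import signal
import subprocess

def run_with_group_kill(cmd, timeout):
    # start_new_session=True gives the child its own session/process group
    proc = subprocess.Popen(cmd, start_new_session=True)
    try:
        return proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        # Kill the entire group, not just the direct child, so no
        # grandchild survives to cause a spawn storm
        os.killpg(os.getpgid(proc.pid), signal.SIGKILL)
        proc.wait()
        return None

# A worker that outlives its budget gets its whole group reaped
result = run_with_group_kill(["sleep", "30"], timeout=0.5)
```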

When should I use memem?

Use memem if:

  • You use Claude Code daily and keep re-explaining your project to every new session

  • You want durable memory you can browse and edit as markdown

  • You like local-first tools with zero vendor lock-in

  • You already use Obsidian (memem plugs straight into your vault)

How is memem different from CLAUDE.md?

CLAUDE.md is a single hand-edited file per project. memem gives you:

  • Automatic extraction — no manual note-taking, the miner captures lessons from every completed session

  • Query-aware context — only the memories relevant to your current question get injected, not a static dump

  • Self-evolving — memories merge, update, and deprecate automatically as your project evolves

  • Cross-project — works across every Claude Code project you use, not scoped to one repo

  • Security scanning — every write is scanned for prompt injection and credential exfiltration

  • Browsable — Obsidian vault with graph view and backlinks for free

Architecture — slice-first runtime

memem uses layered recall plus a slice-first runtime kernel inspired by claude-mem and mem0. Instead of treating memory as one big briefing, it first turns recall results into an explicit working state:

   Session start / user prompt
   ┌─────────────────────────────┐
   │ Candidate generation        │
   │   • memories / graph        │
   │   • playbooks               │
   │   • runtime environment     │
   │   • current artifacts       │
   └──────────┬──────────────────┘
              │
              ▼
   ┌─────────────────────────────┐
   │ Activation judgement        │
   │   • goals                   │
   │   • constraints             │
   │   • decisions / failures    │
   │   • artifacts / tensions    │
   └─────────────────────────────┘
              │
              ▼
   ┌─────────────────────────────┐
   │ Active Memory Slice         │ → rendered markdown working state
   │ generate_prompt_context()   │    used by hooks, MCP, and CLI
   └─────────────────────────────┘

The lower-level recall tools still exist for explicit drilling:

  1. memory_search(query) -> compact index

  2. memory_get(ids=[...]) -> full content

  3. memory_timeline(id) -> chronological thread

  4. context_assemble(query, project) -> optional secondary narrative briefing

Memory layers (auto-classified at save AND mining time; Claude can override):

| Layer | Purpose | Slice behavior |
|-------|---------|----------------|
| L0 | Project identity — tech stack, repo structure, core conventions | Always pinned in every active slice for that project (anchor score 0.95) |
| L1 | Generic conventions — testing, style, commit patterns | Ranked + scored alongside L2 |
| L2 | Domain-specific — most memories (default) | Ranked + scored (default search hits) |
| L3 | Rare/archival — niche failure modes, one-off lessons | Excluded from auto-recall; only via explicit memory_search/memory_get |

A heuristic (mining.py:classify_layer) assigns layers based on importance, structural tags, content length, and the per-project L0 cap. memory_save(content, ..., layer=N) accepts an explicit override (0-3) when Claude judges better than the heuristic.
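A hypothetical sketch of such a heuristic (the tag names and thresholds here are invented for illustration; only the signals listed above, the 0-3 range, and the per-project L0 cap come from the project):

```python
def classify_layer(importance, tags, content_len, l0_count, l0_cap=5):
    """Illustrative layer heuristic; all thresholds are made up for this sketch."""
    if "project-identity" in tags and l0_count < l0_cap:
        return 0  # project identity, subject to the per-project L0 cap
    if importance <= 1 or "one-off" in tags:
        return 3  # rare/archival: excluded from auto-recall
    if "convention" in tags and content_len < 500:
        return 1  # short generic conventions
    return 2      # default: domain-specific
```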

Token efficiency: session start injects L0 verbatim plus a compact index for L1-L2 (~50 tokens per entry: ID + layer + title + snippet). Claude drills into specific memories via memory_get(ids=[...]) only when it needs full detail.

Active Memory Slice runtime kernel:

For ongoing work, active_memory_slice(query, scope_id, ...) is the default runtime path. It draws candidates from memory_search/FTS/graph/playbooks/transcripts plus the runtime environment and current artifacts, then activates them into a structured working state:

Memory Vault
→ Candidate Generation
→ Activation Judgement
→ Active Memory Slice
→ Delta Proposals
→ Memory Vault

The slice explicitly separates goals, constraints, background, decisions, preferences, failure patterns, artifacts, open tensions, and candidate deltas. If you pass session_id together with runtime context such as task_mode and repo_path, memem also carries forward continuity across slices and records slice history under ~/.memem/.

Default runtime behavior is still non-mutating. Delta proposals are validated and surfaced in the slice, but safe writeback only runs when you opt in via writeback_preview=True or auto_commit_safe=True.

Opt-in features:

  • MEMEM_SHOW_BANNER=1 — show a one-line status banner at session start (off by default)

  • MEMEM_PRETOOL_GATING=1 — enrich Read tool calls with memories about the target file (off by default)

  • MEMEM_TOPIC_SHIFT_THRESHOLD=0.3 — keyword overlap threshold for topic-shift re-firing (default 0.3)
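The topic-shift check compares keyword overlap between consecutive prompts; a plausible sketch, assuming a Jaccard-style word-set metric (memem's exact metric is not documented here):

```python
def keyword_overlap(prev_prompt, new_prompt):
    """Jaccard overlap of lowercase word sets (illustrative, not memem's exact metric)."""
    a = set(prev_prompt.lower().split())
    b = set(new_prompt.lower().split())
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def topic_shifted(prev_prompt, new_prompt, threshold=0.3):
    # Re-fire slice generation when overlap drops below the threshold
    return keyword_overlap(prev_prompt, new_prompt) < threshold
```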

How do I install memem?

Copy-paste:

claude plugin marketplace add TT-Wang/memem
claude plugin install memem@memem-marketplace

If you already added the marketplace once, future installs only need the second command.

Then:

  1. restart Claude Code if it was already open

  2. open any project

  3. send your first normal message

  4. memem will show a welcome/status message and offer the mining options

That's it. On first run, bootstrap.sh self-heals everything:

  1. Verifies Python ≥ 3.11 — or installs it via uv python install 3.11 if your system Python is too old

  2. Installs uv if missing (via the official Astral installer)

  3. Syncs deps into a plugin-local .venv (hash-cached against uv.lock)

  4. Creates and canary-tests ~/.memem/ and ~/obsidian-brain/

  5. Writes ~/.memem/.capabilities (used for degraded-mode decisions)

  6. Execs the real MCP server

First run: ~5 seconds. Every run after: ~100ms. No separate pip install step.

Nothing mines until you opt in. memem is strictly opt-in as of v0.9.0 — install does not start the miner or touch your sessions. Type /memem to see status and choose what to do next. You can start mining two ways:

  • /memem-mine — mine new sessions only (from now on)

  • /memem-mine-history — mine everything, including past history (uses Haiku API credits)

Or just tell Claude "start mining new sessions" / "start mining everything including history" — it knows what to do.

  • choose /memem-mine if you only want memory from new sessions going forward

  • choose /memem-mine-history if you want memem to process your old Claude Code sessions too

If you are unsure, start with /memem-mine. It is the safer and cheaper default.

What happens on my first Claude Code session?

At session start, the SessionStart hook tries to prime a slice-first working state for the current project scope. On each user prompt, the UserPromptSubmit hook regenerates the slice for the current query. If you just installed memem and have no relevant context yet, the hooks stay quiet and Claude proceeds normally.

You work normally. The miner daemon runs silently in the background. When your session ends and settles for 5 minutes, the miner extracts memories from the transcript using Claude Haiku and writes them to your vault.

During the session: every user prompt goes through active_memory_slice, which builds a structured working-state briefing from the relevant memories, playbooks, transcripts, graph neighbors, environment facts, and current artifacts. The hooks automatically pass session id and working directory, and the prompt hook infers a task mode when the host does not provide one, so ongoing work can carry constraints, artifacts, and tensions forward across slices. You see an active slice prompt with goals, constraints, background, decisions, failure patterns, open tensions, and artifacts. Claude starts with the current working state instead of a generic briefing.

30-Second Setup

claude plugin marketplace add TT-Wang/memem
claude plugin install memem@memem-marketplace

Then in Claude Code:

/memem

And choose one:

/memem-mine

or

/memem-mine-history

What does memem save?

It saves durable knowledge, not session logs:

  • Architecture decisions with rationale ("we use RS256 JWTs because HS256 can't be verified by third parties without sharing the secret")

  • Conventions ("tests go in tests/ not spec/", "commit messages use imperative mood", "never import from internal/ outside its package")

  • Bug fixes you might forget ("bcrypt.compare is async — must await", "timezone math must use dayjs.utc() or DST shifts the result by an hour")

  • User preferences ("prefer single commits, not stacked PRs", "terse responses — no trailing summaries", "ask before running migrations in prod")

  • Known issues & workarounds ("JWT_SECRET defaults to 'secret' if unset — tracked in #123", "pnpm install hangs on corporate VPN, use --network-timeout=600000")

  • Environment & tooling facts ("project uses Poetry, not pip", "CI runs on Node 20 but local defaults to 22 — pin with nvm use", "Redis must be running on :6380 not :6379")

  • Project structure & invariants ("auth middleware requires Redis", "all DB writes go through repo/ layer, never raw SQL in handlers")

  • Failure patterns & post-mortems ("mocking the DB hid a broken migration last quarter — integration tests must hit a real DB", "don't ship on Fridays after the 2025-11 rollback incident")

  • Third-party quirks ("Stripe webhooks retry for 3 days — idempotency key is mandatory", "OpenAI streaming drops the final token if client closes early")

  • Domain knowledge ("a 'merchant' in our schema is what the legal team calls a 'counterparty'", "revenue is recognized at ship time, not order time")

It does NOT save:

  • Raw session transcripts (those are searchable via transcript_search, not stored as memories)

  • Trivial or obvious facts

  • Session outcomes ("today I worked on X")

Where does memem store my memories?

| Store | Path | Purpose |
|-------|------|---------|
| Memories | ~/obsidian-brain/memem/memories/*.md | Source of truth (human-readable markdown) |
| Playbooks | ~/obsidian-brain/memem/playbooks/*.md | Per-project curated briefings |
| Search DB | ~/.memem/search.db | SQLite FTS5 index (machine-fast lookup) |
| Graph DB | ~/.memem/graph.db | Rebuildable typed/scored memory-edge index |
| Telemetry | ~/.memem/telemetry.json | Access tracking (atomic writes) |
| Event log | ~/.memem/events.jsonl | Append-only audit trail |
| Capabilities | ~/.memem/.capabilities | Degraded-mode flags written by bootstrap |
| Bootstrap log | ~/.memem/bootstrap.log | First-run diagnostics |

You can point memem elsewhere via MEMEM_DIR and MEMEM_OBSIDIAN_VAULT env vars.

What are the MCP tools Claude can call?

All recall tools return slice-formatted markdown via a unified render_slice_markdown dispatcher (introduced in v1.1) so output structure is consistent across tools.

| Tool | What it does |
|------|--------------|
| memory_save(content, title, tags, layer?) | Store a lesson. Security-scanned for prompt injection and credential exfil before writing. layer is optional (0-3); auto-classifies via classify_layer if omitted. |
| memory_search(query, limit, scope_id) | [L1] Compact index slice — IDs + layer + title + 1-line snippet. Use first to narrow candidates. |
| memory_get(ids, scope_id) | [L2] Full content slice for specific memory IDs. Use after memory_search. |
| memory_timeline(memory_id, scope_id) | [L3] Chronological thread via related[] graph + same-project window. |
| memory_recall(query, scope_id, limit) | Legacy alias — search + full content in one slice. |
| memory_list(scope_id) | List all memories with stats, grouped by project. |
| memory_import(source_path) | Bulk import from files, directories, or chat exports. |
| transcript_search(query) | Search raw Claude Code session JSONL logs (not the mined memories). |
| context_assemble(query, project) | Composite briefing: calls active_memory_slice 1-2 times (project + general scope when sparse), merges into one assembled slice. |
| memory_graph(memory_id) | Inspect typed/scored graph neighbors for one memory. |
| memory_graph_audit() | Report graph quality issues: orphans, dead links, hubs, stale edges. |
| memory_graph_rebuild(scope_id) | Rebuild the graph side index from the Obsidian vault. |
| active_memory_slice(query, scope_id, session_id?, task_mode?, repo_path?, writeback_preview?, auto_commit_safe?) | Build a structured runtime working state for current work, with optional continuity and controlled writeback. |

How do I inspect slices or writeback manually?

Use the CLI when you want raw slice JSON, continuity debugging, or explicit writeback preview:

python3 -m memem.server slice "continue auth rollout" --scope memem --session-id sess-42 --cwd "$PWD" --task-mode coding --json --no-llm
python3 -m memem.server slice "continue auth rollout" --scope memem --session-id sess-42 --cwd "$PWD" --task-mode coding --writeback-preview --json --no-llm
python3 -m memem.server slice "continue auth rollout" --scope memem --session-id sess-42 --cwd "$PWD" --task-mode coding --auto-commit-safe --json --no-llm

Semantics:

  • default slice is read-side and non-mutating

  • --writeback-preview runs the delta pipeline in dry-run mode

  • --auto-commit-safe commits only deltas classified as auto-safe

What slash commands does memem add?

  • /memem — welcome, status, help

  • /memem-status — memory count, projects, search DB size, miner health

  • /memem-doctor — preflight health check with fix instructions for any blocker

  • /memem-mine — opt in and start the miner daemon (mines new sessions going forward)

  • /memem-mine-history — opt-in: mine all your pre-install Claude Code sessions

What if the claude CLI isn't on my PATH?

memem enters degraded mode — it still works, just without Haiku-powered context assembly and smart recall. You get FTS-only keyword recall instead of query-tailored briefings. Every session shows [memem] N memories · miner OK · assembly degraded (claude CLI missing — FTS-only recall) at the top of the context, so you know why.

This is by design: missing optional dependencies should degrade, not fail.

How do I diagnose problems?

Run /memem-doctor. It runs the same preflight the bootstrap shim runs (Python version, mcp importable, claude CLI on PATH, directory writability, uv available), then prints a report labelled HEALTHY, DEGRADED, or FAILING with explicit fix instructions for each blocker.

For deeper debugging:

tail -f ~/.memem/bootstrap.log   # first-run shim log
tail -f ~/.memem/miner.log       # miner daemon log
cat ~/.memem/events.jsonl        # memory operation audit trail
python3 -m memem.server --status   # detailed status dump

How does the mining pipeline work?

Session ends → miner daemon sees the JSONL file in ~/.claude/projects/
  → Waits 5 minutes for the file to "settle" (no more writes)
  → Filters to human messages + assistant prose (strips tool calls, system reminders)
  → One Haiku call with the full context: "extract durable lessons"
  → Haiku returns JSON array of memory candidates
  → Each candidate runs: security scan → dedup check → contradiction detection → save
  → Index rebuilt, per-project playbooks grown and refined
  → Session marked COMPLETE in ~/.memem/.mined_sessions
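The "settle" check at the top of the pipeline reduces to a timestamp comparison; a sketch, with the daemon's attempt counters and DLQ bookkeeping omitted:

```python
import time

SETTLE_SECONDS = 300  # MEMEM_MINER_SETTLE_SECONDS default

def is_settled(last_write_ts, settle_seconds=SETTLE_SECONDS, now=None):
    """A session file is 'settled' once it has gone settle_seconds without writes."""
    now = time.time() if now is None else now
    return (now - last_write_ts) >= settle_seconds

# In a daemon loop you would feed it the file's mtime, e.g.:
#   if is_settled(os.path.getmtime(session_jsonl)):
#       mine(session_jsonl)
```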

How does the recall pipeline work?

First message in a new session → auto-recall.sh hook fires
  → Reads ~/.memem/.capabilities for status banner
  → Builds an active memory slice from recall candidates + graph/playbook/transcript context
  → Emits a structured "Active Memory Slice" prompt block
  → If the slice engine is unavailable → falls back to compact recall
  → Either way, Claude starts its reply with active work-state context already loaded

Architecture

memem is split into small, focused modules:

  • models.py — data types, path constants

  • security.py — prompt injection + credential exfil scanning

  • telemetry.py — access tracking, event log (atomic writes, fcntl-locked)

  • search_index.py — SQLite FTS5 index

  • graph_index.py — typed/scored related-memory graph side index

  • active_slice.py — MemoryItem + ActiveMemorySlice schemas, render_slice_markdown dispatcher

  • activation.py — heuristic + bounded LLM activation judgement

  • boundaries.py — scope/deprecated/budget/redundancy filters

  • delta.py — non-mutating candidate delta proposals

  • active_slice_engine.py — candidate generation, layer-aware (L0 anchors + L3 gating), build_slice public entry

  • obsidian_store.py — memory I/O, dedup scoring, contradiction detection, layer auto-classification on save

  • recall.py — slice-format recall tools (memory_search/memory_get/memory_timeline/memory_recall)

  • playbook.py — per-project playbook grow + refine

  • assembly.py — context_assemble composes via active_memory_slice

  • capabilities.py — runtime feature detection for degraded mode

  • storage.py — server-lifecycle helpers (PID management, miner auto-start)

  • server.py — thin MCP entrypoint (FastMCP imported lazily)

  • cli.py — command dispatcher for non-MCP entrypoints

  • mining.py — session mining pipeline (Haiku extraction)

  • miner_daemon.py — long-running daemon with structlog JSON output, semaphore concurrency cap, heartbeat

  • miner_protocol.py — exit codes (FATAL=75 / TRANSIENT=20) and status constants

  • miner_errors.py — TransientError / PermanentError taxonomy (PermanentError default)

  • miner_circuit_breaker.py — hand-rolled CB; opens after 5 consecutive PermanentErrors, 5min cooldown

  • session_state.py / session_state_db.py — SQLite WAL state for the miner (auto-migrates from JSONL on first run)
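The circuit-breaker behavior described above (opens after 5 consecutive PermanentErrors, 5-minute cooldown) might look roughly like this; the class shape and method names are assumptions, not memem's actual API:

```python
import time

class CircuitBreaker:
    """Illustrative breaker: opens after `threshold` consecutive failures,
    then allows one attempt through after `cooldown` seconds."""

    def __init__(self, threshold=5, cooldown=300):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        if self.opened_at is None:
            return True
        if now - self.opened_at >= self.cooldown:
            # Cooldown elapsed: reset and let a probe attempt through
            self.opened_at = None
            self.failures = 0
            return True
        return False

    def record_failure(self, now=None):
        now = time.monotonic() if now is None else now
        self.failures += 1
        if self.failures >= self.threshold:
            self.opened_at = now  # trip the breaker

    def record_success(self):
        self.failures = 0
        self.opened_at = None
```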

Multi-signal recall scoring:

  • 50% FTS relevance

  • 15% recency (0.995^hours decay)

  • 15% access history (usage reinforcement)

  • 20% importance (1-5 scale from Haiku)
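Combined, the blend looks something like the following sketch (the normalization of the 1-5 importance scale onto 0-1 is an assumption):

```python
def recall_score(fts, hours_old, access, importance):
    """Weighted blend of the four signals listed above."""
    recency = 0.995 ** hours_old   # exponential decay per hour
    imp = (importance - 1) / 4     # map the 1-5 Haiku scale onto 0-1 (assumed)
    return 0.50 * fts + 0.15 * recency + 0.15 * access + 0.20 * imp
```

Note the decay is gentle: 0.995^24 is about 0.887, so a day-old memory keeps most of its recency weight.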

Related-memory graph:

The Obsidian markdown files remain the source of truth. The related: [...] frontmatter stays intentionally simple so memories are portable and readable. memem also builds ~/.memem/graph.db, a local SQLite side index with typed, scored edges such as same_topic, supports, depends_on, supersedes, and contradicts. Recall uses this graph when available and falls back to the Markdown related field if the graph has not been built yet.
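For intuition, a minimal sketch of what a typed, scored edge index could look like; the table schema and column names here are assumptions for illustration, not graph.db's actual schema:

```python
import sqlite3

# Hypothetical schema for a typed/scored edge side index
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE edges (src TEXT, dst TEXT, edge_type TEXT, score REAL)")
con.executemany(
    "INSERT INTO edges VALUES (?, ?, ?, ?)",
    [("m1", "m2", "same_topic", 0.8),
     ("m1", "m3", "supersedes", 0.9),
     ("m4", "m1", "contradicts", 0.7)],
)

def neighbors(con, memory_id):
    """Typed, scored neighbors in either direction, best first."""
    rows = con.execute(
        """SELECT dst, edge_type, score FROM edges WHERE src = ?
           UNION ALL
           SELECT src, edge_type, score FROM edges WHERE dst = ?
           ORDER BY score DESC""",
        (memory_id, memory_id),
    )
    return list(rows)

print(neighbors(con, "m1"))  # highest-scored edge first
```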

Useful maintenance commands:

memem graph rebuild
memem graph audit
memem graph stats
memem graph neighbors <memory-id>

Memory schema (markdown frontmatter):

---
id: uuid
schema_version: 1
title: "descriptive title"
project: project-name
tags: [mined, project-name]
related: [id1, id2, id3]
created: 2026-04-13
updated: 2026-04-13
source_type: mined | user | import
source_session: abc12345
importance: 1-5
status: active | deprecated
valid_to:                     # set when deprecated
contradicts: [id1]            # flagged conflicts
---
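For illustration, a minimal stdlib-only reader for this frontmatter (a toy sketch for readability; real frontmatter parsing should use a proper YAML library):

```python
def parse_frontmatter(text):
    """Naive key: value parse of the block between the first pair of --- fences."""
    head = text.split("---")[1]
    meta = {}
    for line in head.strip().splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.split("#")[0].strip()  # drop trailing comments
    return meta

doc = """---
id: 1234
title: "descriptive title"
importance: 4
status: active
---
Body of the memory.
"""
print(parse_frontmatter(doc)["status"])  # active
```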

Configuration

| Env var | Default | Purpose |
|---------|---------|---------|
| MEMEM_DIR | ~/.memem | State directory (PID files, search DB, logs) |
| MEMEM_OBSIDIAN_VAULT | ~/obsidian-brain | Vault location |
| MEMEM_EXTRA_SESSION_DIRS | (none) | Colon-separated extra session dirs to mine |
| MEMEM_MINER_SETTLE_SECONDS | 300 | Seconds to wait before mining a completed session |
| MEMEM_SKIP_SYNC | 0 | Bootstrap skips uv sync when set to 1 (dev only) |

memem works without Obsidian — it just writes markdown. But Obsidian gives you graph view and backlinks for free:

  1. Download: https://obsidian.md (free)

  2. Open ~/obsidian-brain as a vault

  3. Memories appear in memem/memories/, playbooks in memem/playbooks/

  4. Use Graph View to see how memories link via the related field

Requirements

  • Claude Code

  • Python ≥ 3.11

  • uv (auto-installed by bootstrap.sh on first run)

  • claude CLI on PATH (optional — required for Haiku-powered assembly; degraded mode works without it)

Development

git clone https://github.com/TT-Wang/memem.git
cd memem
pip install -e ".[dev]"
pytest             # 428 tests
ruff check .       # lint
mypy memem         # type check

See CONTRIBUTING.md for the PR process and CHANGELOG.md for version history.

Works great with

  • forge — Structured planning, parallel execution, and deep validation for Claude Code. memem + forge is the recommended pairing: forge plans and executes multi-file changes, memem remembers what worked across runs. Forge's memory_save patterns land in memem's recall index, so next week's run starts with last week's lessons already loaded.

License

MIT
