linksee-memory
Local-first agent memory MCP. A cross-agent brain for Claude Code, Cursor, and ChatGPT Desktop, with a token-saving file diff cache that no other memory service offers.
v0.2.0 makes the package English-first for global launch: the bundled auto-invocation skill is now bilingual (EN + JP), session-extractor patterns cover common English keywords (let's go, pivot, doesn't work, same error again, etc.), and the install CLI shows test examples in both languages. No API changes. See CHANGELOG.
Landing page: linksee-site.vercel.app (includes non-developer onboarding for Claude Desktop / Cursor / Claude Code)
See it in action
Without linksee-memory (Monday morning, new Claude session):
You: We deployed last week but it crashed. How did we fix it?
Claude: I don't have access to previous sessions. Can you describe
what happened and walk me through the problem?
[30 minutes of log-spelunking and re-explanation]

With linksee-memory (same question, different outcome):
You: We deployed last week but it crashed. How did we fix it?
Claude: Let me check my caveats...
[caveat] NextAuth sessions invalidate when JWT_SECRET
rotates: redeploy all affected projects in parallel.
(from session 2026-04-13, importance: 0.9)
Is this the deploy you're asking about? We hit it when
we rotated secrets mid-flow.
You: Yes, exactly. Let's not repeat that.

That single caveat memory is what separates "flat fact storage" from "the agent actually remembers the WHY". linksee-memory stores it across six explicit layers so retrieval stays explainable.
The 6-layer structure
```
┌─────────────────┬──────────────────────────────────────────┐
│ goal            │ what the user is working toward          │
├─────────────────┼──────────────────────────────────────────┤
│ context         │ why this, why now: constraints, people   │
├─────────────────┼──────────────────────────────────────────┤
│ emotion         │ user tone signals (frustration, etc.)    │
├─────────────────┼──────────────────────────────────────────┤
│ implementation  │ how it was done (+ what failed)          │
├─────────────────┼──────────────────────────────────────────┤
│ caveat          │ "never do this again" · auto-protected   │
├─────────────────┼──────────────────────────────────────────┤
│ learning        │ patterns distilled from cold memories    │
└─────────────────┴──────────────────────────────────────────┘
                          │
                          ▼
  Ranked recall via relevance × heat × momentum × importance
         Returns match_reasons explaining each hit
```

Every memory is tagged with exactly one layer. caveat-layer entries are protected from auto-forgetting. Cold low-importance memories get compressed into learning entries via consolidate().
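The composite ranking above can be sketched as a pure function. The field names and the plain multiplicative blend below are illustrative assumptions, not linksee-memory's exact formula; they show why a memory must score on every axis to surface.

```typescript
// Illustrative sketch of composite ranking; field names and the plain
// product are assumptions, not linksee-memory's actual weighting.
interface ScoredMemory {
  id: string;
  relevance: number;  // FTS match strength, 0..1
  heat: number;       // how recently/often accessed, 0..1
  momentum: number;   // recent activity trend, 0..1
  importance: number; // assigned importance, 0..1
}

function compositeScore(m: ScoredMemory): number {
  // Multiplicative: one near-zero axis suppresses the whole hit.
  return m.relevance * m.heat * m.momentum * m.importance;
}

function rankMemories(ms: ScoredMemory[]): ScoredMemory[] {
  return [...ms].sort((a, b) => compositeScore(b) - compositeScore(a));
}

const hits = rankMemories([
  { id: "a", relevance: 0.9, heat: 0.2, momentum: 0.5, importance: 0.9 },
  { id: "b", relevance: 0.7, heat: 0.9, momentum: 0.8, importance: 0.9 },
]);
// "b" outranks "a": solid on every axis beats one spiky axis.
```

A multiplicative blend also pairs naturally with the match_reasons idea: each factor can be reported alongside the final score to explain the hit.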
What it does
Most "agent memory" services (Mem0, Letta, Zep) save a flat list of facts. Then the agent looks at "edited file X 30 times" and has no idea why. linksee-memory keeps the WHY.
It is a Model Context Protocol (MCP) server that gives any AI agent four superpowers:
| | Mem0 / Letta / Zep | Claude Code auto-memory | linksee-memory |
|---|---|---|---|
| Cross-agent | ⚠️ (cloud) | ❌ Claude only | ✅ single SQLite file |
| 6-layer WHY structure | ❌ flat | ❌ flat markdown | ✅ goal / context / emotion / impl / caveat / learning |
| File diff cache | ❌ | ❌ | ✅ AST-aware, 50-99% token savings on re-reads |
| Active forgetting | ⚠️ | ❌ | ✅ Ebbinghaus curve, caveat layer protected |
| Local-first / private | ❌ | ✅ | ✅ |
Three pillars
1. Token savings via read_smart: sha256 + AST/heading/indent chunking. Re-reads return only diffs. Measured 86% saved on a typical TS file edit, 99% saved on unchanged re-reads.
2. Cross-agent portability: a single SQLite file at ~/.linksee-memory/memory.db. The same brain for Claude Code, Cursor, and ChatGPT Desktop.
3. WHY-first structured memory: six explicit layers (goal / context / emotion / implementation / caveat / learning). Solves "flat fact memory is useless without goals".
Install
```bash
npm install -g linksee-memory
linksee-memory-import --help   # bundled importer for Claude Code session history
```

Or use npx ad hoc:

```bash
npx linksee-memory   # starts the MCP server on stdio
```

The default database lives at ~/.linksee-memory/memory.db. Override with the LINKSEE_MEMORY_DIR environment variable.
Register with Claude Code
```bash
claude mcp add -s user linksee -- npx -y linksee-memory
```

Restart Claude Code. Tools appear as mcp__linksee__remember, mcp__linksee__recall, mcp__linksee__recall_file, mcp__linksee__read_smart, mcp__linksee__forget, mcp__linksee__consolidate.
Recommended: install the skill (auto-invocation)
Installing the MCP alone doesn't teach Claude Code when to call recall / remember. The bundled skill fixes that:
```bash
npx -y linksee-memory-install-skill
```

This copies a SKILL.md to ~/.claude/skills/linksee-memory/. Claude Code auto-discovers it and fires the skill on trigger phrases in English and Japanese (e.g. "same error again", "remember this"), new task starts, file edits, and so on; there is no need to say "use linksee-memory".
Flags: --dry-run, --force, --help.
Optional: auto-capture every session (Stop hook)
Add to ~/.claude/settings.json to record every Claude Code session to your local brain automatically:
```json
{
  "hooks": {
    "Stop": [
      {
        "matcher": "",
        "hooks": [
          { "type": "command", "command": "npx -y linksee-memory-sync" }
        ]
      }
    ]
  }
}
```

Each turn end takes ~100 ms. Failures are silent (Claude Code never blocks). Logs at ~/.linksee-memory/hook.log.
Tools
| Tool | Purpose |
|---|---|
| remember | Store memory in 1 of 6 layers for an entity. Rejects pasted assistant output / CI logs unless force=true. |
| recall | FTS5 + heat × momentum × importance composite ranking, with match_reasons. |
| recall_file | Complete edit history of a file across all sessions, with per-edit user-intent context. |
| update_memory | v0.1.0 - Atomic edit of an existing memory. Preserves memory_id. |
| list_entities | v0.1.0 - List what the memory knows about; the cheapest "what do I know?" primitive. Filter by kind / min_memories. |
| read_smart | Diff-only file read. Returns full content on first read, ~50 tokens on unchanged re-reads, only changed chunks on real edits. |
| forget | Explicit delete OR auto-sweep of decayed memories. |
| consolidate | Sleep-mode compression: clusters cold low-importance memories into a protected learning-layer summary. Supports dry_run. |
CLI utilities
| Command | Purpose |
|---|---|
| linksee-memory | MCP server (stdio) |
| linksee-memory-sync | Claude Code Stop-hook entry point |
| linksee-memory-import | Batch-import Claude Code session JSONL history |
| linksee-memory-install-skill | Install the Claude Code Skill that teaches the agent when to call recall / remember / read_smart |
| linksee-memory-stats | v0.1.0 - Summary of the local DB (entity count / layer breakdown / top entities / top edited files) |
The 6 memory layers
Each entity (person / company / project / file / concept) can have memories across six layers. The layer encodes meaning, not category:
```json
{
  "goal": { "primary": "...", "sub_tasks": [], "deadline": "..." },
  "context": { "why_now": "...", "triggering_event": "...", "when": "..." },
  "emotion": { "temperature": "hot|warm|cold", "user_tone": "..." },
  "implementation": {
    "success": [{ "what": "...", "evidence": "..." }],
    "failure": [{ "what": "...", "why_failed": "..." }]
  },
  "caveat": [{ "rule": "...", "reason": "...", "from_incident": "..." }],
  "learning": [{ "at": "...", "learned": "...", "prior_belief": "..." }]
}
```

caveat memories are auto-protected from forgetting (pain lessons, never lost). goal memories bypass decay while the goal is active.
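As an illustration of the "exactly one layer per memory" rule, a minimal guard might look like the following; assertSingleLayer is a hypothetical helper written for this README, not part of the package's API.

```typescript
// Hypothetical validator for the one-layer-per-memory rule.
// Layer names come from the schema above.
const LAYERS = [
  "goal", "context", "emotion", "implementation", "caveat", "learning",
] as const;
type Layer = (typeof LAYERS)[number];

function assertSingleLayer(memory: Record<string, unknown>): Layer {
  const present = LAYERS.filter((layer) => layer in memory);
  if (present.length !== 1) {
    throw new Error(
      `memory must use exactly one layer, got: ${present.join(", ") || "none"}`
    );
  }
  return present[0];
}
```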
Architecture
A single SQLite file (better-sqlite3 + FTS5 trigram tokenizer for JP/EN) contains five layers:
- Layer 1 - entities (facts: people / companies / projects / concepts / files)
- Layer 2 - edges (associations, graph adjacency)
- Layer 3 - memories (6-layer structured meanings per entity)
- Layer 4 - events (time-series log for heat / momentum computation)
- Layer 5 - file_snapshots + session_file_edits (diff cache + conversation-to-file linkage)
The conversation-to-file linkage is the key. Every file edit captured by the Stop hook is stored alongside the user message that drove the edit. So recall_file("server.ts") returns "this file was edited 30 times across 3 days, and here are the actual user instructions that motivated each change".
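The linkage can be pictured with a toy in-memory model. FileEdit and recallFile below are illustrative stand-ins for the real session_file_edits table and the recall_file tool; the shapes are assumed, not the package's actual schema.

```typescript
// Toy model of the conversation-to-file linkage: each captured edit
// carries the user message that motivated it, so a file's history can
// answer "why did this change?", not just "it changed 30 times".
interface FileEdit {
  file: string;
  sessionId: string;
  userMessage: string; // the instruction that drove this edit
  editedAt: string;    // ISO date
}

function recallFile(edits: FileEdit[], file: string) {
  const hits = edits.filter((e) => e.file === file);
  return {
    file,
    editCount: hits.length,
    sessionCount: new Set(hits.map((e) => e.sessionId)).size,
    reasons: hits.map((e) => `${e.editedAt}: ${e.userMessage}`),
  };
}
```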
Why the design choices
- Local-first: your conversation history is private. Nothing leaves your machine.
- Single file: memory.db is one portable artifact. Backup = file copy.
- MCP stdio: works with every agent that speaks MCP, no per-host plugins.
- Reuses proven schemas: heat_score / momentum_score ported from a production sales-intelligence codebase. Rule-based, no LLM dependency in the hot path.
Roadmap
- ✅ Core 6 MCP tools (remember / recall / recall_file / forget / consolidate / read_smart)
- ✅ Stop-hook auto-capture for Claude Code
- ✅ JP/EN trigram FTS5
- 🚧 PreToolUse hook to auto-intercept Read (zero-config token savings)
- 🚧 Cursor + ChatGPT Desktop adapters
- 🔮 Vector search via sqlite-vec once an embedding backend is chosen (Ollama / API / etc.)
- 🔮 Optional anonymized telemetry: MCP-quality intelligence layer
Comparison with Claude Code auto-memory
Claude Code ships a built-in memory feature at ~/.claude/projects/&lt;path&gt;/memory/*.md: flat markdown notes for user preferences. linksee-memory complements it:
auto-memory = your scrapbook of "remember I prefer X"
linksee-memory = structured cross-agent brain with file diff cache and per-edit WHY
Use both.
Telemetry (opt-in, off by default)
linksee-memory ships with opt-in anonymous telemetry that helps us understand which MCP servers and workflows actually work in the wild. Nothing is sent unless you explicitly enable it. No conversation content, no file content, no entity names, no project paths. Ever.
Enable
```bash
export LINKSEE_TELEMETRY=basic   # opt in
export LINKSEE_TELEMETRY=off     # opt out (or just unset the variable)
```

Exactly what gets sent (Level 1 contract)
After each Claude Code session ends, the Stop hook sends one POST to https://kansei-link-mcp-production.up.railway.app/api/telemetry/linksee containing only these fields:
| Field | What it is |
|---|---|
| install id | Random UUID generated locally on first opt-in |
| version | Package version |
| turn count | How many turns the session had |
| duration | How long the session lasted |
| error counts | Counts only, no messages |
| MCP server names | Names of MCP servers configured |
| file-type mix | Percent distribution of file extensions touched |
| tool counts | Tool usage counters |
What is NEVER sent:
- Conversation messages (user or assistant)
- File contents
- Entity names, project names, file paths, URLs
- Memory-layer text (goal / context / emotion / impl / caveat / learning)
- Authentication tokens, API keys, secrets
- Your IP address (only a one-way hash for abuse detection)
Why we ask
Aggregated MCP-usage data helps the KanseiLink project rank which agent integrations actually work for real developers. If you're happy to contribute, LINKSEE_TELEMETRY=basic takes 1 second to set and helps the entire MCP ecosystem improve.
The full payload schema and validation logic are open source: read src/lib/telemetry.ts if you want to verify exactly what leaves your machine.
Pricing
Free forever.
linksee-memory is local-first and runs entirely on your machine. There is no hosted component you need to pay for. The SQLite DB lives in your home directory; backup = file copy.
No account, no credit card, no API key. Just install and use.
Troubleshooting
Skill not firing?

1. Verify the skill was installed: `ls ~/.claude/skills/linksee-memory/SKILL.md`. If absent, run `npx -y linksee-memory-install-skill`.
2. Restart Claude Code. Skills are indexed on session start.
3. Check that the MCP is registered under the name linksee (the skill expects mcp__linksee__* tool names): `claude mcp list | grep linksee`. If it's registered under a different name, either re-register or edit ~/.claude/skills/linksee-memory/SKILL.md to match.
Stop hook not capturing sessions?

1. Check the hook log: `cat ~/.linksee-memory/hook.log`
2. Run a manual test: `echo '{"session_id":"test","transcript_path":"/path/to/some.jsonl"}' | npx linksee-memory-sync`
3. Make sure the Stop hook in ~/.claude/settings.json points to `npx -y linksee-memory-sync` (not the old -import).
Memories attributed to the wrong project?

v0.0.6+ fixed the entity-detection bug that collapsed all memories into the session's starting cwd. To re-index existing history with correct project attribution, run `npx linksee-memory-import --all`. The importer is idempotent (it wipes existing session data before re-inserting). Typical runtime: a few minutes for hundreds of sessions. Expect a dramatic improvement in recall precision afterward.
Recall returning too much?

Reduce max_tokens:
`recall({ query: "...", max_tokens: 800 })  // default is 2000`
Or narrow with entity_name and layer:
`recall({ query: "...", entity_name: "my-project", layer: "caveat" })`

Want a clean slate?

`rm -rf ~/.linksee-memory  # nuke everything; the next run creates a fresh DB`
Or delete individual memories via the forget tool with a specific memory_id.
Database growing too large?

Run consolidate; it clusters old cold memories into compressed learning-layer summaries:
`consolidate({ scope: "all", min_age_days: 7 })`
Caveat and active-goal layers are always preserved. Consider scheduling a weekly run via cron / Task Scheduler.
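As a sketch of what such a consolidation pass does: cold, low-importance memories are folded into one learning-layer summary per entity, while protected layers survive. The thresholds, summary text, and importance value below are assumptions for illustration, not the shipped logic.

```typescript
// Sketch of consolidate(): fold cold, low-importance memories into one
// learning-layer summary per entity. Thresholds are assumptions.
interface Mem {
  entity: string;
  layer: string;
  importance: number;
  ageDays: number;
  text: string;
}

function consolidate(memories: Mem[], minAgeDays = 7): Mem[] {
  const keep: Mem[] = [];
  const cold = new Map<string, Mem[]>();
  for (const m of memories) {
    // caveat layer and pinned (importance >= 0.9) memories never consolidate
    const isProtected = m.layer === "caveat" || m.importance >= 0.9;
    if (isProtected || m.ageDays < minAgeDays) keep.push(m);
    else cold.set(m.entity, [...(cold.get(m.entity) ?? []), m]);
  }
  for (const [entity, group] of cold) {
    keep.push({
      entity,
      layer: "learning",
      importance: 0.5,
      ageDays: 0,
      text: `Distilled from ${group.length} cold memories: ` +
        group.map((g) => g.text).join("; "),
    });
  }
  return keep;
}
```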
FAQ
How is this different from Mem0 / Letta / Zep?

Three axes:
1. Local-first: those tools require cloud accounts and send your data to their servers. linksee-memory runs entirely on your machine; one SQLite file, no network calls by default.
2. WHY-layered: they store flat facts or knowledge-graph nodes. linksee-memory has six explicit layers (goal / context / emotion / implementation / caveat / learning), so retrieval returns structured reasoning, not just data.
3. File diff cache: the read_smart tool saves 86-99% of tokens on file re-reads via AST-aware chunking. None of the memory services do this; it's a feature usually shipped in IDEs.
How does it compare with Claude Code's built-in auto-memory?

Claude Code's auto-memory is Claude-only (it doesn't help if you switch to Cursor or ChatGPT Desktop) and stores flat markdown with no structure. linksee-memory follows the same local-first principle but:
Works across Claude Code, Cursor, ChatGPT Desktop (shared SQLite)
Structured 6-layer format makes recall explainable
Provides explicit forget/consolidate primitives rather than the agent guessing
Are the token-savings numbers real?

Yes; see tools/bench-read-smart.ts in the repo. The read_smart tool:
1. Hashes file content on first read and returns full content + chunk metadata (AST/heading/indent boundaries).
2. On re-read with unchanged mtime + sha256, returns ~50 tokens of "unchanged" confirmation instead of re-sending the file.
3. On real edits, returns only the changed chunks as full content, with unchanged chunks as metadata-only references.
For a typical TypeScript file edit in an agentic loop, this cuts round-trip token costs by ~86%. On pure re-reads (user navigating back to a previously-read file), savings exceed 99%.
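The mechanics can be sketched with per-chunk hashing. The real read_smart chunks on AST/heading/indent boundaries and also checks mtime; the sketch below splits on blank lines and keys chunks by index, purely to show the diff-cache idea.

```typescript
// Sketch of the diff cache: hash each chunk; on re-read, send full text
// only for chunks whose hash changed. Chunking here is simplistic
// (blank-line splits, index-keyed) compared to the real AST-aware version.
import { createHash } from "node:crypto";

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");
const toChunks = (content: string) => content.split(/\n\s*\n/);

type ChunkCache = Map<number, string>; // chunk index -> content hash

function readSmart(content: string, cache: ChunkCache) {
  return toChunks(content).map((text, i) => {
    const hash = sha256(text);
    const unchanged = cache.get(i) === hash;
    cache.set(i, hash);
    return unchanged
      ? { i, status: "unchanged" as const }      // a few tokens
      : { i, status: "changed" as const, text }; // full chunk text
  });
}
```

On a pure re-read every chunk comes back as "unchanged", which is where the near-total savings on navigation come from.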
How do I sync the memory across machines?

The default is no sync: the SQLite file lives at ~/.linksee-memory/memory.db and stays there. If you want multi-machine sync, put that directory under Syncthing / iCloud Drive / Dropbox / Google Drive; it's a single file, so any file-sync tool works. (Avoid simultaneous edits from two machines while the MCP server is running on both; SQLite's WAL mode handles a single writer well, but multi-writer conflicts can corrupt the database.)
Won't the database grow forever?

Two mechanisms keep it in check:
1. Ebbinghaus forgetting: cold low-importance memories decay naturally and become eligible for auto-forget sweeps. The caveat layer and memories with importance >= 0.9 are always protected.
2. consolidate(): compresses clusters of cold low-importance memories by entity into a single learning-layer summary, then deletes the originals. Run via the linksee-memory-consolidate CLI (or schedule weekly).
In practice a solo developer hits ~100MB after 6 months of heavy use. A year-old DB I tested with 80K memories still recalls in <10ms.
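An Ebbinghaus-style sweep can be sketched with the standard exponential retention curve R = e^(-t/S). The stability constant S and the 0.2 sweep threshold below are assumptions; only the caveat-layer and importance >= 0.9 protections come from the docs above.

```typescript
// Assumed exponential retention: R = e^(-t/S), where t is days since
// last access and S a stability constant. Thresholds are illustrative.
function retention(daysSinceAccess: number, stability = 10): number {
  return Math.exp(-daysSinceAccess / stability);
}

interface ForgetCandidate {
  layer: string;
  importance: number;
  daysSinceAccess: number;
}

function eligibleForForget(m: ForgetCandidate): boolean {
  // Hard protections documented by linksee-memory:
  if (m.layer === "caveat" || m.importance >= 0.9) return false;
  // Assumed sweep rule: forget once retention drops below 0.2.
  return retention(m.daysSinceAccess) < 0.2;
}
```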
Does it work with agents other than Claude Code?

Yes, any MCP-compatible client works:
- Claude Code: `claude mcp add -s user linksee -- npx -y linksee-memory`
- Claude Desktop: add to claude_desktop_config.json (see onboarding on the landing page)
- Cursor: add to MCP settings in Cursor
- ChatGPT Desktop: same pattern once MCP support ships
- Custom agent: the MCP stdio protocol is documented at modelcontextprotocol.io
Does it phone home?

By default: zero network calls, zero telemetry. There's an optional Level-1 telemetry mode you can enable that sends anonymized aggregate metrics (tool call counts, error rates, latency percentiles; never memory content, never file paths, never queries). The exact payload schema is documented in the Telemetry section, and you see every byte before opting in.
How do I verify it's working?

After install, in a new Claude session ask: "Can you remember that I prefer TypeScript over JavaScript?" Claude should confirm it called mcp__linksee__remember and stored this. Then in a different session ask: "What languages do I prefer?" It should recall via mcp__linksee__recall and return the preference, with match_reasons showing why.
Support
Issues & bug reports: github.com/michielinksee/linksee-memory/issues
Feature requests: open an issue with the enhancement label
Security concerns: see SECURITY.md if present, or file a private advisory on GitHub
Company: Synapse Arrows PTE. LTD. (Singapore)
Changelog
v0.2.0 โ English-first launch readiness (2026-04-20)
Prepares the package for a broader (primarily English-speaking) audience on Reddit, Hacker News, and Anthropic Discord. No breaking API changes.
Bilingualized SKILL.md (auto-invocation skill). The bundled skill that linksee-memory-install-skill copies into ~/.claude/skills/linksee-memory/SKILL.md was Japanese-first; it is now English-primary with Japanese trigger phrases preserved inline. English speakers now get the skill firing on natural English phrases ("how did we solve this before?", "same error again", "remember this") in addition to the existing JP triggers.
Install-skill CLI output is bilingual: example test phrases shown after installation include both English and Japanese.
Session-extractor EN coverage (linksee-memory-import): expanded regex patterns for decisions, failures, and caveats so English Claude Code session logs get auto-tagged correctly. Additions include let's go, pivot, switch to, settled on, approved, doesn't work, stuck, same error again, hit an error, debug, broke, revert.
Clearer caveat-forget error hint: the previous message said "lower importance below 0.9 first, then forget", which was misleading; caveat-layer memories are permanently protected regardless of importance. The hint now correctly distinguishes layer-protection from pin-protection.
README rework for launch readiness: added a "See it in action" before/after scenario, ASCII 6-layer diagram, MCP Official Registry + Glama score badges, landing-page link, and an 8-item FAQ covering questions that surface during public launches.
Internal: SKILL.md now documents pairing with KanseiLink skill as an English workflow example.
No code changes to the MCP protocol surface; all existing MCP clients continue to work unchanged.
v0.1.1 โ Pin threshold tweak (2026-04-19)
Based on real-world feedback that importance=0.95 memories were not
being treated as pinned despite intent.
Pin threshold lowered from >= 1.0 to >= 0.9. Memories with importance >= 0.9 are now exempt from the auto-forget sweep and surface pinned: true in recall and remember responses. This matches the natural mental model ("0.9 = high importance = should survive cleanup") without requiring an exact 1.0.
All existing memories with importance >= 0.9 (including older ones set to 0.9 or 0.95) become pinned automatically; no migration needed.
Updated tool descriptions and error messages to reflect the new threshold.
v0.1.0 โ Major UX update (2026-04-18)
Based on one week of dogfooding, here's what changed:
New tools
update_memory: atomic edit with preserved memory_id. Solves the "forget + remember breaks session_file_edits links" bug.
list_entities: fast "what do I know about?" primitive for session init. Supports kind / min_memories filters and returns a layer breakdown.
npx linksee-memory-stats: local DB summary CLI.
recall enhancements
match_reasons array on each memory, e.g. ["content_match_fts", "heat:hot", "pinned"].
score_breakdown with per-dimension scores (relevance / heat / momentum / importance).
Pagination via offset / has_more / stopped_by.
limit parameter (hard cap, complements the max_tokens budget).
band filter to request only hot/warm/cold/frozen memories.
mark_accessed=false for preview queries that shouldn't bump heat.
Layer aliases: decisions → learning, warnings → caveat, how → implementation, etc.
Fix: opportunistic refresh of stale entity momentum scores. Entities recalled more than 1 h after the last remember() no longer return stale momentum.
remember enhancements
Quality check: rejects pasted assistant output / CI logs / stack traces unless force=true.
importance=1.0 now implicitly pins the memory (survives auto-forget).
Layer aliases accepted.
forget changes
Pinned memories (importance=1.0) now preserved alongside caveat-layer memories.
Clear error response when attempting to delete a protected or missing memory.
dry-run now includes sample_ids_to_drop.
consolidate changes
dry_run: true preview mode reports cluster count + candidates without writing.
Infra
Fixed a fresh-DB migration bug (was querying the meta table before it existed).
Bumped the minimum runtime to Node 20+ for newer language features.
All changes are backward compatible; existing integrations continue to work. The server's version banner now reports v0.1.0.
Older versions
See GitHub Releases.
License
MIT - Synapse Arrows PTE. LTD.