memoryweb
memoryweb is an associative memory server for AI agents that stores concepts, decisions, and findings as interconnected nodes with narrative relationships — enabling meaningful retrieval across sessions. It emphasizes why things matter, not just what happened.
Core Graph Operations
Add nodes — File concepts, decisions, or findings with a label, domain, description, significance (why_matters), and optional occurrence timestamp
Add edges — Connect nodes with typed relationships (caused_by, led_to, blocked_by, contradicts, depends_on, etc.) and a narrative explaining why they are linked
Retrieve a node — Fetch a specific node and all its connections by ID
Search nodes — Full-text search across labels, descriptions, and why_matters, optionally scoped to a domain
Find connections — Discover how two named concepts relate to each other
Recent changes — List recently filed or updated entries for session orientation
Timeline — Browse entries ordered by actual occurrence date, with optional date range and domain filtering
Archive / Forget
Archive a node — Soft-delete with a reason so it no longer appears in search (restorable; never hard-deleted)
Restore a node — Un-archive a previously forgotten node
List archived nodes — Review all soft-deleted nodes by domain
Domain Aliases
Register alternative names for domains so both names return the same results
List and resolve aliases to their canonical domain targets
Maintenance & Analysis
Drift detection — Surface stale, contradicted, or duplicated entries for review
Domain summary — Retrieve all entries for a domain structured for synthesis into current state, blockers, decisions, and open questions
Key characteristics: Nodes are never hard-deleted; why_matters is critical for meaningful retrieval; domains separate concerns across projects or topics; and Claude Code hooks can automate periodic filing to ensure continuous memory capture.
memoryweb
A persistent knowledge graph MCP server for AI agents.
The idea
Human memory doesn't work by location — you pull a thread. A smell connects to a kitchen, connects to a person, connects to a feeling from thirty years ago. The thread is always there. Pull any part of it and the rest follows.
Agents are no different. Context is tokens in relation to other tokens. What makes something retrievable is its associative chain — the path of connections that lead to it from something else. The narrative edge, the because, is the mechanism. Not the index, not the address.
The boot crash matters because it blocks the tutorial, which blocks the demo, which is why the fix matters now. Pull on any of those threads and you get the rest. memoryweb works the same way: each concept is a node, and what makes it reachable is the narrative that links it to everything else.
The graph has considerably richer context than my flat file memory — including design decisions, failure modes from dogfooding, and the philosophy behind the tool. That's the point, I suppose.
-- Claude Opus 4.6
It's a fantastic project. Most MCP memory implementations I see are just flat vector databases or simple key-value stores that degrade into a digital junk drawer. By enforcing typed relationships, narrative reasoning, soft-deletes, and drift review, you've built a system that actively fights entropy.
-- Gemini 3.1 Pro
Philosophy
memoryweb optimises for remembering things well, not remembering things fast. Filing requires a moment of judgement: why does this matter, how does it connect to what else is known, what would be useful to know when coming back to this cold?
This makes it a decision log, not an event log. An event log records what happened. A decision log records what was learned, decided, and why — and that's what lets you pick up where you left off without re-learning everything.
The why_matters field is not optional. A node without it is an event, not a decision.
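The event/decision distinction can be sketched in a few lines. The field names below (label, domain, description, why_matters) follow the filing description in this README; the actual tool schema may differ.

```python
# Hypothetical node payloads illustrating the decision-log convention.
# Field names follow the README's filing description; illustrative only.

event_style = {
    "label": "boot crash on v2.3",
    "domain": "deep-game",
    "description": "Game crashes during boot on v2.3.",
    # no why_matters: this records what happened, nothing more
}

decision_style = {
    "label": "boot crash on v2.3",
    "domain": "deep-game",
    "description": "Game crashes during boot on v2.3.",
    "why_matters": "Blocks the tutorial, which blocks the demo; "
                   "that is why the fix matters now.",
}

def is_decision(node: dict) -> bool:
    """A node without why_matters is an event, not a decision."""
    return bool(node.get("why_matters"))
```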
Tools
Filing memories
| Tool | What it does |
| --- | --- |
| `remember` | File a single concept, decision, or finding. Required: `label`, `domain`, `description`, `why_matters`. |
| `remember_all` | Batch version of `remember`. |
| | Update an existing node. |
| | Batch version of the update tool. |
Connecting memories
| Tool | What it does |
| --- | --- |
| `connect` | Connect two nodes with a typed relationship and narrative `because`. Both nodes must exist first. |
| `connect_all` | Batch version of `connect`. |
| | Remove a connection by edge ID. Hard delete — cannot be restored. |
| | Given a node ID, return up to 5 candidate connections from the same domain. Read-only. |
Retrieving memories
| Tool | What it does |
| --- | --- |
| | Retrieve a node and all its connections by ID. |
| | Text search across labels, descriptions, and `why_matters`, optionally scoped to a domain. |
| `recent` | What was filed or updated recently, for session orientation. |
| `timeline` | Nodes ordered by when they actually occurred. Supports optional date range and domain filtering. |
| `why_connected` | Look up the reasoning linking two named concepts. |
| | Return all nodes for a domain structured for synthesis — current state, blockers, decisions, open questions. |
| `list_domains` | List all domains that have at least one live node. Use at session start to discover what domains exist before scoping a search. |
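As a sketch of the search behaviour described above: a text match across the three searchable fields (labels, descriptions, why_matters), optionally scoped to a domain. This is an in-memory toy with illustrative field names, not the actual SQLite full-text implementation.

```python
# Toy model of domain-scoped text search across the three searchable
# fields (label, description, why_matters). Illustrative only.
def search_nodes(nodes, term, domain=None):
    term = term.lower()
    hits = []
    for n in nodes:
        if domain is not None and n["domain"] != domain:
            continue  # optional domain scoping
        haystack = " ".join([n["label"], n["description"], n["why_matters"]])
        if term in haystack.lower():
            hits.append(n["label"])
    return hits

nodes = [
    {"label": "boot crash", "domain": "deep-game",
     "description": "crash during boot", "why_matters": "blocks the demo"},
    {"label": "sprint notes", "domain": "sedex",
     "description": "ticket state", "why_matters": "transient"},
]
assert search_nodes(nodes, "demo") == ["boot crash"]      # matched via why_matters
assert search_nodes(nodes, "demo", domain="sedex") == []  # domain scoping
```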
Archive / forget
Nodes are never hard-deleted via the tools. Archive = soft delete; the node disappears from search but can be restored.
| Tool | What it does |
| --- | --- |
| | Archive a node with a reason (soft delete; restorable). |
| | Restore an archived node so it surfaces in search again. |
| | Review what's been archived. Optionally scope by domain. |
| `whats_stale` | Surface nodes that may be stale, contradicted, duplicated, or transient and overdue. Returns candidates for review — never archives automatically. |
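The soft-delete semantics described above can be sketched as a toy in-memory model (illustrative only; the real store is SQLite): archiving sets a flag and a reason, search skips flagged nodes, and restore clears the flag.

```python
# Toy model of memoryweb's archive semantics: archiving hides a node
# from search but keeps it in storage, so it can always be restored.
nodes = {
    "n1": {"label": "boot crash", "archived": False, "archive_reason": None},
}

def archive(node_id: str, reason: str) -> None:
    nodes[node_id]["archived"] = True
    nodes[node_id]["archive_reason"] = reason

def restore(node_id: str) -> None:
    nodes[node_id]["archived"] = False
    nodes[node_id]["archive_reason"] = None

def search(term: str) -> list:
    # Archived nodes never appear in search results.
    return [nid for nid, n in nodes.items()
            if term in n["label"] and not n["archived"]]

archive("n1", "fixed in v2.4")
assert search("boot") == []      # hidden, but still present in `nodes`
restore("n1")
assert search("boot") == ["n1"]  # back after restore
```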
Domain aliases
| Tool | What it does |
| --- | --- |
| | Register an alternative name for a domain so both names return the same results. |
| | Remove a registered alias. |
| | List all registered aliases and what they map to. |
| | Check what canonical domain a name resolves to. |
Relationship types
`caused_by`, `led_to`, `blocked_by`, `unblocks`, `connects_to`, `contradicts`, `depends_on`, `is_example_of`
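A hypothetical edge payload showing how a typed relationship pairs with its narrative. The field names here (`from`, `to`, `type`, `narrative`) are illustrative; only the relationship types and the narrative requirement come from this document.

```python
# Illustrative edge: a typed relationship plus the narrative "because"
# that makes the link retrievable. Field names are assumptions.
RELATIONSHIP_TYPES = {
    "caused_by", "led_to", "blocked_by", "unblocks",
    "connects_to", "contradicts", "depends_on", "is_example_of",
}

edge = {
    "from": "tutorial",
    "to": "boot crash on v2.3",
    "type": "blocked_by",
    "narrative": "The tutorial cannot start while the boot crash fires, "
                 "which is what blocks the demo path.",
}

assert edge["type"] in RELATIONSHIP_TYPES
```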
CLI
The purge subcommand hard-deletes archived nodes from the database. It is intentionally not exposed as an MCP tool — it's a maintenance operation, not an agent operation.
```
memoryweb purge --dry-run            # show what would be deleted (default behaviour without --confirm)
memoryweb purge --confirm            # actually deletes
memoryweb purge --domain sedex       # scope to a domain
memoryweb purge --before 2026-01-01  # only nodes archived before a date
```

The dream subcommand prints a digest of recent nodes and drift candidates — useful for session orientation, and embedded automatically by the save hook at filing time.

```
memoryweb dream                        # reads ~/.memoryweb.db
memoryweb dream --db /path/to/your.db  # explicit DB path
```

The backfill subcommand generates embeddings for all live nodes that don't yet have one. Requires Ollama to be running with the snowflake-arctic-embed model.

```
memoryweb backfill                        # reads ~/.memoryweb.db
memoryweb backfill --db /path/to/your.db  # explicit DB path
memoryweb backfill -q                     # quiet mode — no progress output
```

The setup subcommand installs hooks into ~/.claude/settings.local.json and configures Ollama for semantic search (checks for snowflake-arctic-embed and pulls it if missing).

```
memoryweb setup                             # interactive setup
memoryweb setup --dry-run                   # preview without writing
memoryweb setup --hooks-dir /path/to/hooks  # explicit hooks directory
memoryweb setup --db /path/to/your.db       # explicit DB path
```

Build

```
go build -o memoryweb .
```

Requires Go 1.22+. Uses github.com/mattn/go-sqlite3 and sqlite-vec for semantic search — CGO must be available. To deploy safely when the binary is already running:

```
go build -o memoryweb.tmp . && mv memoryweb.tmp memoryweb
```

Storage
Default DB path: ~/.memoryweb.db
Override with MEMORYWEB_DB=/path/to/your.db
MCP config
Add to your MCP host's config (example for Claude Desktop on macOS — ~/Library/Application Support/Claude/claude_desktop_config.json):
```json
{
  "mcpServers": {
    "memoryweb": {
      "command": "/path/to/memoryweb",
      "env": {
        "MEMORYWEB_DB": "/Users/yourname/.memoryweb.db"
      }
    }
  }
}
```

Conventions
- Use `domain` to separate concerns: `deep-game`, `sedex`, `general`
- Call `list_domains` at session start if you don't know what domains exist
- The `why_matters` field is the most important one for retrieval — don't skip it
- The `narrative` on a connection is the because — the reasoning that makes it meaningful, not just the fact that a connection exists
- Add connections immediately after filing related nodes, or use `related_to` on `remember` to auto-connect at creation time
- Call `recent` or `orient` at the start of a session to orient without needing to know what to search for
- Use `why_connected` when asking about the relationship between two specific things
- Use `transient: true` for ticket state, sprint notes, or anything expected to go stale within days — `whats_stale` will surface these for cleanup
- `remember` returns `suggested_connections` and `possible_duplicates` — review both before filing more nodes
Hooks
Two Claude Code hooks automate filing and pre-compaction capture.
What they do
hooks/memoryweb_save_hook.sh (Stop hook — fires after every AI response)
Counts human messages in the session transcript. Every SAVE_INTERVAL messages (default 15) it blocks the response and asks the model to call remember_all and connect_all for anything significant before continuing. Before blocking, it runs memoryweb dream and embeds the resulting digest — recent nodes and drift candidates — directly in the stopReason so the model has live context before it files. If memoryweb is not available the hook still blocks but omits the digest. Uses a re-entry flag so the block fires once and allows immediately after the model files.
hooks/memoryweb_precompact_hook.sh (PreCompact hook — fires before context compaction)
Blocks compaction once and asks the model to file everything important that hasn't been filed yet. Allows on re-entry so compaction proceeds after the filing pass.
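The save hook's trigger logic described above can be sketched as follows (illustrative Python; the real hook is a shell script):

```python
# Sketch of the save hook's trigger: every SAVE_INTERVAL human messages
# it blocks once, and a re-entry flag lets the very next stop through
# after the model has filed. Illustrative only.
SAVE_INTERVAL = 15

def should_block(human_messages: int, reentry_flag: bool) -> bool:
    if reentry_flag:
        return False  # model just filed; allow immediately
    return human_messages > 0 and human_messages % SAVE_INTERVAL == 0

assert should_block(15, reentry_flag=False)      # interval hit: block, prompt filing
assert not should_block(15, reentry_flag=True)   # re-entry: allow through
assert not should_block(14, reentry_flag=False)  # between intervals: allow
```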
Install (Claude Code)
Run setup once after building:
```
./memoryweb setup --hooks-dir /path/to/hooks
```

Or install manually:

```
chmod +x hooks/memoryweb_save_hook.sh hooks/memoryweb_precompact_hook.sh
```

Add to ~/.claude/settings.local.json:
```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "/path/to/hooks/memoryweb_save_hook.sh",
            "env": {
              "MEMORYWEB_DB": "/path/to/your.db"
            }
          }
        ]
      }
    ],
    "PreCompact": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "/path/to/hooks/memoryweb_precompact_hook.sh",
            "env": {
              "MEMORYWEB_DB": "/path/to/your.db"
            }
          }
        ]
      }
    ]
  }
}
```

Restart Claude Code to activate.
Configuration
| Variable | Default | Purpose |
| --- | --- | --- |
| `SAVE_INTERVAL` | `15` | Human messages between filing prompts. |
| `MEMORYWEB_DB` | `~/.memoryweb.db` | Path to the SQLite database. |
| | | Path to the memoryweb binary (used by the hook to run `memoryweb dream`). |
Token cost
Unlike passive hooks, these cost tokens because the model must actually produce quality nodes. Expect one short filing exchange per trigger — typically under 1,000 tokens for a focused session.
GitHub Copilot (VS Code)
GitHub Copilot in VS Code supports the same Stop and PreCompact hook events in the same JSON format. VS Code loads hooks from .github/hooks/*.json in your workspace, as well as from ~/.claude/settings.json and .claude/settings.local.json.
Make the scripts executable first:
```
chmod +x hooks/memoryweb_save_hook.sh hooks/memoryweb_precompact_hook.sh
```

Create .github/hooks/memoryweb.json in your repository:
```json
{
  "hooks": {
    "Stop": [
      {
        "type": "command",
        "command": "/path/to/hooks/memoryweb_save_hook.sh",
        "env": {
          "MEMORYWEB_DB": "/path/to/your.db"
        }
      }
    ],
    "PreCompact": [
      {
        "type": "command",
        "command": "/path/to/hooks/memoryweb_precompact_hook.sh",
        "env": {
          "MEMORYWEB_DB": "/path/to/your.db"
        }
      }
    ]
  }
}
```

VS Code loads the hooks automatically — no restart needed. If you have already installed the Claude Code hooks via ~/.claude/settings.local.json, VS Code Copilot picks them up from there without any additional configuration.
Other tools
Claude Desktop does not support hooks. Add session-start and filing instructions to your system prompt manually.
GitHub Copilot cloud agent (the coding agent that runs on GitHub.com) uses a different hook format and event model that does not include Stop or PreCompact. Add filing instructions to your system prompt for that surface instead.