ai-memory
        _
   __ _(_)    _ __ ___   ___ _ __ ___   ___  _ __ _   _
  / _` | |___| '_ ` _ \ / _ \ '_ ` _ \ / _ \| '__| | | |
 | (_| | |___| | | | | |  __/ | | | | | (_) | |  | |_| |
  \__,_|_|   |_| |_| |_|\___|_| |_| |_|\___/|_|   \__, |
              universal AI memory                 |___/
ai-memory is a persistent memory system for AI assistants. It works with any AI that supports MCP -- Claude, ChatGPT, Grok, Llama, and more. It stores what your AI learns in a local SQLite database, ranks memories by relevance when recalling, and auto-promotes important knowledge to permanent storage. Install it once, and every AI assistant you use remembers your architecture, your preferences, your corrections -- forever.
Zero token cost until recall. Unlike built-in memory systems (Claude Code auto-memory, ChatGPT memory) that load your entire memory into every conversation -- burning tokens and money on every message -- ai-memory uses zero context tokens until the AI explicitly calls memory_recall. Only relevant memories come back, ranked by a 6-factor scoring algorithm. TOON format (Token-Oriented Object Notation) cuts response tokens by another 40-60% by eliminating repeated field names -- 3 memories in JSON = 1,600 bytes; in TOON = 626 bytes (61% smaller); in TOON compact = 336 bytes (79% smaller). For Claude Code users: disable auto-memory ("autoMemoryEnabled": false in settings.json) and replace it with ai-memory to stop paying for 200+ lines of memory context on every single message.
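The savings come from writing each field name once instead of once per record. A rough Python illustration of the idea (this mimics the shape of a header-once tabular encoding; it is not the actual TOON grammar):

```python
import json

# Three memories with identical field names -- JSON repeats every key per record.
memories = [
    {"id": 1, "tier": "long", "title": "DB choice", "content": "We use PostgreSQL"},
    {"id": 2, "tier": "mid", "title": "Sprint goal", "content": "Ship auth flow"},
    {"id": 3, "tier": "long", "title": "Style rule", "content": "Prefer explicit types"},
]

as_json = json.dumps(memories)

# Header-once sketch: field names appear a single time, then one row per record.
fields = list(memories[0])
rows = ["|".join(str(m[f]) for f in fields) for m in memories]
as_tabular = "memories[3]{" + ",".join(fields) + "}:\n" + "\n".join(rows)

print(len(as_json), len(as_tabular))  # the tabular form is substantially smaller
assert len(as_tabular) < len(as_json)
```

The exact percentages above depend on field count and value length; the longer the field names relative to the values, the bigger the win.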
Compatible AI Platforms
ai-memory integrates with any AI platform that supports the Model Context Protocol (MCP). MCP is the universal standard for connecting AI assistants to external tools and data sources.
Platform | Integration Method | Config Format | Status |
Claude Code (Anthropic) | MCP stdio | JSON (~/.claude.json) | Fully supported |
Codex CLI (OpenAI) | MCP stdio | TOML (~/.codex/config.toml) | Fully supported |
Gemini CLI (Google) | MCP stdio | JSON (~/.gemini/settings.json) | Fully supported |
Grok (xAI) | MCP remote HTTPS | API-level | Fully supported |
Cursor IDE | MCP stdio | JSON (~/.cursor/mcp.json) | Fully supported |
Windsurf (Codeium) | MCP stdio | JSON (mcp_config.json) | Fully supported |
Continue.dev | MCP stdio | YAML (config.yaml) | Fully supported |
Llama Stack (META) | MCP remote HTTP | YAML / Python SDK | Fully supported |
Any MCP client | MCP stdio or HTTP | Varies | Universal |
MCP is the primary integration layer. For AI platforms that do not yet support MCP natively, the HTTP API (20 endpoints on localhost) and the CLI (25 commands) provide universal access -- any AI, script, or automation that can make HTTP calls or run shell commands can use ai-memory.
Install in 60 Seconds
Pre-built binaries require no dependencies. Building from source needs Rust and a C compiler.
Fastest: Pre-built binary (no Rust required)
# macOS / Linux
curl -fsSL https://raw.githubusercontent.com/alphaonedev/ai-memory-mcp/main/install.sh | sh
# Windows (PowerShell)
irm https://raw.githubusercontent.com/alphaonedev/ai-memory-mcp/main/install.ps1 | iex

Step 1: Install Rust (skip if using pre-built binaries)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

Follow the prompts, then restart your terminal (or run source ~/.cargo/env).
Step 2: From source (requires Rust)
cargo install --git https://github.com/alphaonedev/ai-memory-mcp.git

This compiles the binary and puts it in your PATH. It takes a minute or two.
Build dependencies for source builds:
Ubuntu/Debian:
sudo apt-get install build-essential pkg-config

Fedora/RHEL:
sudo dnf install gcc pkg-config
Step 3: Connect your AI
Configuration varies by platform. Find yours below:
Claude Code supports three MCP configuration scopes:
Scope | File | Applies to |
User (global) | ~/.claude.json | All projects on your machine |
Project (shared) | .mcp.json | Everyone on the project |
Local (private) | ~/.claude.json (per-project entry) | One project, just you |
User scope (recommended — works everywhere):
Add the mcpServers key to ~/.claude.json (macOS/Linux) or %USERPROFILE%\.claude.json (Windows):
{
"mcpServers": {
"memory": {
"command": "ai-memory",
"args": ["--db", "~/.claude/ai-memory.db", "mcp", "--tier", "semantic"]
}
}
}

Note: ~/.claude.json likely already exists with other settings. Merge the mcpServers key into the existing file -- do not overwrite it.
Project scope (shared with team):
Create .mcp.json in your project root:
{
"mcpServers": {
"memory": {
"command": "ai-memory",
"args": ["--db", "~/.claude/ai-memory.db", "mcp", "--tier", "semantic"]
}
}
}

Windows paths: Use forward slashes or escaped backslashes in --db. Example: "--db", "C:/Users/YourName/.claude/ai-memory.db".
Tier flag: The --tier flag selects the feature tier: keyword, semantic (default), smart, or autonomous. Smart and autonomous tiers require Ollama running locally. The --tier flag must be passed in the args -- the config.toml tier setting is not used when the MCP server is launched by an AI client.
Important: MCP servers are not configured in settings.json or settings.local.json -- those files do not support mcpServers.
Add to ~/.codex/config.toml (global) or .codex/config.toml (project). Windows: %USERPROFILE%\.codex\config.toml. Override with CODEX_HOME env var.
[mcp_servers.memory]
command = "ai-memory"
args = ["--db", "~/.local/share/ai-memory/memories.db", "mcp", "--tier", "semantic"]
enabled = true

Or add via CLI: codex mcp add memory -- ai-memory --db ~/.local/share/ai-memory/memories.db mcp --tier semantic
Notes: Codex uses TOML format with the underscored key mcp_servers (not camelCase, not hyphenated). Supports env (key/value pairs), env_vars (list to forward), enabled_tools, disabled_tools, startup_timeout_sec, tool_timeout_sec. Use /mcp in the TUI to view server status. See Codex MCP docs.
Add to ~/.gemini/settings.json (user) or .gemini/settings.json (project). Windows: %USERPROFILE%\.gemini\settings.json.
{
"mcpServers": {
"memory": {
"command": "ai-memory",
"args": ["--db", "~/.local/share/ai-memory/memories.db", "mcp", "--tier", "semantic"],
"timeout": 30000
}
}
}

Or add via CLI: gemini mcp add memory ai-memory -- --db ~/.local/share/ai-memory/memories.db mcp --tier semantic
Notes: Avoid underscores in server names (use hyphens). Tool names are auto-prefixed as mcp_memory_<toolName>. Env vars in the env field support $VAR / ${VAR} (all platforms) and %VAR% (Windows). Gemini sanitizes sensitive patterns from inherited env unless explicitly declared. Add "trust": true to skip confirmation prompts. CLI management: gemini mcp list / remove / enable / disable. See Gemini CLI MCP docs.
Add to ~/.cursor/mcp.json (global) or .cursor/mcp.json (project). Windows: %USERPROFILE%\.cursor\mcp.json. Project config overrides global for same-named servers.
{
"mcpServers": {
"memory": {
"command": "ai-memory",
"args": ["--db", "~/.local/share/ai-memory/memories.db", "mcp", "--tier", "semantic"]
}
}
}

Notes: Restart Cursor after editing mcp.json. Verify server status in Settings > Tools & MCP (green dot = connected). Supports env, envFile, and ${env:VAR_NAME} interpolation (env var interpolation can be unreliable for shell profile variables -- use envFile as a workaround). ~40 tool limit across all MCP servers. See Cursor MCP docs.
Add to ~/.codeium/windsurf/mcp_config.json (global only — no project-level scope). Windows: %USERPROFILE%\.codeium\windsurf\mcp_config.json.
{
"mcpServers": {
"memory": {
"command": "ai-memory",
"args": ["--db", "~/.local/share/ai-memory/memories.db", "mcp", "--tier", "semantic"]
}
}
}

Notes: Supports ${env:VAR_NAME} interpolation in command, args, env, serverUrl, url, and headers. 100 tool limit across all MCP servers. Can also add via MCP Marketplace or Settings > Cascade > MCP Servers. See Windsurf MCP docs.
Add to ~/.continue/config.yaml (user) or .continue/mcpServers/ directory in project root (per-server YAML/JSON files). Windows: %USERPROFILE%\.continue\config.yaml.
mcpServers:
  - name: memory
    command: ai-memory
    args:
      - "--db"
      - "~/.local/share/ai-memory/memories.db"
      - "mcp"
      - "--tier"
      - "semantic"

Notes: MCP tools only work in agent mode. Supports ${{ secrets.SECRET_NAME }} for secret interpolation. The project-level .continue/mcpServers/ directory auto-detects JSON configs from other tools (Claude Code, Cursor, etc.). See Continue MCP docs.
Grok connects to MCP servers over HTTPS (remote only, no stdio). No config file — servers are specified per API request.
ai-memory serve --host 127.0.0.1 --port 9077
# Expose via HTTPS reverse proxy (nginx, caddy, cloudflare tunnel, etc.)

Then add the MCP server to your Grok API call:
curl https://api.x.ai/v1/responses \
-H "Authorization: Bearer $XAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "grok-3",
"tools": [{
"type": "mcp",
"server_url": "https://your-server.example.com/mcp",
"server_label": "memory",
"server_description": "Persistent AI memory with recall and search",
"allowed_tools": ["memory_store", "memory_recall", "memory_search"]
}],
"input": "What do you remember about our project?"
}'

Requirements: HTTPS required. server_label is required. Supports Streamable HTTP and SSE transports. Optional: allowed_tools, authorization, headers. Works with xAI SDK, OpenAI-compatible Responses API, and Voice Agent API. See xAI Remote MCP docs.
Llama Stack registers MCP servers as toolgroups. No standardized config file path — deployment-specific.
ai-memory serve --host 127.0.0.1 --port 9077

Python SDK:
client.toolgroups.register(
provider_id="model-context-protocol",
toolgroup_id="mcp::memory",
mcp_endpoint={"uri": "http://localhost:9077/sse"}
)

Or declaratively in run.yaml:
tool_groups:
  - toolgroup_id: mcp::memory
    provider_id: model-context-protocol
    mcp_endpoint:
      uri: "http://localhost:9077/sse"

Notes: Supports ${env.VAR_NAME} interpolation in run.yaml. Transport is migrating from SSE to Streamable HTTP. See Llama Stack Tools docs.
ai-memory speaks MCP over stdio (JSON-RPC 2.0). Point your client at:
command: ai-memory
args: ["--db", "/path/to/ai-memory.db", "mcp"]

For HTTP-only clients, start the REST API:
ai-memory serve
# 20 endpoints at http://127.0.0.1:9077/api/v1/

Step 4: Done. Test it.
Restart your AI assistant. If using MCP, it now has 17 memory tools. Ask it: "Store a memory that my favorite language is Rust." Then in a new conversation, ask: "What is my favorite language?" It will remember.
What Does It Do?
AI assistants forget everything between conversations. ai-memory fixes that.
It runs as an MCP (Model Context Protocol) tool server -- a background process that your AI talks to natively. When your AI learns something important, it stores it. When it needs context, it recalls relevant memories ranked by a 6-factor scoring algorithm. Memories live in three tiers:
Short-term (6 hours) -- throwaway context like current debugging state
Mid-term (7 days) -- working knowledge like sprint goals and recent decisions
Long-term (permanent) -- architecture, user preferences, hard-won lessons
Memories that keep getting accessed automatically promote from mid to long-term. Each recall extends the TTL. Priority increases with usage. The system is self-curating.
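The lifecycle rules above (TTL extension, auto-promotion, priority reinforcement) can be sketched as a small simulation. This is illustrative Python using the thresholds stated in this README; the real logic lives in the Rust server, and the class and field names here are assumptions:

```python
from datetime import datetime, timedelta

# TTLs and extensions as described above: short 6h (+1h per recall),
# mid 7d (+1d per recall), long permanent.
TTL = {"short": timedelta(hours=6), "mid": timedelta(days=7)}
EXTEND = {"short": timedelta(hours=1), "mid": timedelta(days=1)}

class Memory:
    def __init__(self, tier, now):
        self.tier = tier
        self.access_count = 0
        self.priority = 1
        self.expires = None if tier == "long" else now + TTL[tier]

    def recall(self, now):
        self.access_count += 1
        if self.tier != "long":
            self.expires += EXTEND[self.tier]           # each recall extends the TTL
        if self.tier == "mid" and self.access_count >= 5:
            self.tier, self.expires = "long", None      # auto-promote: mid -> long
        if self.access_count % 10 == 0:
            self.priority = min(10, self.priority + 1)  # +1 priority per 10 accesses

now = datetime(2026, 1, 1)
m = Memory("mid", now)
for _ in range(5):
    m.recall(now)
print(m.tier)  # promoted to long after 5 accesses
```

The point of the simulation: no curation command is ever needed; usage alone moves knowledge toward permanence.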
Beyond MCP, ai-memory also exposes a full HTTP REST API (20 endpoints on port 9077) and a complete CLI (25 commands) for direct interaction, scripting, and integration with any AI platform or tool.
Features
Core
MCP tool server -- 17 tools over stdio JSON-RPC, compatible with any MCP client
Three-tier memory -- short (6h TTL), mid (7d TTL), long (permanent)
Full-text search -- SQLite FTS5 with ranked retrieval
Hybrid recall -- FTS5 keyword + cosine similarity with a fixed 60/40 semantic/keyword blend (weights 0.6 and 0.4)
6-factor recall scoring -- FTS relevance + priority + access frequency + confidence + tier boost + recency decay
Auto-promotion -- memories accessed 5+ times promote from mid to long
TTL extension -- each recall extends expiry (short +1h, mid +1d)
Priority reinforcement -- +1 every 10 accesses (max 10)
Contradiction detection -- warns when storing memories that conflict with existing ones
Deduplication -- upsert on title+namespace, tier never downgrades
Confidence scoring -- 0.0-1.0 certainty factored into ranking
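The hybrid blend above is a weighted sum of two scores. A minimal sketch: the cosine-similarity part is standard, while how the FTS5 rank is normalized into keyword_score is an assumption here:

```python
import math

def cosine(a, b):
    # Standard cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def hybrid_score(query_vec, mem_vec, keyword_score):
    # Fixed 60/40 blend: 0.6 * semantic similarity + 0.4 * keyword score.
    # keyword_score is assumed already normalized to [0, 1].
    return 0.6 * cosine(query_vec, mem_vec) + 0.4 * keyword_score

s = hybrid_score([1.0, 0.0], [1.0, 0.0], 0.5)
print(round(s, 3))  # 0.8
```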
Organization
Namespaces -- isolate memories per project (auto-detected from git remote)
Memory linking -- typed relations: related_to, supersedes, contradicts, derived_from
Consolidation -- merge multiple memories into a single long-term summary
Auto-consolidation -- group by namespace+tag, auto-merge groups above threshold
Contradiction resolution -- mark one memory as superseding another, demote the loser
Forget by pattern -- bulk delete by namespace + FTS pattern + tier
Source tracking -- tracks origin: user, claude, hook, api, cli, import, consolidation, system
Tagging -- comma-separated tags with filter support
Interfaces
20 HTTP endpoints -- full REST API on 127.0.0.1:9077 (works with any AI or tool)
25 CLI commands -- complete CLI with identical capabilities
17 MCP tools -- native integration for any MCP-compatible AI
Interactive REPL shell -- recall, search, list, get, stats, namespaces, delete with color output
JSON output -- --json flag on all CLI commands
Operations
Multi-node sync -- pull, push, or bidirectional merge between database files
Import/Export -- full JSON roundtrip preserving memory links
Garbage collection -- automatic background expiry every 30 minutes
Graceful shutdown -- SIGTERM/SIGINT checkpoints WAL for clean exit
Deep health check -- verifies DB accessibility and FTS5 integrity
Shell completions -- bash, zsh, fish
Man page -- ai-memory man generates roff to stdout
Time filters -- --since / --until on list and search
Human-readable ages -- "2h ago", "3d ago" in CLI output
Color CLI output -- ANSI tier labels (red/yellow/green), priority bars, bold titles, cyan namespaces
Quality
161 tests -- 118 unit tests across all 15 modules (db 29, mcp 12, config 9, main 9, mine 9, validate 8, reranker 7, color 6, errors 6, models 6, toon 6, embeddings 5, hnsw 4, llm 2) + 43 integration tests. 15/15 modules have unit tests — 95%+ coverage.
LongMemEval benchmark -- 97.8% R@5 (489/500), 99.0% R@10, 99.8% R@20 on ICLR 2025 LongMemEval-S dataset. 499/500 at R@20. Pure FTS5 keyword achieves 97.0% R@5 in 2.2 seconds (232 q/s). LLM query expansion pushes to 97.8% R@5. Zero cloud API costs. See benchmark details.
MCP Prompts -- recall-first and memory-workflow prompts teach AI clients to use memory proactively
TOON-default -- recall/list/search responses use TOON compact by default (79% smaller than JSON)
Criterion benchmarks -- insert, recall, search at 1K scale
GitHub Actions CI/CD -- fmt, clippy, test, build on Ubuntu + macOS, release on tag
ML and LLM Dependencies (semantic tier+)
candle-core, candle-nn, candle-transformers -- Hugging Face Candle ML framework for native Rust inference
hf-hub -- download models from Hugging Face Hub
tokenizers -- Hugging Face tokenizers for text preprocessing
instant-distance -- approximate nearest neighbor search
reqwest -- HTTP client for Ollama API communication (smart/autonomous tiers)
Architecture
+-------------+ +-------------+ +-------------+ +-------------+
| Claude Code | | ChatGPT | | Grok | | Llama |
| (Anthropic)| | (OpenAI) | | (xAI) | | (META) |
+------+------+ +------+------+ +------+------+ +------+------+
| | | |
+--------+--------+--------+--------+--------+--------+
| | |
+-----v------+ +------v--------+ +----v----------+
| CLI | | MCP Server | | HTTP API |
| 25 commands | | stdio JSON-RPC| | 127.0.0.1:9077|
+-----+------+ +------+--------+ +----+----------+
| | |
+--------+--------+--------+--------+
| |
+-----v------+ +-----v------+
| Validation | | Errors |
| validate.rs| | errors.rs |
+-----+------+ +-----+------+
| |
+--------+--------+
|
+---------v---------+
| SQLite + FTS5 |
| WAL mode |
+---+-----+-----+---+
| | |
+----+ +--+--+ +----+
|short| | mid | | long|
|6h | | 7d | | inf |
+-----+ +-----+ +-----+
| ^
| | auto-promote
+-----+ (5+ accesses)
Embedding Pipeline (semantic tier+):
+--------------------------------------------------+
| Candle ML Framework (candle-core, candle-nn) |
| all-MiniLM-L6-v2 model (384-dim vectors) |
| Vectors stored as BLOBs in SQLite |
| Hybrid recall: FTS5 keyword + cosine similarity |
+--------------------------------------------------+
LLM Pipeline (smart/autonomous tier):
+--------------------------------------------------+
| Ollama (local) |
| smart: Gemma 4 E2B (query expansion, tagging) |
| autonomous: Gemma 4 E4B + cross-encoder rerank |
+--------------------------------------------------+

Integration Methods
MCP (Primary -- for MCP-compatible AI platforms)
MCP is the recommended integration. Your AI gets 17 native memory tools with zero glue code. Configure the MCP server in your AI platform's config:
{
"mcpServers": {
"memory": {
"command": "ai-memory",
"args": ["--db", "~/.claude/ai-memory.db", "mcp"]
}
}
}

HTTP API (Universal -- for any AI or tool)
Start the HTTP server for REST API access. Any AI, script, or automation that can make HTTP calls can use this:
ai-memory serve
# 20 endpoints at http://127.0.0.1:9077/api/v1/

CLI (Universal -- for scripting and direct use)
The CLI works standalone or as a building block for AI integrations that run shell commands:
ai-memory store --tier long --title "Architecture decision" --content "We use PostgreSQL"
ai-memory recall "database choice"
ai-memory search "PostgreSQL"

Feature Tiers
ai-memory supports 4 feature tiers, selected at startup with ai-memory mcp --tier <tier>. Higher tiers add ML capabilities at the cost of disk and RAM:
Tier | Recall Method | Extra Capabilities | Approx. Overhead |
keyword | FTS5 only | Baseline 13 tools | 0 MB |
semantic | FTS5 + cosine similarity (hybrid) | MiniLM-L6-v2 embeddings (384-dim), HNSW index, 14 tools | ~256 MB |
smart | Hybrid + LLM query expansion | + nomic-embed-text (768-dim) + Gemma 4 E2B via Ollama: query expansion, auto-tagging, contradiction detection; 17 tools | ~1 GB |
autonomous | Hybrid + LLM expansion + cross-encoder reranking | + Gemma 4 E4B via Ollama, neural cross-encoder (ms-marco-MiniLM), memory reflection, 17 tools | ~4 GB |
Capability Matrix
Every capability mapped to its minimum tier. Each tier includes all capabilities from the tiers below it.
Capability | keyword | semantic | smart | autonomous |
Search & Recall | ||||
FTS5 keyword search | Yes | Yes | Yes | Yes |
Semantic embedding (cosine similarity) | -- | Yes | Yes | Yes |
Hybrid recall (FTS5 + cosine, 60/40 semantic/keyword blend) | -- | Yes | Yes | Yes |
HNSW nearest-neighbor index | -- | Yes | Yes | Yes |
LLM query expansion | -- | -- | Yes | Yes |
Neural cross-encoder reranking | -- | -- | -- | Yes |
Memory Management | ||||
Store, update, delete, promote, link | Yes | Yes | Yes | Yes |
Manual consolidation | Yes | Yes | Yes | Yes |
Auto-consolidation (LLM summary) | -- | -- | Yes | Yes |
Auto-tagging (LLM) | -- | -- | Yes | Yes |
Contradiction detection (LLM) | -- | -- | Yes | Yes |
Autonomous memory reflection | -- | -- | -- | Yes |
Models | ||||
Embedding model | -- | MiniLM-L6-v2 (384d) | nomic-embed-text (768d) | nomic-embed-text (768d) |
LLM | -- | -- | gemma4:e2b (~7.2GB) | gemma4:e4b (~9.6GB) |
Resources | ||||
RAM | 0 MB | ~256 MB | ~1 GB | ~4 GB |
External dependencies | None | None | Ollama | Ollama |
MCP tools exposed | 13 | 14 | 17 | 17 |
Semantic tier (default) bundles the Candle ML framework and downloads the all-MiniLM-L6-v2 model on first run (~90 MB). Smart and autonomous tiers require Ollama running locally.
Tiers gate features, not models. The --tier flag controls which tools are exposed. The LLM model is independently configurable via llm_model in ~/.config/ai-memory/config.toml. For example, run autonomous tier (all 17 tools + reranker) with the faster e2b model:
# ~/.config/ai-memory/config.toml
tier = "autonomous" # all features enabled
llm_model = "gemma4:e2b" # faster model (46 tok/s vs 26 tok/s for e4b)

The --tier flag must be passed in the MCP args -- the config.toml tier setting is not used when the server is launched by an AI client.
# Keyword -- FTS5 only
ai-memory mcp --tier keyword
# Semantic -- hybrid recall with embeddings
ai-memory mcp --tier semantic
# Smart -- adds LLM-powered query expansion, auto-tagging, contradiction detection
ai-memory mcp --tier smart
# Autonomous -- adds cross-encoder reranking
ai-memory mcp --tier autonomous

The memory_capabilities tool reports the active tier, loaded models, and available capabilities at runtime.
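MCP tool calls are plain JSON-RPC 2.0 messages over stdio. A minimal sketch of the request an MCP client would send to invoke memory_capabilities (tools/call is MCP's standard tool-invocation method; the framing here is one JSON message per line):

```python
import json

# The JSON-RPC 2.0 envelope an MCP client writes to the server's stdin.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "memory_capabilities", "arguments": {}},
}
line = json.dumps(request)  # serialized and sent as a single line
parsed = json.loads(line)
print(parsed["method"], parsed["params"]["name"])
```

Your AI client builds these messages for you; this only shows what travels over the wire.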
MCP Tools
These 17 tools are available to any MCP-compatible AI when configured as an MCP server:
Tool | Description |
memory_store | Store a new memory (deduplicates by title+namespace, reports contradictions) |
memory_recall | Recall memories relevant to a context (fuzzy OR search, ranked by 6 factors) |
memory_search | Search memories by exact keyword match (AND semantics) |
| List memories with optional filters (namespace, tier, tags, date range) |
| Get a specific memory by ID with its links |
| Update an existing memory by ID (partial update) |
| Delete a memory by ID |
| Promote a memory to long-term (permanent, clears expiry) |
| Bulk delete by pattern, namespace, or tier |
| Create a typed link between two memories |
| Get all links for a memory |
| Merge multiple memories into one long-term summary |
| Get memory store statistics |
memory_capabilities | Report active feature tier, loaded models, and available capabilities |
| Use LLM to expand search query into related terms (smart+ tier) |
| Use LLM to auto-generate tags for a memory (smart+ tier) |
| Use LLM to check if two memories contradict (smart+ tier) |
HTTP API
20 endpoints on 127.0.0.1:9077. Start with ai-memory serve.
Method | Endpoint | Description |
GET | | Health check (verifies DB + FTS5 integrity) |
GET | | List memories (supports namespace, tier, tags, since, until, limit) |
POST | | Create a memory |
POST | | Bulk create memories (with limits) |
GET | | Get a memory by ID |
PUT | | Update a memory by ID |
DELETE | | Delete a memory by ID |
POST | | Promote a memory to long-term |
GET | | AND keyword search |
GET | | Recall by context (GET with query params) |
POST | | Recall by context (POST with JSON body) |
POST | | Bulk delete by pattern/namespace/tier |
POST | | Consolidate memories into one |
POST | | Create a link between memories |
GET | | Get links for a memory |
GET | | List all namespaces |
GET | | Memory store statistics |
POST | | Trigger garbage collection |
GET | | Export all memories + links as JSON |
POST | | Import memories + links from JSON |
CLI Commands
25 commands. Run ai-memory <command> --help for details on any command.
Command | Description |
mcp | Run as MCP tool server over stdio (primary integration path) |
serve | Start the HTTP daemon on port 9077 |
store | Store a new memory (deduplicates by title+namespace) |
| Update an existing memory by ID |
recall | Fuzzy OR search with ranked results + auto-touch |
search | AND search for precise keyword matches. Max 200 items per request. |
| Retrieve a single memory by ID (includes links) |
| Browse memories with filters (namespace, tier, tags, date range). Max 200 items per request. |
| Delete a memory by ID |
| Promote a memory to long-term (clears expiry) |
| Bulk delete by pattern + namespace + tier |
| Link two memories (related_to, supersedes, contradicts, derived_from) |
| Merge multiple memories into one long-term summary |
| Resolve a contradiction: mark winner, demote loser |
| Interactive REPL with color output |
| Sync memories between two database files (pull/push/merge) |
| Group memories by namespace+tag, merge groups above threshold |
| Run garbage collection on expired memories |
| Overview of memory state (counts, tiers, namespaces, links, DB size) |
| List all namespaces with memory counts |
| Export all memories and links as JSON |
| Import memories and links from JSON (stdin) |
| Generate shell completions (bash, zsh, fish) |
man | Generate roff man page to stdout |
mine | Import memories from historical conversations (Claude, ChatGPT, Slack exports) |
The top-level ai-memory binary also accepts global flags:
Flag | Description |
--db | Database path |
--json | JSON output on all commands (machine-parseable output) |
The store subcommand accepts additional flags:
Flag | Description |
| Who created this memory (user, claude, hook, api, cli, import, consolidation, system) |
| RFC3339 expiry timestamp |
| TTL in seconds (alternative to an absolute expiry timestamp) |
The mcp subcommand accepts an additional flag:
Flag | Description |
--tier | Feature tier: keyword, semantic, smart, or autonomous (default: semantic) |
Recall Scoring
Every recall query ranks memories by 6 factors:
score = (fts_relevance * -1)
+ (priority * 0.5)
+ (MIN(access_count, 50) * 0.1)
+ (confidence * 2.0)
+ tier_boost
+ recency_decay

Factor | Weight | Notes |
FTS relevance | -1.0x | SQLite FTS5 rank (negative = better match) |
Priority | 0.5x | User-assigned 1-10 scale |
Access count | 0.1x | How often recalled (capped at 50 for scoring) |
Confidence | 2.0x | 0.0-1.0 certainty score |
Tier boost | +3.0 / +1.0 / +0.0 | long / mid / short |
Recency decay | | Recent memories rank higher |
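The formula can be sketched directly in Python. The recency-decay term is omitted here because its exact expression is not given above; everything else maps term-for-term onto the weights in the table:

```python
def recall_score(fts_rank, priority, access_count, confidence, tier):
    # 6-factor recall scoring sketch (recency decay omitted -- exact form not shown).
    # fts_rank is the SQLite FTS5 rank, where more negative means a better match.
    tier_boost = {"long": 3.0, "mid": 1.0, "short": 0.0}[tier]
    return (fts_rank * -1
            + priority * 0.5
            + min(access_count, 50) * 0.1   # access count capped at 50
            + confidence * 2.0
            + tier_boost)

# A well-matched, frequently used, high-confidence long-term memory
# outranks a weak short-term match.
strong = recall_score(-2.0, 5, 60, 0.9, "long")
weak = recall_score(-0.5, 1, 0, 0.5, "short")
print(strong > weak)  # True
```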
Memory Tiers
Tier | TTL | Use Case | Examples |
short | 6 hours | Throwaway context | Current debugging state, temp variables, error traces |
mid | 7 days | Working knowledge | Sprint goals, recent decisions, current branch purpose |
long | Permanent | Hard-won knowledge | Architecture, user preferences, corrections, conventions |
Automatic Behaviors
TTL extension on recall: short memories get +1 hour, mid memories get +1 day
Auto-promotion: mid-tier memories accessed 5+ times promote to long (expiry cleared)
Priority reinforcement: every 10 accesses, priority increases by 1 (capped at 10)
Contradiction detection: warns when a new memory conflicts with an existing one in the same namespace
Deduplication: upsert on title+namespace; tier never downgrades on update
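The deduplication rule can be sketched as a tiny upsert over a dict keyed by (title, namespace). This is an illustrative model, not the server's SQLite implementation; the upsert helper and tier ranks are assumptions:

```python
# Tier ordering for the "never downgrades" rule.
RANK = {"short": 0, "mid": 1, "long": 2}

def upsert(store, title, namespace, tier, content):
    # Dedup key is (title, namespace); an existing higher tier is kept.
    key = (title, namespace)
    if key in store:
        old_tier, _ = store[key]
        if RANK[tier] < RANK[old_tier]:
            tier = old_tier  # tier never downgrades on update
    store[key] = (tier, content)

store = {}
upsert(store, "DB choice", "proj", "long", "We use PostgreSQL")
upsert(store, "DB choice", "proj", "short", "Maybe SQLite?")  # same key, lower tier
print(store[("DB choice", "proj")])  # content updated, tier stays 'long'
```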
Security
ai-memory includes hardening across all input paths:
Transaction safety -- all multi-step database operations use transactions; no partial writes on failure
FTS injection prevention -- user input is sanitized before reaching FTS5 queries; special characters are escaped
Error sanitization -- internal database paths and system details are stripped from error responses; clients see structured error types (NOT_FOUND, VALIDATION_FAILED, DATABASE_ERROR, CONFLICT)
Body size limits -- HTTP request bodies are capped at 50 MB via Axum's DefaultBodyLimit
Bulk operation limits -- bulk create endpoints enforce maximum batch sizes to prevent resource exhaustion
CORS -- permissive CORS layer enabled for localhost development workflows
Input validation -- every write path validates title length, content length, namespace format, source values, priority range (1-10), confidence range (0.0-1.0), tag format, tier values, relation types, and ID format
Link validation in sync -- all links are validated (both IDs, relation type, no self-links) before import during sync operations
Thread-safe color -- terminal color detection uses AtomicBool for safe concurrent access
Local-only HTTP -- the HTTP server binds to 127.0.0.1 by default; not exposed to the network
WAL mode -- SQLite Write-Ahead Logging for safe concurrent reads during writes
Documentation
Guide | Audience |
Getting it running (includes MCP setup for multiple AI platforms) | |
AI assistant users who want persistent memory | |
Building on or contributing to ai-memory | |
Deploying, monitoring, and troubleshooting | |
Visual overview with animated diagrams |
License
Copyright (c) 2026 AlphaOne LLC. All rights reserved.
Licensed under the MIT License.
THIS SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.