memory-mcp
Persistent, searchable, versioned memory for AI agents — backed by Valkey (Redis-compatible), exposed as an MCP server over HTTP.
Works with any MCP-compatible agent: Claude Code, Cursor, VS Code, and others.
What it does
Stores named memory entries with tags, types, and project scopes
Tag-intersection search, type/project filtering, and substring search
Hit tracking (frequently accessed entries float to the top)
Full version history with rollback
Prometheus metrics endpoint
Optional bearer token auth
Quick start
```sh
cp .env.example .env
# Optional: set MEMORY_MCP_AUTH_TOKEN in .env (see Auth section)
docker compose up -d
```

This pulls the pre-built image from GHCR. The MCP server is now available at `http://127.0.0.1:3106/mcp`.
To build locally instead:
```sh
docker compose build
docker compose up -d
```

Using an existing Redis or Valkey
By default docker compose up -d starts a bundled Valkey container. To connect to an existing Redis or Valkey instance instead, set VALKEY_URL and start only the memory-mcp service:
```sh
# .env
VALKEY_URL=redis://your-host:6379
```

```sh
docker compose up -d memory-mcp
```

Any Redis-compatible server (Redis 6+, Valkey, KeyDB, Upstash via `rediss://`, etc.) works. The server uses only basic data structures: hashes, lists, and sets.
Agent setup
Copy AGENTS.md from this repo into your project root. It tells your agent how to use the memory tools, what to store, and when.
Then register the MCP server with your agent client:
Claude Code
```sh
# Without auth
claude mcp add memory --transport http http://127.0.0.1:3106/mcp

# With auth
claude mcp add memory --transport http http://127.0.0.1:3106/mcp \
  --header "Authorization: Bearer your-token"
```

Or add manually to `~/.claude.json`:
```json
{
  "mcpServers": {
    "memory": {
      "type": "http",
      "url": "http://127.0.0.1:3106/mcp",
      "headers": { "Authorization": "Bearer your-token" }
    }
  }
}
```

Cursor
Add to ~/.cursor/mcp.json (global) or .cursor/mcp.json (project):
```json
{
  "mcpServers": {
    "memory": {
      "url": "http://127.0.0.1:3106/mcp",
      "headers": { "Authorization": "Bearer your-token" }
    }
  }
}
```

VS Code (GitHub Copilot, MCP extension)
Add to .vscode/mcp.json in your project:
```json
{
  "servers": {
    "memory": {
      "type": "http",
      "url": "http://127.0.0.1:3106/mcp",
      "headers": { "Authorization": "Bearer your-token" }
    }
  }
}
```

Omit the `headers` / `Authorization` line in any config if you are not using auth.
Configuration
Copy .env.example to .env and edit as needed.
| Variable | Default | Description |
| --- | --- | --- |
| | | Interface to bind on |
| | `3106` | Port exposed on the host |
| `MEMORY_MCP_AUTH_TOKEN` | (empty) | Bearer token for `POST /mcp` |
| | | Soft cap — warns on write when exceeded |
| `MAX_VERSIONS_PER_ENTRY` | | Max version snapshots per entry |
| | | Container memory cap |
| | | Valkey image to use |
Auth
By default the server runs unauthenticated. This is safe when bound to loopback (127.0.0.1) and accessed only from the local machine.
To enable auth:
```sh
# Generate a token
openssl rand -hex 32

# Add to .env
MEMORY_MCP_AUTH_TOKEN=your-generated-token

docker compose up -d
```

All requests to POST /mcp must then include:

```
Authorization: Bearer <token>
```

GET /health and GET /metrics are always unauthenticated.
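For illustration, here is how a client might generate a token and attach the bearer header from Python. This is a stdlib-only sketch: the request object is built but never sent, the endpoint URL is the quick-start default, and the `ping` method is just a placeholder JSON-RPC call.

```python
import json
import secrets
import urllib.request

# Equivalent of `openssl rand -hex 32`: 32 random bytes as 64 hex chars.
token = secrets.token_hex(32)

# Build (but do not send) an authenticated request to the MCP endpoint.
body = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "ping"}).encode()
req = urllib.request.Request(
    "http://127.0.0.1:3106/mcp",
    data=body,
    method="POST",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {token}",
    },
)
print(req.get_method(), req.full_url)
```

Sending it is then a matter of `urllib.request.urlopen(req)` (or any HTTP client) once the server is running.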
Available tools
| Tool | Description |
| --- | --- |
| | Search by tags (intersection), type, project, or text substring |
| | Fetch one entry by ID (increments hit counter) |
| | Create or update an entry (versioned on every write) |
| | List entries with optional type/project filter |
| | Delete an entry (tombstone version written first) |
| | View version history for an entry |
| | Restore an entry to a previous version |
| | Surface zero-hit stale entries for review (read-only) |
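Agent clients invoke these tools through standard MCP `tools/call` requests. A sketch of the JSON-RPC payload sent under the hood — the tool name `memory_search` and its argument names here are illustrative assumptions; check the server's `tools/list` response for the real schema:

```python
import json

# Hypothetical search call: tag intersection plus project and text filters.
payload = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "tools/call",
    "params": {
        "name": "memory_search",            # illustrative tool name
        "arguments": {
            "tags": ["caching", "valkey"],  # intersection: must match all
            "project": "my-app",
            "query": "decision",            # substring match
        },
    },
}
print(json.dumps(payload, indent=2))
```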
Memory types
pattern, decision, reference, feedback, incident, project, entity, state
Endpoints
| Method | Path | Auth | Description |
| --- | --- | --- | --- |
| POST | `/mcp` | if configured | MCP JSON-RPC endpoint |
| GET | `/health` | none | Health check |
| GET | `/metrics` | none | Prometheus metrics |
Data model
Each entry is stored as a Redis hash at mem:<id>:
| Field | Description |
| --- | --- |
| | Short descriptive title |
| | Full content |
| | Entry type |
| | Comma-separated tag list |
| | Who wrote it |
| | Project scope (empty = cross-project) |
| | ISO date of creation |
| | ISO date of last update |
| | Times retrieved (hit counter) |
| | Expiry in seconds (optional) |
Version history is stored in a Redis list at memver:<id> (newest-first, capped at MAX_VERSIONS_PER_ENTRY).
Tag, type, and project indexes are Redis sets (tag:<name>, type:<name>, project:<name>).
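A rough in-memory sketch of this key layout — plain Python dicts, lists, and sets standing in for the Redis structures; the field names, cap value, and function names are assumptions for illustration, not the server's actual code:

```python
# mem:<id>    -> hash (here: dict of fields)
# memver:<id> -> newest-first version list, capped
# tag:<name>  -> set of entry ids
MAX_VERSIONS_PER_ENTRY = 3  # assumed value for the sketch

hashes: dict[str, dict] = {}
versions: dict[str, list] = {}
tag_index: dict[str, set] = {}

def store(entry_id: str, fields: dict) -> None:
    # Snapshot the previous state first (LPUSH + LTRIM semantics).
    if entry_id in hashes:
        versions.setdefault(entry_id, []).insert(0, dict(hashes[entry_id]))
        del versions[entry_id][MAX_VERSIONS_PER_ENTRY:]
    hashes[entry_id] = fields
    for tag in fields.get("tags", "").split(","):
        tag_index.setdefault(tag, set()).add(entry_id)

def search_tags(*tags: str) -> set:
    # Tag-intersection search maps to SINTER over the tag:<name> sets.
    sets = [tag_index.get(t, set()) for t in tags]
    return set.intersection(*sets) if sets else set()

store("mem:1", {"title": "Use Valkey", "tags": "caching,valkey"})
store("mem:2", {"title": "API style", "tags": "api,decision"})
print(search_tags("caching", "valkey"))  # → {'mem:1'}
```

Because the whole model is hashes, lists, and sets, any Redis-compatible backend can serve it without modules or extensions.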
License
MIT