Supports cloud storage sync via Cloudflare R2 for storing and syncing the SQLite database, with support for encryption and compression.
Renders Mermaid diagrams within the interactive knowledge graph visualization when viewing memory content.
Supports S3-compatible storage sync via MinIO for storing and syncing the SQLite database.
Provides a Telescope plugin for browsing and searching memories directly in Neovim with fuzzy search and preview capabilities.
Supports OpenAI embeddings for high-quality semantic search and cross-referencing of memories using text-embedding models.
Uses SQLite as the persistent storage backend for memories, with support for local storage and cloud sync.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@memora search for meeting notes about the Q3 project using semantic search".
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
Features
💾 Persistent Storage - SQLite-backed database with optional cloud sync (S3, GCS, Azure)
🔍 Semantic Search - Vector embeddings (TF-IDF, sentence-transformers, or OpenAI)
🤖 LLM Deduplication - Find and merge duplicate memories with AI-powered comparison
⚡ Memory Automation - Structured tools for TODOs, issues, and section placeholders
🔗 Memory Linking - Typed edges, importance boosting, and cluster detection
📡 Event Notifications - Poll-based system for inter-agent communication
🎯 Advanced Queries - Full-text search, date ranges, tag filters (AND/OR/NOT)
🔄 Cross-references - Auto-linked related memories based on similarity
📂 Hierarchical Organization - Explore memories by section/subsection
📦 Export/Import - Backup and restore with merge strategies
🕸️ Knowledge Graph - Interactive HTML visualization with Mermaid diagram rendering
🌐 Live Graph Server - Auto-starts HTTP server for remote access via SSH
📊 Statistics & Analytics - Tag usage, trends, and connection insights
Install
Includes cloud storage (S3/R2) and OpenAI embeddings out of the box.
The server runs automatically when configured in Claude Code. Manual invocation:
Claude Code
Add to .mcp.json in your project root:
Local DB:
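A minimal sketch of the local setup; the memora command name is an assumption — substitute the actual entry point your install provides:

```json
{
  "mcpServers": {
    "memora": {
      "command": "memora",
      "args": []
    }
  }
}
```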
Cloud DB (Cloudflare D1) - Recommended:
With D1, use --no-graph to disable the local visualization server. Instead, use the hosted graph at your Cloudflare Pages URL (see Cloud Graph).
Cloud DB (S3/R2) - Sync mode:
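A hedged sketch of a sync-mode entry; the memora command name and the s3:// URI form are assumptions (see the variable table below for MEMORA_STORAGE_URI):

```json
{
  "mcpServers": {
    "memora": {
      "command": "memora",
      "args": [],
      "env": {
        "MEMORA_STORAGE_URI": "s3://<bucket>/<prefix>"
      }
    }
  }
}
```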
Codex CLI
Add to ~/.codex/config.toml:
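A sketch in Codex's MCP server table format; the memora command name is an assumption:

```toml
# Assumed entry point; adjust if the server is launched differently (e.g. via python -m)
[mcp_servers.memora]
command = "memora"
args = []
```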
| Variable | Description |
| --- | --- |
| … | Local SQLite database path (default: …) |
| MEMORA_STORAGE_URI | Storage URI: … |
| CLOUDFLARE_API_TOKEN | API token for D1 database access (required for …) |
| … | Encrypt database before uploading to cloud (…) |
| … | Compress database before uploading to cloud (…) |
| … | Local cache directory for cloud-synced database |
| … | Allow any tag without validation against allowlist (…) |
| … | Path to file containing allowed tags (one per line) |
| … | Comma-separated list of allowed tags |
| … | Port for the knowledge graph visualization server (default: …) |
| … | Embedding backend: … |
| … | Model for sentence-transformers (default: …) |
| … | API key for OpenAI embeddings and LLM deduplication |
| OPENAI_BASE_URL | Base URL for OpenAI-compatible APIs (OpenRouter, Azure, etc.) |
| … | OpenAI embedding model (default: …) |
| … | Enable LLM-powered deduplication comparison (…) |
| … | Model for deduplication comparison (default: …) |
| … | AWS credentials profile from … |
| … | S3-compatible endpoint for R2/MinIO |
| … | Public domain for R2 image URLs |
Memora supports three embedding backends:
| Backend | Install | Quality | Speed |
| --- | --- | --- | --- |
| OpenAI | Included | High quality | API latency |
| sentence-transformers | … | Good, runs offline | Medium |
| TF-IDF | Included | Basic keyword matching | Fast |
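As a rough illustration of how a TF-IDF backend can score memories for search and cross-referencing, here is a self-contained sketch (not Memora's actual implementation — the function names and tokenization are illustrative):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build sparse TF-IDF weight dicts for a list of documents."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    df = Counter(term for doc in tokenized for term in set(doc))
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        # term frequency scaled by inverse document frequency
        vectors.append({t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

memories = [
    "meeting notes for the Q3 project",
    "Q3 project budget meeting",
    "grocery list apples bananas",
]
vecs = tfidf_vectors(memories)
# The two Q3 memories share weighted terms, so they score highest together;
# cross-references would link memories whose similarity exceeds a threshold.
```

The same cosine scoring applies unchanged when the vectors come from sentence-transformers or OpenAI embeddings — only the vector source differs.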
Automatic: Embeddings and cross-references are computed automatically when you call memory_create, memory_update, or memory_create_batch.
Manual rebuild required when:
- Changing MEMORA_EMBEDDING_MODEL after memories exist
- Switching to a different sentence-transformers model
A built-in HTTP server starts automatically with the MCP server, serving an interactive knowledge graph visualization.
Access locally:
Remote access via SSH:
Configuration:
To disable: add "--no-graph" to args in your MCP config.
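For the remote-access case, a typical SSH port-forward looks like the following; 8080 is an assumed default (match it to your configured graph port), and user@remote-host is a placeholder:

```shell
# Forward the remote graph server to this machine, then open http://localhost:8080
ssh -N -L 8080:localhost:8080 user@remote-host
```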
Graph UI Features
Details Panel - View memory content, metadata, tags, and related memories
Timeline Panel - Browse memories chronologically, click to highlight in graph
Time Slider - Filter memories by date range, drag to explore history
Real-time Updates - Graph and timeline update via SSE when memories change
Filters - Tag/section dropdowns, zoom controls
Mermaid Rendering - Code blocks render as diagrams
Node Colors
🟣 Tags - Purple shades by tag
🔴 Issues - Red (open), Orange (in progress), Green (resolved), Gray (won't fix)
🔵 TODOs - Blue (open), Orange (in progress), Green (completed), Red (blocked)
Node size reflects connection count.
Browse memories directly in Neovim with Telescope. Copy the plugin to your config:
Usage: Press <leader>sm to open the memory browser with fuzzy search and preview.
Requires: telescope.nvim, plenary.nvim, and memora installed in your Python environment.
For offline viewing, export memories as a static HTML file:
This is optional - the Live Graph Server provides the same visualization with real-time updates.
When using Cloudflare D1 as your database, the graph visualization is hosted on Cloudflare Pages - no local server needed.
Benefits:
- Access from anywhere (no SSH tunneling)
- Real-time updates via WebSocket
- Multi-database support via the ?db= parameter
- Secure access with Cloudflare Zero Trust
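For instance, a hypothetical URL selecting a specific database via the query parameter (the exact value format is an assumption):

```
https://memora-graph.pages.dev/?db=<database-id>
```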
Setup:
1. Create the D1 database:

   ```shell
   npx wrangler d1 create memora-graph
   npx wrangler d1 execute memora-graph --file=memora-graph/schema.sql
   ```

2. Deploy Pages:

   ```shell
   cd memora-graph
   npx wrangler pages deploy ./public --project-name=memora-graph
   ```

3. Configure bindings in the Cloudflare Dashboard (Pages → memora-graph → Settings → Bindings):
   - Add D1: DB_MEMORA → your database
   - Add R2: R2_MEMORA → your bucket (for images)

4. Configure MCP with the D1 URI:

   ```json
   {
     "env": {
       "MEMORA_STORAGE_URI": "d1://<account-id>/<database-id>",
       "CLOUDFLARE_API_TOKEN": "<your-token>"
     }
   }
   ```
Access: https://memora-graph.pages.dev
Secure with Zero Trust:
1. Cloudflare Dashboard → Zero Trust → Access → Applications
2. Add an application for memora-graph.pages.dev
3. Create a policy with allowed emails
4. Pages → Settings → Enable Access Policy
See memora-graph/ for detailed setup and multi-database configuration.
Find and merge duplicate memories using AI-powered semantic comparison:
LLM Comparison analyzes memory pairs and returns:
- verdict: "duplicate", "similar", or "different"
- confidence: 0.0-1.0 score
- reasoning: Brief explanation
- suggested_action: "merge", "keep_both", or "review"
Works with any OpenAI-compatible API (OpenAI, OpenRouter, Azure, etc.) via OPENAI_BASE_URL.
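A client consuming these fields might gate automatic merging on the confidence score. A hypothetical sketch — the 0.85 threshold and the decide helper are illustrative, not part of Memora's API:

```python
def decide(result, min_confidence=0.85):
    """Map an LLM comparison result to an action, deferring low-confidence calls."""
    if result["suggested_action"] == "merge" and result["confidence"] >= min_confidence:
        return "merge"
    if result["verdict"] == "different":
        return "keep_both"
    # "similar" pairs and low-confidence duplicates go to human review
    return "review"

print(decide({"verdict": "duplicate", "confidence": 0.95,
              "reasoning": "Same meeting notes", "suggested_action": "merge"}))
# → merge
```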
Structured tools for common memory types:
Manage relationships between memories: