mcp-brain-tools
An MCP server that gives AI agents persistent memory with built-in freshness tracking and spaced repetition. Backed by Elasticsearch.
Unlike simple key-value memory stores, mcp-brain-tools tracks how old each piece of knowledge is, flags what needs review, and lets agents verify information to keep it fresh — inspired by how spaced repetition helps humans retain knowledge.
Features
Spaced repetition freshness — each entity has a review interval that doubles on verification (capped at 365 days). Confidence labels (fresh/normal/aging/stale/archival) tell agents what to trust.
Progressive search — queries return fresh results first, automatically widening to include older data only when needed.
Observations as entities — each observation gets its own freshness lifecycle, so "build is broken" (1-day review) and "founded in 2015" (365-day review) age independently.
Memory zones — isolate knowledge by project, team, or domain.
AI-powered filtering — optional Groq integration scores search results by relevance.
DRY by design — tool descriptions guide agents not to store what's already in code, git, or docs.
Related MCP server: Logseq MCP Tools
Setup
Prerequisites
Node.js >= 18
Docker (for Elasticsearch) or a remote Elasticsearch instance
Install and build
```
npm install
npm run build
```

Start Elasticsearch

```
npm run es:start
```

Or point to your own instance via the `ES_NODE` environment variable.
Configure your MCP client
Add to your Claude Code, Claude Desktop, or other MCP client config:
```json
{
  "mcpServers": {
    "memory": {
      "command": "node",
      "args": ["/path/to/mcp-brain-tools/dist/index.js"],
      "env": {
        "ES_NODE": "http://localhost:9200",
        "GROQ_API_KEY": "your-key-here"
      }
    }
  }
}
```

`GROQ_API_KEY` is optional — it enables AI-powered search filtering and zone relevance scoring.
Install the auto-memory hook (Claude Code only)
The memory hook runs on every user message and automatically injects relevant context — no agent cooperation needed.
Add to ~/.claude/settings.json:
```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "node /path/to/mcp-brain-tools/dist/memory-hook.js"
          }
        ]
      }
    ]
  }
}
```

The hook uses the same `ES_NODE`, `AI_API_KEY`/`GROQ_API_KEY`, `AI_API_BASE`, and `AI_MODEL` env vars (set them in the `env` block of your settings, or export them in your shell profile).
AI_API_BASE defaults to Groq's endpoint but accepts any OpenAI-compatible API URL.
How it works
Entities and observations
Entities represent anything worth remembering — people, projects, decisions, facts. Each entity has:
- A name and type
- Spaced repetition fields: `verifiedAt`, `reviewInterval`, `nextReviewAt`
- A confidence label computed from freshness: `1 - (daysSinceVerified / reviewInterval)`
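This computation can be sketched in TypeScript. The freshness formula is taken from this section; the exact score threshold for each label is an assumption, chosen here to line up with the search cutoffs described under "Progressive search":

```typescript
// Sketch of the confidence computation. The formula comes from the docs;
// the label boundaries are assumptions consistent with the search cutoffs
// (>= 0 fresh/normal, >= -2 adds aging/stale, below that archival).
function freshnessScore(
  verifiedAt: Date,
  reviewIntervalDays: number,
  now: Date = new Date(),
): number {
  const daysSinceVerified =
    (now.getTime() - verifiedAt.getTime()) / 86_400_000; // ms per day
  return 1 - daysSinceVerified / reviewIntervalDays;
}

function confidenceLabel(score: number): string {
  if (score >= 0.5) return "fresh"; // recently verified (boundary assumed)
  if (score >= 0) return "normal";
  if (score >= -1) return "aging"; // past its review date
  if (score >= -2) return "stale";
  return "archival"; // excluded from default search
}
```

A score of 1 means just verified; it crosses 0 when the review date passes and keeps falling the longer the entity goes unverified.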
Observations are stored as separate entities linked via `is_observation_of` relations. Each observation has its own review cadence:

```
Entity: "iaptic-server" (type: Project, reviewInterval: 30 days)
  <- "iaptic-server: uses TypeScript" (reviewInterval: 180 days)
  <- "iaptic-server: migration in progress" (reviewInterval: 7 days)
```

Freshness lifecycle
1. Entity created — `confidence: "fresh"`, default review in 7 days
2. Review date passes — `confidence: "aging"`, `needsReview: true`
3. Agent verifies (via `verify_entity`) — interval doubles, confidence resets to fresh
4. Long overdue — `confidence: "stale"` then `"archival"`, excluded from default search
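A minimal sketch of the verification step, assuming it is a pure doubling capped at 365 days (the field names come from this section; the server's actual update logic may differ):

```typescript
// Hypothetical sketch of what verify_entity does to the spaced repetition
// fields: the interval doubles (capped at 365 days) and the review clock
// resets from the moment of verification.
interface ReviewState {
  verifiedAt: Date;
  reviewInterval: number; // days
  nextReviewAt: Date;
}

function verify(state: ReviewState, now: Date = new Date()): ReviewState {
  const reviewInterval = Math.min(state.reviewInterval * 2, 365);
  return {
    verifiedAt: now,
    reviewInterval,
    nextReviewAt: new Date(now.getTime() + reviewInterval * 86_400_000),
  };
}
```

Repeated verification thus stretches the review cadence: 7 → 14 → 28 → … → 365 days, mirroring how spaced repetition spaces out reviews of well-retained knowledge.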
Progressive search
When searching, the server uses three passes:
1. `freshness >= 0` — fresh and normal entities
2. `freshness >= -2` — adds aging and stale
3. No filter — adds archival
This keeps results clean while ensuring nothing is permanently lost.
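The widening strategy can be illustrated with an in-memory sketch (the thresholds come from this section; the minimum-hit cutoff of 5 is a made-up parameter for illustration):

```typescript
// In-memory illustration of three-pass progressive search: return the
// narrowest pass that yields "enough" hits. minHits is a hypothetical knob,
// not a documented setting.
interface Hit {
  name: string;
  freshness: number;
}

function progressiveSearch(matches: Hit[], minHits = 5): Hit[] {
  const thresholds = [0, -2, -Infinity]; // fresh/normal, +aging/stale, +archival
  for (const t of thresholds) {
    const results = matches.filter((h) => h.freshness >= t);
    if (results.length >= minHits || t === -Infinity) return results;
  }
  return [];
}
```

With plenty of fresh matches the first pass answers immediately; archival entities only surface when the fresher passes come up short.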
MCP Tools
| Tool | Description |
| --- | --- |
| | Create entities with optional observations and `reviewInterval` |
| | Update existing entities |
| | Delete entities (with optional cascade) |
| | Add observations as separate entities with own freshness |
| `verify_entity` | Confirm entity is still accurate, extend review interval |
| | Search with progressive freshness filtering |
| | Get specific entities by name with freshness metadata |
| | Get recently accessed entities |
| | Create relationships between entities |
| | Remove relationships |
| | AI-powered entity retrieval with tentative answers |
| | AI-powered file content inspection |
| | List memory zones (with AI relevance scoring) |
| | Manage memory zones |
| | Transfer entities between zones |
| | Merge zones with conflict resolution |
| | Get entity/relation counts for a zone |
| | Boost entity relevance score |
| | Get current UTC time |
Environment variables
| Variable | Default | Description |
| --- | --- | --- |
| `ES_NODE` | `http://localhost:9200` | Elasticsearch URL |
| | — | Elasticsearch username |
| | — | Elasticsearch password |
| `GROQ_API_KEY` | — | Groq API key for AI filtering |
| `AI_MODEL` | | Comma-separated model list |
| | | Elasticsearch index prefix |
| | | Default memory zone |
| | | Enable debug logging |
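A sketch of how these variables might be consumed on startup. Only names documented elsewhere on this page are used, and only `ES_NODE`'s localhost fallback is grounded in the config example above; everything else is left undefined rather than guessed:

```typescript
// Hypothetical config loader over the documented environment variables.
// Takes the env as a parameter so it stays self-contained; in the server
// you would pass process.env.
function loadConfig(env: Record<string, string | undefined> = {}) {
  return {
    esNode: env.ES_NODE ?? "http://localhost:9200", // fallback from the config example
    groqApiKey: env.GROQ_API_KEY, // optional: enables AI filtering
    aiApiBase: env.AI_API_BASE, // any OpenAI-compatible base URL
    aiModels: (env.AI_MODEL ?? "").split(",").filter(Boolean), // comma-separated list
  };
}
```

Usage: `loadConfig(process.env)`.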
Recommended agent instructions
For agents to actively use the memory server, add something like this to your CLAUDE.md (or equivalent instructions file):
```markdown
## Memory

Use MCP Memory (`mcp__memory__*` tools) — a shared knowledge graph across all agents, projects, and computers.

**When to SAVE (immediately, before moving on):**

- Something you tried didn't work (non-transient) → save what failed and why, so no agent repeats it
- A decision was made (architectural, design, workflow) → save the decision and the reason
- The user corrects you or gives explicit instructions → save the rule
- You learn something non-obvious that took effort to discover → save it

**When to SEARCH (before starting, not after failing):**

- **At the start of every non-trivial task** — search before thinking, not after hitting a wall
- About to try an approach that might have been attempted before → search first
- User references something from a past session → search before asking

**Rules:**

- Skip anything easy to find in code, git log, or docs
- Use the project name as the zone for project-specific knowledge; `default` for general knowledge
- Keep entries short — the AI filters server-side, so be generous rather than selective
- Short `reviewInterval` (e.g. 3–7 days) for volatile facts; longer (30–180) for stable ones
```

The key insight: agents need explicit trigger-based instructions ("when X, do Y"), not just descriptions of what the tool does.
Development
```
npm run build      # Compile TypeScript
npm run dev        # Watch mode
npm run test:jest  # Run Jest tests
npm run es:start   # Start Elasticsearch
npm run es:stop    # Stop Elasticsearch
npm run es:reset   # Wipe data and restart
npm run import     # Import from JSON
npm run export     # Export to JSON
```

License
MIT