Enables building and querying temporally-aware knowledge graphs through Neo4j database integration, supporting entity management, relationship tracking, and semantic search capabilities
Provides LLM operations and embeddings for knowledge graph processing, with support for GPT-4, GPT-5, O1, and O3 models including automatic parameter adjustment for reasoning models
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@Graphiti Knowledge Graph MCP Server add our conversation about project timelines as an episode"
That's it! The server will respond to your query, and you can continue using it as needed.
Graphiti MCP Server - Enhanced Fork
Graphiti is a framework for building and querying temporally-aware knowledge graphs, specifically tailored for AI agents operating in dynamic environments. Unlike traditional retrieval-augmented generation (RAG) methods, Graphiti continuously integrates user interactions, structured and unstructured enterprise data, and external information into a coherent, queryable graph. The framework supports incremental data updates, efficient retrieval, and precise historical queries without requiring complete graph recomputation, making it suitable for developing interactive, context-aware AI applications.
This is an enhanced Model Context Protocol (MCP) server implementation for Graphiti. The MCP server exposes Graphiti's key functionality through the MCP protocol, allowing AI assistants to interact with Graphiti's knowledge graph capabilities.
Key Enhancements in This Fork
This enhanced version includes several important improvements over the original implementation:
Latest Graphiti Core Compatibility - Uses the current version of graphiti-core with all latest features and improvements
GPT-5, O1, O3 Model Support - Proper handling of OpenAI's reasoning models with automatic parameter adjustment (disables temperature, reasoning, and verbosity parameters)
Token-Based Authentication - Production-ready nonce token authentication system enabling secure public deployment
Queue Monitoring Tool - New get_queue_status tool to monitor episode processing queues, showing pending tasks, active workers, and jobs currently being processed
Redis-Based Persistent Queues - Worker queues backed by Redis with the BRPOPLPUSH pattern for crash recovery and graceful shutdown support (SIGTERM/SIGINT handlers)
Enhanced Security - Pure ASGI middleware-based authentication with constant-time token comparison to prevent timing attacks
Password-Protected Graph Clearing - The clear_graph tool now requires password authentication via the CLEAR_GRAPH_PASSWORD environment variable
DNS Rebinding Protection - ALLOWED_HOSTS configuration for secure external access when binding to 0.0.0.0
New Group ID Tooling - Discover and manage all group IDs across nodes and relationships in your knowledge graph
Atomic Group Deletion - New delete_everything_by_group_id tool for complete group removal in a single call (episodes, nodes, and edges)
Telemetry Control - Automatic disabling of telemetry for privacy-focused deployments (set before graphiti_core imports)
Simplified Dependencies - Removed Azure OpenAI dependencies for easier setup and deployment
MCP 2025-06-18 Support - Uses the new Streamable HTTP transport standard (with SSE fallback for legacy clients)
Reproducible Builds - Tracked uv.lock file ensures consistent dependency versions across all deployments
Modular Package Structure - Refactored into a well-organized Python package with 38 focused modules for better maintainability (see AGENTS.md for details)
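The constant-time comparison mentioned under Enhanced Security can be implemented with Python's standard hmac.compare_digest. The sketch below illustrates the technique only; it is not the fork's actual middleware code, and the function name is hypothetical.

```python
import hmac

def token_matches(presented: str, expected: str) -> bool:
    # hmac.compare_digest takes time independent of where the inputs
    # first differ, so an attacker cannot infer the token byte by byte
    # from response timing.
    return hmac.compare_digest(presented.encode(), expected.encode())
```

A naive `presented == expected` comparison short-circuits on the first mismatched byte, which is exactly the timing side channel this avoids.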
About Azure Support
Note on Azure OpenAI: Azure OpenAI support was removed during refactoring due to implementation conflicts with the new authentication middleware. If you need Azure OpenAI support in this enhanced MCP server, pull requests are welcome! The original implementation can be found in the upstream Graphiti repository.
About This Fork
This fork maintains compatibility with the latest Graphiti core while adding production-ready features for secure public deployment. It focuses on OpenAI API compatibility and enhanced security features.
Features
The Graphiti MCP server exposes the following key high-level functions of Graphiti:
Episode Management: Add, retrieve, and delete episodes (text, messages, or JSON data)
Entity Management: Search and manage entity nodes and relationships in the knowledge graph
Search Capabilities: Search for facts (edges) and node summaries using semantic and hybrid search
Group Management: Organize and manage groups of related data with group_id filtering
Graph Maintenance: Clear the graph and rebuild indices
Quick Start
Clone this enhanced fork
For Claude Desktop and other stdio only clients
Note the full path to this directory.
Install the Graphiti prerequisites.
Configure Claude, Cursor, or another MCP client to use Graphiti with a stdio transport. See the client documentation on where to find their MCP configuration files.
For Cursor and other HTTP-enabled clients (Recommended)
Configure your environment variables (copy .env.example to .env and set your OPENAI_API_KEY)
Start the service using Docker Compose
Point your MCP client to:
http://localhost:8000/mcp (Streamable HTTP - MCP 2025-06-18 standard, recommended)
http://localhost:8000/sse (Legacy SSE transport, for older clients)
For secure public deployment, see the Authentication Guide for setting up nonce token authentication.
Installation
Prerequisites
Ensure you have Python 3.10 or higher installed.
A running Neo4j database (version 5.26 or later required)
OpenAI API key for LLM operations
Setup
Clone the repository and navigate to the mcp_server directory
Use uv to create a virtual environment and install dependencies:
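With uv installed, creating the environment and installing the locked dependencies is typically a single command. This is a sketch assuming you run it from the directory containing pyproject.toml and uv.lock:

```shell
# Creates .venv and installs the dependency versions pinned in uv.lock
uv sync
```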
Configuration
The server uses the following environment variables:
NEO4J_URI: URI for the Neo4j database (default: bolt://localhost:7687)
NEO4J_USER: Neo4j username (default: neo4j)
NEO4J_PASSWORD: Neo4j password (default: demodemo)
OPENAI_API_KEY: OpenAI API key (required for LLM operations)
OPENAI_BASE_URL: Optional base URL for OpenAI API
MODEL_NAME: OpenAI model name to use for LLM operations
SMALL_MODEL_NAME: OpenAI model name to use for smaller LLM operations
LLM_TEMPERATURE: Temperature for LLM responses (0.0-2.0)
CLEAR_GRAPH_PASSWORD: Password required for the clear_graph tool. If not set, the clear_graph tool will be disabled and return an error when called.
SEMAPHORE_LIMIT: Episode processing concurrency. See Concurrency and LLM Provider 429 Rate Limit Errors.
ALLOWED_HOSTS: Comma-separated list of allowed hostnames for DNS rebinding protection (e.g., graphiti.example.com,api.example.com). Required when running on 0.0.0.0 with external access.
ALLOW_UNAUTHENTICATED_PUBLIC_ACCESS: Set to true to allow running on 0.0.0.0 without authentication. ⚠️ DANGEROUS - See security warning below.
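As an illustrative sketch of how these variables and their documented defaults could be read (this is not the server's actual config loader):

```python
import os

# Read the documented environment variables, falling back to the
# documented defaults. OPENAI_API_KEY has no default: it is required.
config = {
    "neo4j_uri": os.getenv("NEO4J_URI", "bolt://localhost:7687"),
    "neo4j_user": os.getenv("NEO4J_USER", "neo4j"),
    "neo4j_password": os.getenv("NEO4J_PASSWORD", "demodemo"),
    "openai_api_key": os.getenv("OPENAI_API_KEY"),  # may be None if unset
    "semaphore_limit": int(os.getenv("SEMAPHORE_LIMIT", "10")),
}
```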
⚠️ Security Warning: Public Access
The server will REFUSE to start if you bind to 0.0.0.0 without proper security configuration.
When binding to all interfaces (--host 0.0.0.0), you must configure ONE of:
MCP_SERVER_NONCE_TOKENS - Enable authentication (recommended)
ALLOWED_HOSTS - Restrict to specific hostnames
ALLOW_UNAUTHENTICATED_PUBLIC_ACCESS=true - Explicitly opt out of security (NOT RECOMMENDED)
For local development, use --host 127.0.0.1 instead, which does not require security configuration.
See the Authentication Guide for detailed security configuration.
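The startup rule described above can be sketched as follows. The function name and exact control flow are assumptions for illustration; only the documented behavior (refuse 0.0.0.0 unless one of the three options is set) is taken from this README.

```python
def check_public_binding(host: str, env: dict) -> None:
    """Refuse to bind all interfaces without a security option configured."""
    if host != "0.0.0.0":
        return  # e.g. 127.0.0.1 for local development needs no extra config
    if env.get("MCP_SERVER_NONCE_TOKENS"):
        return  # authentication enabled (recommended)
    if env.get("ALLOWED_HOSTS"):
        return  # DNS rebinding protection restricts accepted hostnames
    if env.get("ALLOW_UNAUTHENTICATED_PUBLIC_ACCESS") == "true":
        return  # explicit, dangerous opt-out
    raise RuntimeError(
        "Refusing to bind 0.0.0.0 without security configuration; "
        "set MCP_SERVER_NONCE_TOKENS, ALLOWED_HOSTS, or opt out explicitly."
    )
```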
You can set these variables in a .env file in the project directory.
Running the Server
To run the Graphiti MCP server directly using uv:
With options:
Available arguments:
--model: Overrides the MODEL_NAME environment variable.
--small-model: Overrides the SMALL_MODEL_NAME environment variable.
--temperature: Overrides the LLM_TEMPERATURE environment variable.
--transport: Choose the transport method:
streamable-http (default): New MCP 2025-06-18 standard, endpoint at /mcp
sse: Legacy SSE transport, endpoint at /sse
stdio: Standard I/O transport for local processes
--group-id: Set a namespace for the graph (optional). If not provided, defaults to "default".
--destroy-graph: If set, destroys all Graphiti graphs on startup.
--use-custom-entities: Enable entity extraction using the predefined ENTITY_TYPES
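The flags above can be sketched with argparse. This is an illustrative reconstruction of the documented interface, not the server's actual CLI wiring:

```python
import argparse

# Hypothetical parser mirroring the documented flags.
parser = argparse.ArgumentParser(description="Graphiti MCP server (flag sketch)")
parser.add_argument("--model", help="overrides MODEL_NAME")
parser.add_argument("--small-model", help="overrides SMALL_MODEL_NAME")
parser.add_argument("--temperature", type=float, help="overrides LLM_TEMPERATURE")
parser.add_argument("--transport",
                    choices=["streamable-http", "sse", "stdio"],
                    default="streamable-http")
parser.add_argument("--group-id", default="default")
parser.add_argument("--destroy-graph", action="store_true")
parser.add_argument("--use-custom-entities", action="store_true")

args = parser.parse_args(["--transport", "sse", "--group-id", "my-project"])
```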
Concurrency and LLM Provider 429 Rate Limit Errors
Graphiti's ingestion pipelines are designed for high concurrency, controlled by the SEMAPHORE_LIMIT environment variable.
By default, SEMAPHORE_LIMIT is set to 10 concurrent operations to help prevent 429 rate limit errors from your LLM provider. If you encounter such errors, try lowering this value.
If your LLM provider allows higher throughput, you can increase SEMAPHORE_LIMIT to boost episode ingestion performance.
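The pattern described here can be sketched with asyncio: a semaphore sized by SEMAPHORE_LIMIT caps how many episode-processing coroutines run at once. This is illustrative only, with a sleep standing in for the real LLM call:

```python
import asyncio
import os

SEMAPHORE_LIMIT = int(os.getenv("SEMAPHORE_LIMIT", "10"))

async def process_episode(i: int, sem: asyncio.Semaphore) -> int:
    # At most SEMAPHORE_LIMIT tasks hold the semaphore concurrently,
    # bounding simultaneous LLM requests and avoiding 429 errors.
    async with sem:
        await asyncio.sleep(0)  # stand-in for the real LLM request
        return i

async def main() -> list:
    sem = asyncio.Semaphore(SEMAPHORE_LIMIT)
    return await asyncio.gather(*(process_episode(i, sem) for i in range(25)))

results = asyncio.run(main())
```

Lowering SEMAPHORE_LIMIT trades ingestion throughput for fewer concurrent requests against your provider's rate limit.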
Docker Deployment
The Graphiti MCP server can be deployed using Docker. The Dockerfile uses uv for package management, ensuring
consistent dependency installation.
Environment Configuration
Before running the Docker Compose setup, you need to configure the environment variables. You have two options:
Using a .env file (recommended):
Copy the provided .env.example file to create a .env file:
cp .env.example .env
Edit the .env file to set your OpenAI API key and other configuration options:
# Required for LLM operations
OPENAI_API_KEY=your_openai_api_key_here
MODEL_NAME=gpt-4.1-mini
# Optional: OPENAI_BASE_URL only needed for non-standard OpenAI endpoints
# OPENAI_BASE_URL=https://api.openai.com/v1
The Docker Compose setup is configured to use this file if it exists (it's optional).
Using environment variables directly:
You can also set the environment variables when running the Docker Compose command:
OPENAI_API_KEY=your_key MODEL_NAME=gpt-4.1-mini docker compose up
Neo4j Configuration
The Docker Compose setup includes a Neo4j container with the following default configuration:
Username: neo4j
Password: demodemo
URI: bolt://neo4j:7687 (from within the Docker network)
Memory settings optimized for development use
Running with Docker Compose
A Graphiti MCP container is available at: zepai/knowledge-graph-mcp. The latest build of this container is used by the Compose setup below.
Start the services using Docker Compose:
Or if you're using an older version of Docker Compose:
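For reference, the two command variants are:

```shell
# Docker Compose v2 (the compose plugin)
docker compose up

# Older standalone Docker Compose v1
docker-compose up
```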
This will start both the Neo4j database and the Graphiti MCP server. The Docker setup:
Uses uv for package management and running the server
Installs dependencies from the pyproject.toml file
Connects to the Neo4j container using the environment variables
Exposes the server on port 8000 with both transports:
/mcp - Streamable HTTP transport (MCP 2025-06-18 standard)
/sse - Legacy SSE transport (for older clients)
Includes a healthcheck for Neo4j to ensure it's fully operational before starting the MCP server
Integrating with MCP Clients
Configuration
To use the Graphiti MCP server with an MCP-compatible client, configure it to connect to the server:
You will need the Python package manager uv installed. Please refer to the uv documentation for installation instructions.
Ensure that you set the full path to the uv binary and your Graphiti project folder.
For Streamable HTTP transport (MCP 2025-06-18 standard, recommended):
For legacy SSE transport (HTTP-based):
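Client configuration syntax varies; as an illustrative sketch (the key names are assumptions, so check your client's documentation), an HTTP-based server entry often looks like:

```json
{
  "mcpServers": {
    "graphiti-memory": {
      "url": "http://localhost:8000/mcp"
    }
  }
}
```

For legacy SSE clients, point the URL at http://localhost:8000/sse instead.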
Available Tools
The Graphiti MCP server exposes the following tools:
add_episode: Add an episode to the knowledge graph (supports text, JSON, and message formats)
search_nodes: Search the knowledge graph for relevant node summaries
search_facts: Search the knowledge graph for relevant facts (edges between entities)
delete_entity_edge: Delete an entity edge from the knowledge graph
delete_episode: Delete an episode from the knowledge graph
delete_everything_by_group_id: Delete all data (episodes, nodes, and entity edges) associated with a group_id. This is an atomic operation that completely removes a group from the system in a single call. Returns counts of deleted entities.
get_entity_edge: Get an entity edge by its UUID
get_episodes: Get the most recent episodes for a specific group
get_queue_status: Get the current status of all episode processing queues. Shows total pending tasks, active workers, and per-group_id queue details. Use this to monitor background processing after adding memories.
clear_graph: Clear all data from the knowledge graph and rebuild indices. Requires password authentication - the password parameter must match the CLEAR_GRAPH_PASSWORD environment variable. If CLEAR_GRAPH_PASSWORD is not configured on the server, this tool will be disabled and return an error.
get_status: Get the status of the Graphiti MCP server and Neo4j connection
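Tools are invoked through the standard MCP tools/call request. The sketch below builds such a request for add_episode; the argument names are assumptions for illustration, so list the server's tools (tools/list) for the authoritative input schema.

```python
import json

# Illustrative MCP "tools/call" request for the add_episode tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "add_episode",
        "arguments": {                     # hypothetical argument names
            "name": "project-timelines",
            "episode_body": "We agreed to ship the beta by June.",
            "source": "text",
        },
    },
}
payload = json.dumps(request)  # sent over Streamable HTTP or stdio
```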
Using the X-Group-Id Header
When using HTTP-based transports (Streamable HTTP or SSE), you can pass one or more group_id values via the X-Group-Id HTTP header. This header supports comma-separated values and acts as an allowlist for group_ids.
Behavior
Single group_id in header: Used as the fixed group_id for all tool calls (tool parameters are ignored)
Multiple group_ids in header (comma-separated): Acts as an allowlist - only these group_ids are permitted
Tool parameters that match an allowed group_id are accepted
Tool parameters not in the allowlist are rejected with an error message that shows which group_ids are allowed
If no tool parameter is provided, the first allowed group_id is used
This is useful for:
Multi-tenant deployments: Each client can send their tenant ID(s) in the header, ensuring data isolation without relying on tool parameters
API gateways: Upstream proxies can inject the allowed group_ids based on authentication/authorization
Security: Clients cannot access group_ids not specified in the header allowlist
Error Messages
When a tool call uses a group_id not in the allowlist, the error message lists the allowed group_ids; this applies both to single-group tools and to tools that accept multiple group_ids.
Priority Order
The group_id is determined based on the header configuration:
With single group_id in header:
Header group_id is always used (tool parameter ignored)
With multiple group_ids in header (allowlist):
Tool parameter (if in allowlist)
CLI default (if in allowlist)
First entry in allowlist (fallback)
Without header:
Tool parameter
CLI default (from the --group-id argument)
Empty string (fallback)
Example Usage
When the header contains tenant-123, tenant-456, tenant-789, tool calls can only use one of these three group_ids. Any attempt to use a different group_id will be rejected.
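The priority rules above can be sketched as a small helper. This is an assumption-labeled reconstruction of the documented behavior, not the server's actual code:

```python
def resolve_group_id(header_groups, tool_param=None, cli_default=None):
    """Resolve the effective group_id per the documented priority order."""
    if not header_groups:
        # No X-Group-Id header: tool parameter, then CLI default, then "".
        return tool_param or cli_default or ""
    if len(header_groups) == 1:
        # Single header value is always used; tool parameters are ignored.
        return header_groups[0]
    # Multiple header values act as an allowlist.
    if tool_param is not None:
        if tool_param not in header_groups:
            raise ValueError(f"group_id not allowed; allowed: {header_groups}")
        return tool_param
    if cli_default in header_groups:
        return cli_default
    return header_groups[0]  # fallback: first allowed entry
```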
MCP Client Configuration with Custom Headers
If your MCP client supports custom headers, add an X-Group-Id header to its configuration for this server; consult your client's documentation for the exact syntax.
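Many HTTP-capable clients accept a headers map alongside the server URL. The schema below is an illustrative sketch only; key names vary by client:

```json
{
  "mcpServers": {
    "graphiti-memory": {
      "url": "http://localhost:8000/mcp",
      "headers": {
        "X-Group-Id": "tenant-123, tenant-456"
      }
    }
  }
}
```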
Working with JSON Data
The Graphiti MCP server can process structured JSON data through the add_episode tool with source="json". This
allows you to automatically extract entities and relationships from structured data:
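For example, a structured record can be serialized and passed as the episode body. The argument names here are assumptions for illustration; check the add_episode tool schema for the authoritative names:

```python
import json

# A hypothetical structured record to ingest.
customer_record = {
    "customer": {"name": "Acme Corp", "plan": "enterprise"},
    "contacts": [{"name": "Jane Doe", "role": "CTO"}],
}

# Hypothetical add_episode arguments: the JSON is passed as a string,
# with source="json" telling the server to parse and extract from it.
episode_args = {
    "name": "acme-crm-sync",
    "episode_body": json.dumps(customer_record),
    "source": "json",
}
```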
Integrating with the Cursor IDE
To integrate the Graphiti MCP Server with the Cursor IDE, follow these steps:
Run the Graphiti MCP server:
Hint: specify a group_id to namespace graph data. If you do not specify a group_id, the server will use "default" as the group_id.
Configure Cursor to connect to the Graphiti MCP server.
For legacy SSE transport, use http://localhost:8000/sse instead.
Add the Graphiti rules to Cursor's User Rules. See cursor_rules.md for details.
Kick off an agent session in Cursor.
The integration enables AI assistants in Cursor to maintain persistent memory through Graphiti's knowledge graph capabilities.
Integrating with Claude Desktop (Docker MCP Server)
The Graphiti MCP Server container supports both Streamable HTTP (MCP 2025-06-18) and legacy SSE transports. Claude Desktop may require a gateway like mcp-remote for HTTP-based transports.
Run the Graphiti MCP server:
docker compose up
(Optional) Install mcp-remote globally: If you prefer to have mcp-remote installed globally, or if you encounter issues with npx fetching the package, you can install it globally. Otherwise, npx (used in the next step) will handle it for you.
npm install -g mcp-remote
Configure Claude Desktop: Open your Claude Desktop configuration file (usually claude_desktop_config.json) and add or modify the mcpServers section as follows:
{
  "mcpServers": {
    "graphiti-memory": { // You can choose a different name if you prefer
      "command": "npx", // Or the full path to mcp-remote if npx is not in your PATH
      "args": [
        "mcp-remote",
        "http://localhost:8000/mcp" // Use /mcp for Streamable HTTP or /sse for legacy SSE
      ]
    }
  }
}
If you already have an mcpServers entry, add graphiti-memory (or your chosen name) as a new key within it.
Restart Claude Desktop for the changes to take effect.
Requirements
Python 3.10 or higher
Neo4j database (version 5.26 or later required)
OpenAI API key (for LLM operations and embeddings)
MCP-compatible client
Telemetry
The Graphiti MCP server uses the Graphiti core library, which includes anonymous telemetry collection. When you initialize the Graphiti MCP server, anonymous usage statistics are collected to help improve the framework.
What's Collected
Anonymous identifier and system information (OS, Python version)
Graphiti version and configuration choices (LLM provider, database backend, embedder type)
No personal data, API keys, or actual graph content is ever collected
How to Disable
To disable telemetry in the MCP server, set the environment variable:
Or add it to your .env file:
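Upstream Graphiti documents the GRAPHITI_TELEMETRY_ENABLED variable for this purpose; assuming this fork inherits it, the setting looks like:

```shell
# Disable anonymous telemetry (variable name taken from upstream Graphiti;
# verify it against this fork's code if in doubt)
export GRAPHITI_TELEMETRY_ENABLED=false
```

The equivalent .env line is GRAPHITI_TELEMETRY_ENABLED=false.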
For complete details about what's collected and why, see the Telemetry section in the main Graphiti README.
Development
Updating Dependencies
This project uses uv for dependency management. The uv.lock file is committed to ensure reproducible builds across all environments.
To update dependencies (without requiring a local Python installation):
This command:
Runs a temporary container with uv installed
Mounts your project directory
Updates the uv.lock file with the latest compatible versions
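One way to run such an update, assuming the official astral-sh uv Docker image in a variant that has a shell (the exact image and tag this project uses are not stated here):

```shell
# Throwaway container: mount the project, refresh uv.lock, then exit.
# "uv lock --upgrade" bumps locked dependencies to the latest
# versions compatible with pyproject.toml.
docker run --rm -v "$(pwd)":/app -w /app \
  ghcr.io/astral-sh/uv:debian \
  uv lock --upgrade
```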
After updating, commit the changes:
License
This project is licensed under the same license as the parent Graphiti project.