The Cognee MCP server is a multi-functional tool for managing knowledge graphs with four main capabilities:
- Cognify: Converts text into a structured knowledge graph
- Codify: Transforms a codebase into a knowledge graph
- Search: Allows searching within the knowledge graph with customizable search types
- Prune: Resets the knowledge graph, wiping all data for a fresh start
cognee‑mcp - Run cognee’s memory engine as a Model Context Protocol server
Build memory for Agents and query from any client that speaks MCP – in your terminal or IDE.
✨ Features
- Multiple transports – choose Streamable HTTP --transport http (recommended for web deployments), SSE --transport sse (real‑time streaming), or stdio (classic pipe, default)
- Integrated logging – all actions written to a rotating file (see get_log_file_location()) and mirrored to console in dev
- Local file ingestion – feed .md, source files, Cursor rule‑sets, etc. straight from disk
- Background pipelines – long‑running cognify & codify jobs spawn off‑thread; check progress with status tools
- Developer rules bootstrap – one call indexes .cursorrules, .cursor/rules, AGENT.md, and friends into the developer_rules nodeset
- Prune & reset – wipe memory clean with a single prune call when you want to start fresh
Please refer to our documentation here for further information.
🚀 Quick Start
- Clone cognee repo
- Navigate to cognee-mcp subdirectory
- Install uv if you don't have it
- Install all the dependencies you need for cognee mcp server with uv
- Activate the virtual environment in cognee mcp directory
- Set up your OpenAI API key in .env for a quick setup with the default cognee configurations
- Run the cognee MCP server with stdio (the default), stream responses over SSE, or use the Streamable HTTP transport (recommended for web deployments), as in the sketch below
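Put together, the steps above look roughly like this (a sketch: the repository URL and uv installer command are assumptions, while the LLM_API_KEY variable and --transport flag come from the Docker and transport notes below):

```bash
git clone https://github.com/topoteretes/cognee.git
cd cognee/cognee-mcp

# Install uv if you don't have it
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install dependencies and activate the virtual environment
uv sync --dev --all-extras
source .venv/bin/activate

# Quick setup with the default cognee configuration
echo "LLM_API_KEY=your-openai-api-key" > .env

# Run with stdio (default) ...
python src/server.py
# ... or stream responses over SSE ...
python src/server.py --transport sse
# ... or use Streamable HTTP (recommended for web deployments)
python src/server.py --transport http
```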
You can apply more advanced configurations by creating a .env file from our template. To use different LLM providers or database configurations, and for more info, check out our documentation.
🐳 Docker Usage
If you’d rather run cognee-mcp in a container, you have two options:
- Build locally:
  1. Make sure you are in the /cognee root directory and have a fresh .env containing only your LLM_API_KEY (and your chosen settings).
  2. Remove any old image and rebuild.
  3. Run it (see the sketch below).
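For example (a sketch; the cognee/cognee-mcp:main image tag, Dockerfile location, and port are assumptions):

```bash
# From the /cognee root directory: remove any old image and rebuild
docker rmi cognee/cognee-mcp:main
docker build --no-cache -f cognee-mcp/Dockerfile -t cognee/cognee-mcp:main .

# Run it (environment variables select the transport; see the note below)
docker run --env-file ./.env -e TRANSPORT_MODE=http -p 8000:8000 --rm -it cognee/cognee-mcp:main
```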
- Pull from Docker Hub (no build required):
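A sketch under the same naming assumptions:

```bash
docker pull cognee/cognee-mcp:main
docker run --env-file ./.env -e TRANSPORT_MODE=http -p 8000:8000 --rm -it cognee/cognee-mcp:main
```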
Important: Docker vs Direct Usage
Docker uses environment variables, not command line arguments:
- ✅ Docker: -e TRANSPORT_MODE=http
- ❌ Docker: --transport http (won't work)
Direct Python usage uses command line arguments:
- ✅ Direct: python src/server.py --transport http
- ❌ Direct: -e TRANSPORT_MODE=http (won't work)
🔗 MCP Client Configuration
After starting your Cognee MCP server with Docker, you need to configure your MCP client to connect to it.
SSE Transport Configuration (Recommended)
Start the server with SSE transport:
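A minimal sketch, assuming the cognee/cognee-mcp:main image, port 8000, and a TRANSPORT_MODE=sse value mirroring the TRANSPORT_MODE=http example above:

```bash
docker run --env-file ./.env -e TRANSPORT_MODE=sse -p 8000:8000 --rm -it cognee/cognee-mcp:main
```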
Configure your MCP client:
Claude CLI (Easiest)
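Assuming the Claude Code CLI; the server name is arbitrary and the /sse endpoint path is an assumption:

```bash
claude mcp add --transport sse cognee-sse http://localhost:8000/sse
```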
Verify the connection:
You should see your server connected:
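A sketch using the CLI's list command; the exact output format varies by version:

```bash
claude mcp list
# e.g. cognee-sse: http://localhost:8000/sse (SSE) - ✓ Connected   (illustrative)
```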
Manual Configuration
Claude (~/.claude.json)
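A hedged example entry; the "type"/"url" fields follow the common MCP client config shape, and the server name is arbitrary:

```json
{
  "mcpServers": {
    "cognee-sse": {
      "type": "sse",
      "url": "http://localhost:8000/sse"
    }
  }
}
```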
Cursor (~/.cursor/mcp.json)
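A matching sketch for Cursor, under the same assumptions:

```json
{
  "mcpServers": {
    "cognee-sse": {
      "url": "http://localhost:8000/sse"
    }
  }
}
```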
HTTP Transport Configuration (Alternative)
Start the server with HTTP transport:
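The same sketch as above, with the HTTP mode value from the Docker notes:

```bash
docker run --env-file ./.env -e TRANSPORT_MODE=http -p 8000:8000 --rm -it cognee/cognee-mcp:main
```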
Configure your MCP client:
Claude CLI (Easiest)
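Assuming the Claude Code CLI and a Streamable HTTP endpoint at /mcp (the endpoint path is an assumption):

```bash
claude mcp add --transport http cognee-http http://localhost:8000/mcp
```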
Verify the connection:
You should see your server connected:
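As before, a sketch; the output format is illustrative:

```bash
claude mcp list
# e.g. cognee-http: http://localhost:8000/mcp (HTTP) - ✓ Connected   (illustrative)
```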
Manual Configuration
Claude (~/.claude.json)
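A hedged example entry, with the same caveats as the SSE variant:

```json
{
  "mcpServers": {
    "cognee-http": {
      "type": "http",
      "url": "http://localhost:8000/mcp"
    }
  }
}
```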
Cursor (~/.cursor/mcp.json)
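And the Cursor counterpart:

```json
{
  "mcpServers": {
    "cognee-http": {
      "url": "http://localhost:8000/mcp"
    }
  }
}
```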
Dual Configuration Example
You can configure both transports simultaneously for testing:
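A combined sketch for a Claude-style config (both endpoint paths are the assumptions used above):

```json
{
  "mcpServers": {
    "cognee-sse": {
      "type": "sse",
      "url": "http://localhost:8000/sse"
    },
    "cognee-http": {
      "type": "http",
      "url": "http://localhost:8000/mcp"
    }
  }
}
```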
Note: Only enable the server you're actually running to avoid connection errors.
💻 Basic Usage
The MCP server exposes its functionality through tools. Call them from any MCP client (Cursor, Claude Desktop, Cline, Roo and more).
Available Tools
- cognify: Turns your data into a structured knowledge graph and stores it in memory
- codify: Analyze a code repository, build a code graph, and store it in memory
- search: Query memory – supports GRAPH_COMPLETION, RAG_COMPLETION, CODE, CHUNKS, INSIGHTS
- list_data: List all datasets and their data items with IDs for deletion operations
- delete: Delete specific data from a dataset (supports soft/hard deletion modes)
- prune: Reset cognee for a fresh start (removes all data)
- cognify_status / codify_status: Track pipeline progress
Data Management Examples:
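For example, a deletion workflow from an MCP client might look like this (a sketch: the tool names come from the list above, the UUIDs are hypothetical placeholders, and the exact parameter names are assumptions):

```
# Find dataset and data IDs
list_data()

# Inspect one dataset (hypothetical UUID)
list_data(dataset_id="6f8a...")

# Soft-delete a single item (hypothetical UUIDs)
delete(data_id="b2c4...", dataset_id="6f8a...", mode="soft")

# Or hard-delete it instead
delete(data_id="b2c4...", dataset_id="6f8a...", mode="hard")

# Start over completely
prune()
```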
Development and Debugging
Debugging
To use the debugger, run:
```bash
mcp dev src/server.py
```
Open the inspector with a timeout passed in the URL:
http://localhost:5173?timeout=120000
To apply new changes while developing cognee, you need to:
1. Update dependencies in the cognee folder if needed:
```bash
uv sync --dev --all-extras --reinstall
```
2. Restart the MCP dev server:
```bash
mcp dev src/server.py
```
Development
In order to use local cognee:
- Uncomment the following line in the cognee-mcp pyproject.toml file and set the cognee root path. Remember to replace file:/Users/<username>/Desktop/cognee with your actual cognee root path.
- Install dependencies with uv in the mcp folder, as sketched below.
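A minimal sketch of that install step, using the same uv flags as the debugging section above:

```bash
cd cognee-mcp
uv sync --dev --all-extras
```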
Code of Conduct
We are committed to making open source an enjoyable and respectful experience for our community. See CODE_OF_CONDUCT for more information.
💫 Contributors
Star History