The Cognee MCP server is a multi-functional tool for managing knowledge graphs with four main capabilities:
Cognify: Converts text into a structured knowledge graph
Codify: Transforms a codebase into a knowledge graph
Search: Allows searching within the knowledge graph with customizable search types
Prune: Resets the knowledge graph, removing all stored data when you want to start fresh
cognee‑mcp - Run cognee’s memory engine as a Model Context Protocol server
Build memory for Agents and query from any client that speaks MCP – in your terminal or IDE.
✨ Features
Multiple transports – choose Streamable HTTP --transport http (recommended for web deployments), SSE --transport sse (real‑time streaming), or stdio (classic pipe, default)
API Mode – connect to an already running Cognee FastAPI server instead of using cognee directly (see API Mode below)
Integrated logging – all actions written to a rotating file (see get_log_file_location()) and mirrored to console in dev
Local file ingestion – feed .md, source files, Cursor rule‑sets, etc. straight from disk
Background pipelines – long‑running cognify & codify jobs spawn off‑thread; check progress with status tools
Developer rules bootstrap – one call indexes .cursorrules, .cursor/rules, AGENT.md, and friends into the developer_rules nodeset
Prune & reset – wipe memory clean with a single prune call when you want to start fresh
Please refer to our documentation here for further information.
Related MCP server: Memory MCP
🚀 Quick Start
Clone the cognee repo
git clone https://github.com/topoteretes/cognee.git
Navigate to the cognee-mcp subdirectory
cd cognee/cognee-mcp
Install uv if you don't have it
pip install uv
Install all the dependencies you need for the cognee MCP server with uv
uv sync --dev --all-extras --reinstall
Activate the virtual environment in the cognee-mcp directory
source .venv/bin/activate
Set up your OpenAI API key in .env for a quick setup with the default cognee configurations
LLM_API_KEY="YOUR_OPENAI_API_KEY"
Run the cognee MCP server with stdio (default)
python src/server.py
or stream responses over SSE
python src/server.py --transport sse
or run with the Streamable HTTP transport (recommended for web deployments)
python src/server.py --transport http --host 127.0.0.1 --port 8000 --path /mcp
You can set up more advanced configurations by creating a .env file using our template. To use different LLM providers / database configurations, and for more info, check out our documentation.
🐳 Docker Usage
If you'd rather run cognee-mcp in a container, you have two options:
Build locally
Make sure you are in the /cognee root directory and have a fresh .env containing only your LLM_API_KEY (and your chosen settings).
Remove any old image and rebuild:
docker rmi cognee/cognee-mcp:main || true
docker build --no-cache -f cognee-mcp/Dockerfile -t cognee/cognee-mcp:main .
Run it:
# For HTTP transport (recommended for web deployments)
docker run -e TRANSPORT_MODE=http --env-file ./.env -p 8000:8000 --rm -it cognee/cognee-mcp:main
# For SSE transport
docker run -e TRANSPORT_MODE=sse --env-file ./.env -p 8000:8000 --rm -it cognee/cognee-mcp:main
# For stdio transport (default)
docker run -e TRANSPORT_MODE=stdio --env-file ./.env --rm -it cognee/cognee-mcp:main

Installing optional dependencies at runtime:
You can install optional dependencies when running the container by setting the EXTRAS environment variable:

# Install a single optional dependency group at runtime
docker run \
  -e TRANSPORT_MODE=http \
  -e EXTRAS=aws \
  --env-file ./.env \
  -p 8000:8000 \
  --rm -it cognee/cognee-mcp:main

# Install multiple optional dependency groups at runtime (comma-separated)
docker run \
  -e TRANSPORT_MODE=sse \
  -e EXTRAS=aws,postgres,neo4j \
  --env-file ./.env \
  -p 8000:8000 \
  --rm -it cognee/cognee-mcp:main

Available optional dependency groups:

aws - S3 storage support
postgres / postgres-binary - PostgreSQL database support
neo4j - Neo4j graph database support
neptune - AWS Neptune support
chromadb - ChromaDB vector store support
scraping - Web scraping capabilities
distributed - Modal distributed execution
langchain - LangChain integration
llama-index - LlamaIndex integration
anthropic - Anthropic models
groq - Groq models
mistral - Mistral models
ollama / huggingface - Local model support
docs - Document processing
codegraph - Code analysis
monitoring - Sentry & Langfuse monitoring
redis - Redis support
And more (see pyproject.toml for the full list)
Pull from Docker Hub (no build required):

# With HTTP transport (recommended for web deployments)
docker run -e TRANSPORT_MODE=http --env-file ./.env -p 8000:8000 --rm -it cognee/cognee-mcp:main
# With SSE transport
docker run -e TRANSPORT_MODE=sse --env-file ./.env -p 8000:8000 --rm -it cognee/cognee-mcp:main
# With stdio transport (default)
docker run -e TRANSPORT_MODE=stdio --env-file ./.env --rm -it cognee/cognee-mcp:main

With runtime installation of optional dependencies:

# Install optional dependencies from the Docker Hub image
docker run \
  -e TRANSPORT_MODE=http \
  -e EXTRAS=aws,postgres \
  --env-file ./.env \
  -p 8000:8000 \
  --rm -it cognee/cognee-mcp:main
Important: Docker vs Direct Usage
Docker uses environment variables, not command line arguments:
✅ Docker: -e TRANSPORT_MODE=http
❌ Docker: --transport http (won't work)
Direct Python usage uses command line arguments:
✅ Direct: python src/server.py --transport http
❌ Direct: -e TRANSPORT_MODE=http (won't work)
Docker API Mode
To connect the MCP Docker container to a Cognee API server running on your host machine:
Simple Usage (Automatic localhost handling):
Note: The container will automatically convert localhost to host.docker.internal on Mac/Windows/Docker Desktop. You'll see a message in the logs showing the conversion.
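For example, a minimal sketch (assuming your Cognee API server listens on port 8000 on the host; stdio transport is used here so no extra port mapping is needed):

```bash
# Sketch: MCP server in API mode; localhost is rewritten to host.docker.internal
# automatically on Mac/Windows/Docker Desktop (see the note above)
docker run \
  -e TRANSPORT_MODE=stdio \
  -e API_URL=http://localhost:8000 \
  --env-file ./.env \
  --rm -it cognee/cognee-mcp:main
```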
Explicit host.docker.internal (Mac/Windows):
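Along the same lines, with the host address spelled out explicitly (again assuming port 8000 for the API):

```bash
docker run \
  -e TRANSPORT_MODE=stdio \
  -e API_URL=http://host.docker.internal:8000 \
  --env-file ./.env \
  --rm -it cognee/cognee-mcp:main
```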
On Linux (use host network or container IP):
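Linux has no host.docker.internal by default, so one option is to share the host network (an illustrative sketch, assuming the API is on port 8000):

```bash
docker run --network host \
  -e TRANSPORT_MODE=stdio \
  -e API_URL=http://localhost:8000 \
  --env-file ./.env \
  --rm -it cognee/cognee-mcp:main
```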
Environment variables for API mode:
API_URL: URL of the running Cognee API server
API_TOKEN: Authentication token (optional, required if API has authentication enabled)
Note: When running in API mode:
Database migrations are automatically skipped (API server handles its own DB)
Some features are limited (see API Mode Limitations)
🔗 MCP Client Configuration
After starting your Cognee MCP server with Docker, you need to configure your MCP client to connect to it.
SSE Transport Configuration (Recommended)
Start the server with SSE transport:
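For example, using the Docker image from the section above:

```bash
docker run -e TRANSPORT_MODE=sse --env-file ./.env -p 8000:8000 --rm -it cognee/cognee-mcp:main
```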
Configure your MCP client:
Claude CLI (Easiest)
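A sketch of what this might look like, assuming a recent Claude CLI and that the SSE endpoint is exposed at /sse on the port mapped above:

```bash
claude mcp add --transport sse cognee http://localhost:8000/sse
```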
Verify the connection:
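For example, by listing the configured servers (assuming your Claude CLI supports it):

```bash
claude mcp list
```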
You should see your server connected:
Manual Configuration
Claude (
Cursor (
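The exact config file location depends on your client, but an SSE entry generally looks something like the following sketch (the server name and URL are assumptions matching the Docker command above):

```json
{
  "mcpServers": {
    "cognee": {
      "url": "http://localhost:8000/sse"
    }
  }
}
```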
HTTP Transport Configuration (Alternative)
Start the server with HTTP transport:
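Using the Docker image, for example:

```bash
docker run -e TRANSPORT_MODE=http --env-file ./.env -p 8000:8000 --rm -it cognee/cognee-mcp:main
```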
Configure your MCP client:
Claude CLI (Easiest)
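A sketch along the lines of the SSE example, assuming the Streamable HTTP endpoint is served at /mcp (the default --path shown in the Quick Start):

```bash
claude mcp add --transport http cognee http://localhost:8000/mcp
```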
Verify the connection:
You should see your server connected:
Manual Configuration
Claude (
Cursor (
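Analogous to the SSE sketch above, an HTTP entry might look like this (again assuming the /mcp path):

```json
{
  "mcpServers": {
    "cognee": {
      "url": "http://localhost:8000/mcp"
    }
  }
}
```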
Dual Configuration Example
You can configure both transports simultaneously for testing:
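Combining the two sketches above into one config might look like this (both entries are assumptions about your client's config format; only the transport you actually started will connect):

```json
{
  "mcpServers": {
    "cognee-sse": {
      "url": "http://localhost:8000/sse"
    },
    "cognee-http": {
      "url": "http://localhost:8000/mcp"
    }
  }
}
```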
Note: Only enable the server you're actually running to avoid connection errors.
🌐 API Mode
The MCP server can operate in two modes:
Direct Mode (Default)
The MCP server directly imports and uses the cognee library. This is the default mode with full feature support.
API Mode
The MCP server connects to an already running Cognee FastAPI server via HTTP requests. This is useful when:
You have a centralized Cognee API server running
You want to separate the MCP server from the knowledge graph backend
You need multiple MCP servers to share the same knowledge graph
Starting the MCP server in API mode:
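Using the --api-url argument documented below, for example:

```bash
python src/server.py --api-url http://localhost:8000
```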
API Mode with different transports:
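--api-url can be combined with any of the transports from the Quick Start; a sketch (port 8001 is chosen here so the MCP server does not clash with an API already on 8000):

```bash
# SSE transport in API mode
python src/server.py --transport sse --api-url http://localhost:8000

# Streamable HTTP transport in API mode
python src/server.py --transport http --host 127.0.0.1 --port 8001 --path /mcp --api-url http://localhost:8000
```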
API Mode with Docker:
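Using the environment variables listed below, a sketch might look like this (see also the Docker API Mode section above; the token value is a placeholder):

```bash
docker run \
  -e TRANSPORT_MODE=stdio \
  -e API_URL=http://host.docker.internal:8000 \
  -e API_TOKEN=<your-token-if-auth-is-enabled> \
  --env-file ./.env \
  --rm -it cognee/cognee-mcp:main
```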
Command-line arguments for API mode:
--api-url: Base URL of the running Cognee FastAPI server (e.g., http://localhost:8000)
--api-token: Authentication token for the API (optional, required if API has authentication enabled)
Docker environment variables for API mode:
API_URL: Base URL of the running Cognee FastAPI server
API_TOKEN: Authentication token (optional, required if API has authentication enabled)
API Mode limitations: Some features are only available in direct mode:
codify (code graph pipeline)
cognify_status / codify_status (pipeline status tracking)
prune (data reset)
get_developer_rules (developer rules retrieval)
list_data with a specific dataset_id (detailed data listing)
Basic operations like cognify, search, delete, and list_data (all datasets) work in both modes.
💻 Basic Usage
The MCP server exposes its functionality through tools. Call them from any MCP client (Cursor, Claude Desktop, Cline, Roo and more).
Available Tools
cognify: Turns your data into a structured knowledge graph and stores it in memory
cognee_add_developer_rules: Ingest core developer rule files into memory
codify: Analyze a code repository, build a code graph, and store it in memory
delete: Delete specific data from a dataset (supports soft/hard deletion modes)
get_developer_rules: Retrieve all developer rules that were generated based on previous interactions
list_data: List all datasets and their data items with IDs for deletion operations
save_interaction: Logs user-agent interactions and query-answer pairs
prune: Reset cognee for a fresh start (removes all data)
search: Query memory – supports GRAPH_COMPLETION, RAG_COMPLETION, CODE, CHUNKS, SUMMARIES, CYPHER, and FEELING_LUCKY
cognify_status / codify_status: Track pipeline progress
Data Management Examples:
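As an illustrative sketch of the delete workflow (the argument names below are hypothetical; check the tool schemas your server exposes for the exact fields):

```text
# 1. Call list_data with no arguments to see datasets and their data item IDs
list_data {}

# 2. Call delete with the IDs from step 1, choosing "soft" or "hard" deletion
delete { "dataset_id": "<dataset id>", "data_id": "<data id>", "mode": "soft" }
```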
Development and Debugging
Debugging
To use the debugger, run:
mcp dev src/server.py
Open the inspector with a timeout passed in the URL:
http://localhost:5173?timeout=120000
To apply new changes while developing cognee, you need to:
Update dependencies in the cognee folder if needed
uv sync --dev --all-extras --reinstall
mcp dev src/server.py
Development
In order to use local cognee:
Uncomment the following line in the cognee-mcp pyproject.toml file and set the cognee root path:
#"cognee[postgres,codegraph,gemini,huggingface,docs,neo4j] @ file:/Users/<username>/Desktop/cognee"
Remember to replace file:/Users/<username>/Desktop/cognee with your actual cognee root path.
Install dependencies with uv in the mcp folder
uv sync --reinstall
Code of Conduct
We are committed to making open source an enjoyable and respectful experience for our community. See CODE_OF_CONDUCT for more information.