
cognee-mcp

Run cognee’s memory engine as a Model Context Protocol server.

Build memory for Agents and query from any client that speaks MCP – in your terminal or IDE.

✨ Features

  • Multiple transports – choose Streamable HTTP (`--transport http`, recommended for web deployments), SSE (`--transport sse`, real‑time streaming), or stdio (classic pipe, the default)
  • Integrated logging – all actions written to a rotating file (see get_log_file_location()) and mirrored to console in dev
  • Local file ingestion – feed .md, source files, Cursor rule‑sets, etc. straight from disk
  • Background pipelines – long‑running cognify & codify jobs spawn off‑thread; check progress with status tools
  • Developer rules bootstrap – one call indexes .cursorrules, .cursor/rules, AGENT.md, and friends into the developer_rules nodeset
  • Prune & reset – wipe memory clean with a single prune call when you want to start fresh

Please refer to our documentation for further information.

🚀 Quick Start

  1. Clone the cognee repo:
     ```bash
     git clone https://github.com/topoteretes/cognee.git
     ```
  2. Navigate to the cognee-mcp subdirectory:
     ```bash
     cd cognee/cognee-mcp
     ```
  3. Install uv if you don't have it:
     ```bash
     pip install uv
     ```
  4. Install all the dependencies for the cognee MCP server with uv:
     ```bash
     uv sync --dev --all-extras --reinstall
     ```
  5. Activate the virtual environment in the cognee-mcp directory:
     ```bash
     source .venv/bin/activate
     ```
  6. Set your OpenAI API key in .env for a quick setup with the default cognee configuration:
     ```env
     LLM_API_KEY="YOUR_OPENAI_API_KEY"
     ```
  7. Run the cognee MCP server with stdio (default):
     ```bash
     python src/server.py
     ```
     or stream responses over SSE:
     ```bash
     python src/server.py --transport sse
     ```
     or run with the Streamable HTTP transport (recommended for web deployments):
     ```bash
     python src/server.py --transport http --host 127.0.0.1 --port 8000 --path /mcp
     ```

For more advanced configuration, create a .env file from our template. To use different LLM providers or database backends, and for more details, check out our documentation.
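As a sketch, a .env for the default setup needs only the API key; the commented keys below illustrate the kinds of overrides the template exposes. Variable names other than LLM_API_KEY are assumptions here, so confirm them against the template:

```env
LLM_API_KEY="YOUR_OPENAI_API_KEY"

# Illustrative overrides -- check the template for the exact variable names
# LLM_PROVIDER="openai"
# LLM_MODEL="gpt-4o-mini"
# VECTOR_DB_PROVIDER="lancedb"
# GRAPH_DATABASE_PROVIDER="kuzu"
```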

🐳 Docker Usage

If you’d rather run cognee-mcp in a container, you have two options:

  1. Build locally
    1. Make sure you are in the /cognee root directory and have a fresh .env containing only your LLM_API_KEY (and your chosen settings).
    2. Remove any old image and rebuild:
       ```bash
       docker rmi cognee/cognee-mcp:main || true
       docker build --no-cache -f cognee-mcp/Dockerfile -t cognee/cognee-mcp:main .
       ```
    3. Run it:
       ```bash
       # For HTTP transport (recommended for web deployments)
       docker run -e TRANSPORT_MODE=http --env-file ./.env -p 8000:8000 --rm -it cognee/cognee-mcp:main

       # For SSE transport
       docker run -e TRANSPORT_MODE=sse --env-file ./.env -p 8000:8000 --rm -it cognee/cognee-mcp:main

       # For stdio transport (default)
       docker run -e TRANSPORT_MODE=stdio --env-file ./.env --rm -it cognee/cognee-mcp:main
       ```
  2. Pull from Docker Hub (no build required):
     ```bash
     # With HTTP transport (recommended for web deployments)
     docker run -e TRANSPORT_MODE=http --env-file ./.env -p 8000:8000 --rm -it cognee/cognee-mcp:main

     # With SSE transport
     docker run -e TRANSPORT_MODE=sse --env-file ./.env -p 8000:8000 --rm -it cognee/cognee-mcp:main

     # With stdio transport (default)
     docker run -e TRANSPORT_MODE=stdio --env-file ./.env --rm -it cognee/cognee-mcp:main
     ```
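If you prefer Docker Compose, the same run options translate directly into a compose service. This is a minimal sketch derived from the commands above, not an official compose file from the repo:

```yaml
services:
  cognee-mcp:
    image: cognee/cognee-mcp:main
    env_file: ./.env
    environment:
      TRANSPORT_MODE: http   # or "sse" / "stdio"
    ports:
      - "8000:8000"
```

Start it with `docker compose up`.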

Important: Docker vs Direct Usage

Docker uses environment variables, not command line arguments:

  • ✅ Docker: `-e TRANSPORT_MODE=http`
  • ❌ Docker: `--transport http` (won't work)

Direct Python usage uses command line arguments:

  • ✅ Direct: `python src/server.py --transport http`
  • ❌ Direct: `-e TRANSPORT_MODE=http` (won't work)
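The reason for the difference is that the container decides the transport at startup, presumably by mapping TRANSPORT_MODE onto the server's --transport flag. A simplified illustration of such a mapping (not the actual entrypoint script) is:

```bash
#!/bin/sh
# Illustrative only: translate the TRANSPORT_MODE env var into CLI flags.
case "${TRANSPORT_MODE:-stdio}" in
  http) exec python src/server.py --transport http --host 0.0.0.0 --port 8000 --path /mcp ;;
  sse)  exec python src/server.py --transport sse ;;
  *)    exec python src/server.py ;;
esac
```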

🔗 MCP Client Configuration

After starting your Cognee MCP server with Docker, you need to configure your MCP client to connect to it.

Start the server with SSE transport:

```bash
docker run -e TRANSPORT_MODE=sse --env-file ./.env -p 8000:8000 --rm -it cognee/cognee-mcp:main
```

Configure your MCP client:

Claude CLI (Easiest)

```bash
claude mcp add cognee-sse -t sse http://localhost:8000/sse
```

Verify the connection:

```bash
claude mcp list
```

You should see your server connected:

```
Checking MCP server health...

cognee-sse: http://localhost:8000/sse (SSE) - ✓ Connected
```

Manual Configuration

Claude (~/.claude.json)

{ "mcpServers": { "cognee": { "type": "sse", "url": "http://localhost:8000/sse" } } }

Cursor (~/.cursor/mcp.json)

{ "mcpServers": { "cognee-sse": { "url": "http://localhost:8000/sse" } } }

HTTP Transport Configuration (Alternative)

Start the server with HTTP transport:

```bash
docker run -e TRANSPORT_MODE=http --env-file ./.env -p 8000:8000 --rm -it cognee/cognee-mcp:main
```

Configure your MCP client:

Claude CLI (Easiest)

```bash
claude mcp add cognee-http -t http http://localhost:8000/mcp
```

Verify the connection:

```bash
claude mcp list
```

You should see your server connected:

```
Checking MCP server health...

cognee-http: http://localhost:8000/mcp (HTTP) - ✓ Connected
```

Manual Configuration

Claude (~/.claude.json)

{ "mcpServers": { "cognee": { "type": "http", "url": "http://localhost:8000/mcp" } } }

Cursor (~/.cursor/mcp.json)

{ "mcpServers": { "cognee-http": { "url": "http://localhost:8000/mcp" } } }

Dual Configuration Example

You can configure both transports simultaneously for testing:

{ "mcpServers": { "cognee-sse": { "type": "sse", "url": "http://localhost:8000/sse" }, "cognee-http": { "type": "http", "url": "http://localhost:8000/mcp" } } }

Note: Only enable the server you're actually running to avoid connection errors.

💻 Basic Usage

The MCP server exposes its functionality through tools. Call them from any MCP client (Cursor, Claude Desktop, Cline, Roo, and more).

Available Tools

  • cognify: Turns your data into a structured knowledge graph and stores it in memory
  • codify: Analyzes a code repository, builds a code graph, and stores it in memory
  • search: Query memory – supports GRAPH_COMPLETION, RAG_COMPLETION, CODE, CHUNKS, INSIGHTS
  • list_data: List all datasets and their data items with IDs for deletion operations
  • delete: Delete specific data from a dataset (supports soft/hard deletion modes)
  • prune: Reset cognee for a fresh start (removes all data)
  • cognify_status / codify_status: Track pipeline progress

Data Management Examples:

```python
# List all available datasets and data items
list_data()

# List data items in a specific dataset
list_data(dataset_id="your-dataset-id-here")

# Delete specific data (soft deletion - safer, preserves shared entities)
delete(data_id="data-uuid", dataset_id="dataset-uuid", mode="soft")

# Delete specific data (hard deletion - removes orphaned entities)
delete(data_id="data-uuid", dataset_id="dataset-uuid", mode="hard")
```
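Beyond IDE clients, you can script these tools with the official MCP Python SDK. Below is a minimal sketch over stdio; the search argument names (`search_query`, `search_type`) are assumptions here, so verify them against the `list_tools()` output:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the cognee MCP server over stdio (run from cognee/cognee-mcp).
    params = StdioServerParameters(command="python", args=["src/server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the exact tool schemas before calling anything.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Hypothetical argument names -- check the schema printed above.
            result = await session.call_tool(
                "search",
                arguments={
                    "search_query": "What do my notes say about transports?",
                    "search_type": "GRAPH_COMPLETION",
                },
            )
            print(result.content)


asyncio.run(main())
```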

Development and Debugging

Debugging

To use the debugger, run:

```bash
mcp dev src/server.py
```

Open the inspector with the timeout passed as a query parameter: http://localhost:5173?timeout=120000

To apply new changes while developing cognee, you need to:

  1. Update dependencies in the cognee folder if needed
  2. Run `uv sync --dev --all-extras --reinstall`
  3. Restart the server with `mcp dev src/server.py`

Development

To use a local cognee build:

  1. Uncomment the following line in the cognee-mcp pyproject.toml file and set your cognee root path:
     ```toml
     #"cognee[postgres,codegraph,gemini,huggingface,docs,neo4j] @ file:/Users/<username>/Desktop/cognee"
     ```
     Remember to replace file:/Users/<username>/Desktop/cognee with your actual cognee root path.
  2. Install dependencies with uv in the mcp folder:
     ```bash
     uv sync --reinstall
     ```
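After uncommenting, the dependencies section of pyproject.toml might look roughly like this (an illustrative excerpt, not the full file):

```toml
[project]
dependencies = [
    # ... the existing dependencies stay as they are ...
    "cognee[postgres,codegraph,gemini,huggingface,docs,neo4j] @ file:/Users/<username>/Desktop/cognee",
]
```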

Code of Conduct

We are committed to making open source an enjoyable and respectful experience for our community. See CODE_OF_CONDUCT for more information.

💫 Contributors

Star History
