Provides persistent vector memory storage and semantic search capabilities for the Windsurf editor, allowing for project-specific context retrieval and documentation ingestion.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type `@` followed by the MCP server name and your instructions, e.g., "@MCP Memory Server search for architectural decisions in project-thaama".
That's it! The server will respond to your query, and you can continue using it as needed.
# MCP Memory Server
A persistent vector memory server for Windsurf, VS Code, and other MCP-compliant editors.
## Features

- **Local Vectors**: Uses `LanceDB` and `all-MiniLM-L6-v2` locally. No API keys required.
- **Persistence**: Memories are saved to disk (`./mcp_memory_data`).
- **Isolation**: Supports multiple projects via `project_id`.
## Installation
You need Python 3.10+ installed.
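You can confirm that your interpreter meets this requirement before proceeding:

```bash
python3 --version   # should report 3.10 or newer
```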
1. **Set up a virtual environment.** It's recommended to use a virtual environment to avoid conflicts.

   ```bash
   python3 -m venv .venv
   source .venv/bin/activate
   ```

2. **Install dependencies.**

   ```bash
   pip install -e .
   ```
## Configuration

Add this to your `mcpServers` configuration (e.g., in `~/.codeium/windsurf/mcp_config.json` or VS Code MCP settings).

### Windsurf / VS Code Config

Replace `/ABSOLUTE/PATH/TO/...` with the actual path to this directory.
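As a rough sketch (not this repository's exact snippet), an entry could look like the following. It assumes the server is launched with the virtual environment's Python via a `mcp_memory_server` module; the server key and launch command are assumptions, so adjust them to whatever entry point this package actually provides.

```json
{
  "mcpServers": {
    "mcp-memory": {
      "command": "/ABSOLUTE/PATH/TO/.venv/bin/python",
      "args": ["-m", "mcp_memory_server"]
    }
  }
}
```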
## Usage

### Ingestion

Use the included helper script `ingest.sh` to ingest context files.

**Real Example (Project "Thaama"):**
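The exact invocation depends on the arguments `ingest.sh` accepts; the sketch below assumes it takes a project id followed by the files to ingest, so check the script before running it.

```bash
# Hypothetical invocation -- see ingest.sh for the actual interface
./ingest.sh project-thaama docs/architecture.md docs/decisions/*.md
```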
## Tools

The server exposes:

- `memory_search(project_id, q)`
- `memory_add(project_id, id, text)`
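For reference, a `memory_add` tool call carries the tool name plus these arguments in the standard MCP `tools/call` shape; the values below are purely illustrative.

```json
{
  "name": "memory_add",
  "arguments": {
    "project_id": "project-thaama",
    "id": "adr-001",
    "text": "We store vectors locally in LanceDB; no external API is used."
  }
}
```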
## Troubleshooting

- **First Run**: On first start, the server downloads the embedding model (approx. 100 MB); this may take a few moments.
- **Logs**: Check the editor's MCP logs if the connection fails.