# Qdrant MCP Server
A Model Context Protocol (MCP) server that provides semantic code search using the Qdrant vector database and OpenAI embeddings.
## Features
- 🔍 Semantic Code Search - Find code by meaning, not just keywords
- 🚀 Fast Indexing - Efficient incremental indexing of large codebases
- 🤖 MCP Integration - Works seamlessly with Claude and other MCP clients
- 📊 Background Monitoring - Automatic reindexing of changed files
- 🎯 Smart Filtering - Respects .gitignore and custom patterns
- 💾 Persistent Storage - Embeddings stored in Qdrant for fast retrieval
## Installation

### Prerequisites
- Node.js 18+
- Python 3.8+
- Docker (for Qdrant) or Qdrant Cloud account
- OpenAI API key
### Quick Start
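The commands below are a sketch: starting Qdrant in Docker is described in the prerequisites, but the npm package name shown here is illustrative — substitute this project's actual package.

```shell
# Start a local Qdrant instance, persisting data to ./qdrant_storage
docker run -d -p 6333:6333 \
  -v "$(pwd)/qdrant_storage:/qdrant/storage" \
  qdrant/qdrant

# Install the server globally (package name is a placeholder)
npm install -g qdrant-mcp-server
qdrant-mcp-server --help
```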
## Configuration

### Environment Variables

Create a `.env` file in your project root:
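A minimal example. `QDRANT_URL` is referenced elsewhere in this README; the other variable names are reasonable assumptions, not confirmed by the project:

```
OPENAI_API_KEY=sk-...             # your OpenAI API key
QDRANT_URL=http://localhost:6333  # local Docker instance
QDRANT_API_KEY=                   # only needed for Qdrant Cloud
```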
### MCP Configuration

Add the server to your Claude Desktop config (`~/.claude/config.json`):
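A sketch following the standard `mcpServers` config shape used by MCP clients; the server name, command, and args below are illustrative:

```json
{
  "mcpServers": {
    "qdrant-code-search": {
      "command": "npx",
      "args": ["qdrant-mcp-server"],
      "env": {
        "OPENAI_API_KEY": "sk-...",
        "QDRANT_URL": "http://localhost:6333"
      }
    }
  }
}
```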
## Usage

### Command Line Interface
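The `--batch-size` and `--delay` flags appear in the Troubleshooting section below; the command and subcommand names here are assumptions about the CLI's shape:

```shell
# Index the current project
qdrant-mcp-server index . --batch-size 10

# Search the index from the command line
qdrant-mcp-server search "where is user authentication handled?"

# Throttle requests if you hit OpenAI rate limits
qdrant-mcp-server index . --batch-size 5 --delay 1000
```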
### In Claude
Once configured, you can use natural language queries:
- "Find all authentication code"
- "Show me files that handle user permissions"
- "What code is similar to the PaymentService class?"
- "Find all API endpoints related to users"
- "Show me error handling patterns in the codebase"
### Programmatic Usage
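The sketch below shows the index-then-search flow conceptually. It is not the server's real API: the OpenAI embedding call and the Qdrant store are replaced with in-memory stand-ins (a toy hash "embedding" and an array) so the shape of the pipeline is clear and the snippet runs on its own.

```typescript
// Conceptual sketch: embed -> upsert -> cosine-similarity search.
// Real implementation would call OpenAI for vectors and Qdrant for storage.

type Point = { id: string; vector: number[]; payload: { path: string } };

// Stand-in for an embedding call: a deterministic, normalized toy hash.
function embed(text: string): number[] {
  const v = [0, 0, 0, 0];
  for (let i = 0; i < text.length; i++) v[i % 4] += text.charCodeAt(i);
  const norm = Math.hypot(...v) || 1;
  return v.map((x) => x / norm);
}

// Dot product of unit vectors = cosine similarity.
function cosine(a: number[], b: number[]): number {
  return a.reduce((s, x, i) => s + x * b[i], 0);
}

class InMemoryIndex {
  private points: Point[] = [];
  upsert(path: string, text: string): void {
    this.points.push({ id: path, vector: embed(text), payload: { path } });
  }
  search(query: string, limit = 3): string[] {
    const q = embed(query);
    return [...this.points]
      .sort((a, b) => cosine(q, b.vector) - cosine(q, a.vector))
      .slice(0, limit)
      .map((p) => p.payload.path);
  }
}

const index = new InMemoryIndex();
index.upsert("src/auth.ts", "verify password and issue session token");
index.upsert("src/billing.ts", "charge credit card and send invoice");
const hits = index.search("verify password and issue session token", 1);
```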
## Architecture
## Advanced Configuration
### Custom File Processors
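The interface below is hypothetical — the project's actual extension API may differ. It illustrates the idea: a processor turns one file into indexable text chunks with line metadata.

```typescript
// Hypothetical processor contract (names are illustrative).
interface Chunk { text: string; startLine: number; endLine: number }

interface FileProcessor {
  extensions: string[];                          // file types handled
  process(path: string, source: string): Chunk[]; // file -> chunks
}

// Example: chunk Markdown at each top-level heading.
const markdownProcessor: FileProcessor = {
  extensions: [".md"],
  process(_path, source) {
    const lines = source.split("\n");
    const chunks: Chunk[] = [];
    let start = 0;
    for (let i = 1; i <= lines.length; i++) {
      if (i === lines.length || lines[i].startsWith("# ")) {
        chunks.push({
          text: lines.slice(start, i).join("\n"),
          startLine: start + 1,
          endLine: i,
        });
        start = i;
      }
    }
    return chunks;
  },
};

const chunks = markdownProcessor.process("README.md", "# A\none\n# B\ntwo");
```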
### Embedding Models
Support for multiple embedding providers:
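The variable and model names below are assumptions — OpenAI is the only provider this README documents explicitly:

```
EMBEDDING_PROVIDER=openai
EMBEDDING_MODEL=text-embedding-3-small
```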
## Performance Optimization
### Batch Processing
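Embedding APIs accept multiple inputs per request, so chunks are grouped into fixed-size batches rather than sent one at a time. A minimal sketch of the batching step (the size is what the CLI's `--batch-size` flag would control):

```typescript
// Group items into fixed-size batches; the last batch may be smaller.
function toBatches<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

const batches = toBatches([1, 2, 3, 4, 5, 6, 7], 3);
// 7 items at size 3 -> batches of 3, 3, and 1
```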
### Incremental Indexing
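Incremental indexing avoids re-embedding unchanged files. One common approach, sketched here as an assumption about how this server might do it, is to record a content hash per file and re-embed only files whose hash changed since the last run:

```typescript
import { createHash } from "node:crypto";

function sha256(text: string): string {
  return createHash("sha256").update(text).digest("hex");
}

// Compare current file contents against hashes saved from the last run;
// new and modified files are returned, unchanged files are skipped.
function filesToReindex(
  current: Map<string, string>,        // path -> current content
  previousHashes: Map<string, string>, // path -> hash from last run
): string[] {
  const changed: string[] = [];
  for (const [path, content] of current) {
    if (previousHashes.get(path) !== sha256(content)) changed.push(path);
  }
  return changed;
}

const prev = new Map([["a.ts", sha256("old")], ["b.ts", sha256("same")]]);
const now = new Map([
  ["a.ts", "new"],        // modified
  ["b.ts", "same"],       // unchanged
  ["c.ts", "brand new"],  // added
]);
const changed = filesToReindex(now, prev);
```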
### Cost Estimation
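Embedding cost scales linearly with token count, so a back-of-the-envelope estimate only needs the total tokens and the per-token price. The rate below is an assumption — check OpenAI's current pricing page:

```typescript
// Assumed rate for a small embedding model; verify against current pricing.
const USD_PER_MILLION_TOKENS = 0.02;

function estimateCostUsd(totalTokens: number): number {
  return (totalTokens / 1_000_000) * USD_PER_MILLION_TOKENS;
}

// e.g. a 5,000-file codebase averaging 400 tokens per file = 2M tokens
const cost = estimateCostUsd(5_000 * 400);
```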
## Monitoring

### Web UI (Coming Soon)

### Logs

### Metrics
- Files indexed
- Tokens processed
- Search queries per minute
- Average response time
- Cache hit rate
## Troubleshooting

### Common Issues

**"Connection refused" error**
- Ensure Qdrant is running: `docker ps`
- Check that `QDRANT_URL` points to the right host and port
- Verify firewall settings
**"Rate limit exceeded" error**

- Reduce the batch size: `--batch-size 5`
- Add a delay between requests: `--delay 1000`
- Upgrade to a higher OpenAI usage tier
**"Out of memory" error**

- Process fewer files at once
- Increase the Node.js heap limit: `NODE_OPTIONS="--max-old-space-size=4096"`
- Use streaming mode for large files
### Debug Mode
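The flag and variable below are guesses at a typical Node CLI convention — check the server's `--help` output for the real options:

```shell
# Hypothetical: enable verbose logging for one run
DEBUG=1 qdrant-mcp-server index . --verbose
```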
## Contributing
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
### Development Setup
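A typical Node project setup, assuming standard npm scripts (the repository URL is a placeholder — use the project's actual repo):

```shell
git clone <repo-url> qdrant-mcp-server
cd qdrant-mcp-server
npm install
npm test
```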
## License
MIT License - see LICENSE for details.
## Acknowledgments
- Built for the Model Context Protocol
- Powered by Qdrant vector database
- Embeddings by OpenAI
- Originally developed for KinDash
## Support
- 📧 Email: support@kindash.app
- 💬 Discord: Join our community
- 🐛 Issues: GitHub Issues
- 📖 Docs: Full Documentation