Qdrant MCP Server
A Model Context Protocol (MCP) server that provides semantic search capabilities using a local Qdrant vector database and OpenAI embeddings.
Features
- Semantic Search: Natural language search across your document collections
- Metadata Filtering: Filter search results by metadata fields using Qdrant's powerful filter syntax
- Local Vector Database: Runs Qdrant locally via Docker for complete data privacy
- Automatic Embeddings: Uses OpenAI's embedding models to convert text to vectors
- Rate Limiting: Intelligent request throttling with exponential backoff to prevent API rate limit errors
- MCP Integration: Works seamlessly with Claude Code and other MCP clients
- Collection Management: Create, list, and delete vector collections
- Document Operations: Add, search, and delete documents with metadata support
Prerequisites
- Node.js 18+
- Docker and Docker Compose
- OpenAI API key
Installation
Clone the repository:
Install dependencies:
Set up environment variables:
Edit `.env` and add your OpenAI API key:
Start Qdrant:
Build the project:
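The commands for the steps above might look like the following (the repository URL is a placeholder, and a `.env.example` template is assumed to exist — adjust to your setup):

```shell
# 1. Clone the repository (placeholder URL)
git clone https://github.com/YOUR_USERNAME/qdrant-mcp-server.git
cd qdrant-mcp-server

# 2. Install dependencies
npm install

# 3. Set up environment variables
cp .env.example .env

# 4. Edit .env and add your OpenAI API key
echo "OPENAI_API_KEY=sk-your-key-here" >> .env

# 5. Start Qdrant via Docker Compose
docker compose up -d

# 6. Build the project
npm run build
```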
Usage
Running the Server
For development:
For production:
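Assuming conventional npm scripts (the exact script names may differ in your `package.json`):

```shell
# Development (auto-reload)
npm run dev

# Production (run the compiled build)
npm start
```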
Claude Code Configuration (Linux)
Add this to your Claude Code configuration file at `~/.claude/claude_code_config.json`:
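A sketch of the entry (the server name, the `dist/index.js` build-output path, and the env key placement are illustrative):

```json
{
  "mcpServers": {
    "qdrant": {
      "command": "node",
      "args": ["/home/YOUR_USERNAME/qdrant-mcp-server/dist/index.js"],
      "env": {
        "OPENAI_API_KEY": "sk-your-key-here"
      }
    }
  }
}
```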
Replace `YOUR_USERNAME` and the path with your actual username and installation path.
Restart Claude Code after making this change.
Available Tools
create_collection
Create a new vector collection.
Parameters:
- `name` (string, required): Collection name
- `distance` (string, optional): Distance metric - "Cosine", "Euclid", or "Dot" (default: "Cosine")
Example:
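For example, the tool arguments might look like this (collection name is illustrative):

```json
{
  "name": "knowledge-base",
  "distance": "Cosine"
}
```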
add_documents
Add documents to a collection with automatic embedding generation.
Parameters:
- `collection` (string, required): Collection name
- `documents` (array, required): Array of documents with:
  - `id` (string/number, required): Unique identifier (string IDs are automatically normalized to UUID format)
  - `text` (string, required): Document text content
  - `metadata` (object, optional): Additional metadata
Note: String IDs are automatically normalized to UUID format for Qdrant compatibility. The normalization is deterministic, so the same string ID will always produce the same UUID.
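One way such deterministic normalization can be implemented is a UUIDv5-style scheme — a sketch, not necessarily this server's exact algorithm: hash the string ID with SHA-1 under a fixed namespace and format the first 16 bytes as a UUID, so the same input always yields the same UUID.

```typescript
import { createHash } from "node:crypto";

// Fixed namespace (the RFC 4122 DNS namespace, used here for illustration).
const NAMESPACE = "6ba7b810-9dad-11d1-80b4-00c04fd430c8";

// Deterministically map an arbitrary string ID to a UUIDv5-style string.
function normalizeId(id: string): string {
  const ns = Buffer.from(NAMESPACE.replace(/-/g, ""), "hex");
  const hash = createHash("sha1").update(ns).update(id).digest();
  const bytes = Buffer.from(hash.subarray(0, 16));
  bytes[6] = (bytes[6] & 0x0f) | 0x50; // set version nibble to 5
  bytes[8] = (bytes[8] & 0x3f) | 0x80; // set RFC 4122 variant bits
  const hex = bytes.toString("hex");
  return `${hex.slice(0, 8)}-${hex.slice(8, 12)}-${hex.slice(12, 16)}-${hex.slice(16, 20)}-${hex.slice(20)}`;
}

// Same input, same UUID every time.
console.log(normalizeId("doc1") === normalizeId("doc1")); // true
```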
Example:
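For example (document content taken from the Example Workflow below is illustrative):

```json
{
  "collection": "knowledge-base",
  "documents": [
    {
      "id": "doc1",
      "text": "MCP is a protocol for AI model context",
      "metadata": { "type": "definition", "category": "protocol" }
    }
  ]
}
```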
semantic_search
Search for documents using natural language.
Parameters:
- `collection` (string, required): Collection to search
- `query` (string, required): Search query
- `limit` (number, optional): Max results (default: 5)
- `filter` (object, optional): Metadata filter in Qdrant format
Filter Format:
The filter parameter accepts Qdrant's native filter format for powerful metadata-based filtering:
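A minimal match filter looks like this (field name and value are illustrative):

```json
{
  "must": [
    { "key": "category", "match": { "value": "database" } }
  ]
}
```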
You can also use more complex filters:
- Multiple conditions (AND): Use `must` with multiple conditions
- Any condition (OR): Use `should` with multiple conditions
- Negation (NOT): Use `must_not` with conditions
Example with multiple conditions:
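For instance, requiring two metadata fields to match simultaneously (field names are illustrative):

```json
{
  "must": [
    { "key": "category", "match": { "value": "database" } },
    { "key": "type", "match": { "value": "definition" } }
  ]
}
```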
Examples:
Basic search:
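For example (collection name is illustrative):

```json
{
  "collection": "knowledge-base",
  "query": "how does semantic search work",
  "limit": 5
}
```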
With single filter:
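For example (collection and field names are illustrative):

```json
{
  "collection": "knowledge-base",
  "query": "vector database",
  "filter": {
    "must": [{ "key": "category", "match": { "value": "database" } }]
  }
}
```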
With multiple filters (AND):
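For example (collection and field names are illustrative):

```json
{
  "collection": "knowledge-base",
  "query": "vector database",
  "filter": {
    "must": [
      { "key": "category", "match": { "value": "database" } },
      { "key": "type", "match": { "value": "definition" } }
    ]
  }
}
```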
list_collections
List all available collections.
get_collection_info
Get detailed information about a collection.
Parameters:
- `name` (string, required): Collection name
delete_collection
Delete a collection and all its documents.
Parameters:
- `name` (string, required): Collection name
delete_documents
Delete specific documents from a collection.
Parameters:
- `collection` (string, required): Collection name
- `ids` (array, required): Array of document IDs to delete (string IDs are automatically normalized to UUID format)
Note: String IDs will be normalized to the same UUID format used when adding documents.
Available Resources
- `qdrant://collections` - List all collections
- `qdrant://collection/{name}` - Collection details and statistics
Project Structure
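A likely layout, inferred from the test-file paths listed under Testing (the Compose and template files are assumptions; the actual structure may differ):

```
qdrant-mcp-server/
├── src/
│   ├── index.ts             # MCP server entry point
│   ├── qdrant/
│   │   └── client.ts        # Qdrant client wrapper
│   └── embeddings/
│       └── openai.ts        # OpenAI embeddings provider
├── docker-compose.yml       # Local Qdrant service
└── .env.example             # Environment variable template
```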
Example Workflow
Create a collection:

> Create a collection called "knowledge-base"

Add documents:

> Add these documents to knowledge-base:
> - id: "doc1", text: "MCP is a protocol for AI model context", metadata: {"type": "definition", "category": "protocol"}
> - id: "doc2", text: "Vector databases store embeddings for semantic search", metadata: {"type": "definition", "category": "database"}
> - id: "doc3", text: "Qdrant provides high-performance vector similarity search", metadata: {"type": "product", "category": "database"}

Search without filters:

> Search knowledge-base for "how does semantic search work"

Search with filters:

> Search knowledge-base for "vector database" with filter {"must": [{"key": "category", "match": {"value": "database"}}]}

Get collection information:

> Get info about "knowledge-base" collection

View all collections:

> What collections do I have?
Configuration Options
Environment Variables
- `OPENAI_API_KEY` (required): Your OpenAI API key
- `QDRANT_URL` (optional): Qdrant server URL (default: http://localhost:6333)
- `OPENAI_EMBEDDING_MODEL` (optional): Embedding model (default: text-embedding-3-small)
- `OPENAI_EMBEDDING_DIMENSIONS` (optional): Custom embedding dimensions
- `OPENAI_MAX_REQUESTS_PER_MINUTE` (optional): Maximum OpenAI API requests per minute (default: 3500)
- `OPENAI_RETRY_ATTEMPTS` (optional): Number of retry attempts for rate limit errors (default: 3)
- `OPENAI_RETRY_DELAY` (optional): Initial retry delay in milliseconds with exponential backoff (default: 1000)
Embedding Models
Available OpenAI models:
- `text-embedding-3-small` (1536 dims, faster, cheaper)
- `text-embedding-3-large` (3072 dims, higher quality)
Advanced Features
Rate Limiting and Error Handling
The server implements robust rate limiting to handle OpenAI API limits gracefully:
Features:
Request Throttling: Queues requests to stay within OpenAI's rate limits (configurable, default: 3500 requests/minute)
Exponential Backoff: Automatically retries failed requests with increasing delays (1s, 2s, 4s, 8s...)
Retry-After Header Support: Respects OpenAI's retry guidance with validation fallback for invalid headers
Typed Error Handling: Uses OpenAIError interface for type-safe error detection and handling
Smart Error Detection: Identifies rate limit errors (429 status) vs other failures
User Feedback: Clear console messages during retry attempts with estimated wait times
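The retry-delay logic described above can be sketched as follows. This is a simplification, not the server's actual implementation; the 300-second sanity cap on Retry-After is an assumption:

```typescript
// Backoff policy sketch: prefer a valid Retry-After header,
// otherwise fall back to exponential backoff (1s, 2s, 4s, 8s, ...).
function retryDelayMs(
  attempt: number,            // 0-based retry attempt
  retryAfterHeader?: string,  // raw Retry-After header value, if any
  baseMs = 1000,              // initial delay (OPENAI_RETRY_DELAY)
): number {
  const retryAfter = Number(retryAfterHeader);
  // Respect the server's guidance only when it parses to a sane value...
  if (Number.isFinite(retryAfter) && retryAfter > 0 && retryAfter <= 300) {
    return retryAfter * 1000;
  }
  // ...otherwise use exponential backoff.
  return baseMs * 2 ** attempt;
}

console.log(retryDelayMs(0));           // 1000 (1s)
console.log(retryDelayMs(3));           // 8000 (8s)
console.log(retryDelayMs(0, "7"));      // 7000 (honors Retry-After)
console.log(retryDelayMs(1, "banana")); // 2000 (invalid header -> backoff)
```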
Configuration:
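The relevant environment variables, with their documented defaults:

```
OPENAI_MAX_REQUESTS_PER_MINUTE=3500
OPENAI_RETRY_ATTEMPTS=3
OPENAI_RETRY_DELAY=1000
```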
Benefits:
Prevents failed operations during high-volume usage
Automatic recovery from temporary API issues
Optimized for batch document processing
Works seamlessly with both single and batch embedding operations
Metadata Filtering
The server supports Qdrant's powerful filtering capabilities for refined search results. Filters can be applied to any metadata field stored with your documents.
Supported filter types:
- Match filters: Exact value matching for strings, numbers, and booleans
- Logical operators: `must` (AND), `should` (OR), `must_not` (NOT)
- Range filters: Greater than, less than, between (for numeric values)
- Nested filters: Complex boolean expressions
See the `semantic_search` tool documentation for filter syntax examples.
Troubleshooting
Qdrant connection errors
Make sure Qdrant is running:
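For example, assuming the default `QDRANT_URL`, query the REST API:

```shell
curl http://localhost:6333/collections
```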
If not running:
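Assuming the project's Docker Compose file:

```shell
docker compose up -d
```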
Collection doesn't exist
Create the collection first before adding documents:
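For example, ask Claude (collection name is illustrative):

```
Create a collection called "my-collection"
```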
OpenAI API errors
- Verify your API key is correct in `.env`
- Check that your OpenAI account has available credits
- Ensure you have access to the embedding models
Rate limit errors
The server automatically handles rate limits, but if you see persistent rate limit errors:
- Reduce `OPENAI_MAX_REQUESTS_PER_MINUTE` to match your OpenAI tier (free: 500, paid: 3500+)
- Increase `OPENAI_RETRY_ATTEMPTS` for more resilient retries
- Check your OpenAI dashboard for current usage and limits
Filter errors
If you encounter "Bad Request" errors with filters:
Ensure you're using Qdrant's native filter format
Check that field names match your metadata exactly
Verify the filter structure has proper nesting
Development
Development Mode
Run in development mode with auto-reload:
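Assuming a conventional `dev` script (the exact name may differ):

```shell
npm run dev
```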
Build
Build for production:
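```shell
npm run build
```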
Type Checking
Run TypeScript type checking without emitting files:
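```shell
npm run type-check
```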
Continuous Integration
The project uses GitHub Actions for CI/CD:
Build: Compiles TypeScript to JavaScript
Type Check: Validates TypeScript types with strict mode
Test: Runs all 140 unit and functional tests (129 unit + 11 functional)
Multi-version: Tests on Node.js 18, 20, and 22
The CI workflow runs on every push and pull request to the main branch.
Testing
The project includes comprehensive unit and integration tests using Vitest.
Run Tests
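With Vitest, a single non-interactive pass uses the `--run` flag; `npm test` without it is assumed to start watch mode:

```shell
# Run the full suite once
npm test -- --run

# Watch mode during development
npm test
```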
Test Coverage
The test suite includes 140 tests (129 unit + 11 functional) covering:
Unit Tests:
QdrantManager (21 tests): Collection management, point operations, and search functionality
OpenAIEmbeddings (25 tests): Embedding generation, batch processing, rate limiting with exponential backoff, Retry-After header validation, and typed error handling
MCP Server (19 tests): Tool schemas, resource URI patterns, and MCP protocol compliance
Functional Tests:
Live API Integration (11 tests): Real OpenAI embeddings, production MCP server validation, rate limiting behavior, and end-to-end workflows with 30+ real documents
Coverage Highlights:
✅ 100% function coverage across all modules
✅ Comprehensive rate limiting tests with timing validation
✅ Typed error handling with OpenAIError interface
✅ Invalid Retry-After header fallback to exponential backoff
✅ Real-world validation with live OpenAI API
✅ Both source and compiled code tested
See `docs/test_report.md` for detailed test results and coverage analysis.
Writing Tests
Tests are located next to the files they test with a `.test.ts` extension:

- `src/qdrant/client.test.ts` - Qdrant client wrapper tests
- `src/embeddings/openai.test.ts` - OpenAI embeddings provider tests
- `src/index.test.ts` - MCP server integration tests
Run tests before committing:
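```shell
npm test -- --run
```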
License
This project is licensed under the MIT License - see the LICENSE file for details.
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
Before Submitting a PR
- Run tests: ensure all tests pass with `npm test -- --run`
- Type check: verify no TypeScript errors with `npm run type-check`
- Build: confirm the project builds successfully with `npm run build`
All pull requests will automatically run through CI checks that validate:
TypeScript compilation
Type checking
Test suite (140 tests)
Compatibility with Node.js 18, 20, and 22
Note for Repository Owners
After forking or cloning, update the CI badge URL in README.md:
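A typical GitHub Actions badge looks like this (both the repository name and the `ci.yml` workflow file name are assumptions — match them to your fork):

```markdown
[![CI](https://github.com/YOUR_USERNAME/qdrant-mcp-server/actions/workflows/ci.yml/badge.svg)](https://github.com/YOUR_USERNAME/qdrant-mcp-server/actions/workflows/ci.yml)
```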
Replace `YOUR_USERNAME` with your GitHub username or organization name.