# MCP Qdrant Codebase Embeddings
This MCP (Model Context Protocol) server uses the Qdrant vector database to provide semantic understanding of your codebase. It enables AI assistants to understand relationships between files, find similar code patterns, and gain architectural insights.
## Features
- **Semantic File Search**: Find files based on their purpose or functionality
- **Relationship Discovery**: Understand how files relate to each other (imports, exports, tests, etc.)
- **Architecture Analysis**: Get insights into code structure, dependencies, and potential issues
- **Refactoring Suggestions**: Find candidates for refactoring based on various criteria
- **Code Pattern Recognition**: Identify similar code patterns across the codebase
## Setup

### Prerequisites

- Qdrant: A running Qdrant instance (e.g. via a Docker container)
- Node.js: Version 18 or higher
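If you don't have a Qdrant instance yet, the official Docker image is the quickest way to start one on the default port:

```
docker run -p 6333:6333 qdrant/qdrant
```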
### Installation

1. Install dependencies
2. Configure environment
3. Index your codebase
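Assuming a standard npm setup (the script names below are assumptions, not confirmed by this README), the three steps typically look like:

```
npm install            # 1. install dependencies
cp .env.example .env   # 2. configure environment (then edit .env)
npm run index          # 3. index your codebase
```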
### Running the Server
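With dependencies installed and `.env` configured, start the server (assuming the conventional npm `start` script):

```
npm start
```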
## MCP Tools Available
### 1. find_related_files
Find files that are semantically related to a given file.
Parameters:

- `filePath`: Path to the file to find relationships for
- `limit`: Maximum number of related files to return (default: 10)
Example:
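A hypothetical `tools/call` request body (the file path is illustrative):

```json
{
  "name": "find_related_files",
  "arguments": {
    "filePath": "src/services/userService.ts",
    "limit": 5
  }
}
```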
### 2. search_by_purpose
Search for files that match a specific purpose or functionality.
Parameters:

- `purpose`: Description of the purpose or functionality
- `limit`: Maximum number of files to return (default: 10)
Example:
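A hypothetical invocation (the purpose string is illustrative):

```json
{
  "name": "search_by_purpose",
  "arguments": {
    "purpose": "handles user authentication and session tokens",
    "limit": 5
  }
}
```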
### 3. analyze_architecture
Analyze the codebase architecture and identify patterns, issues, and insights.
Parameters:

- `rootPath`: Root path to analyze (optional, defaults to current directory)
Example:
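A hypothetical invocation (the path is illustrative):

```json
{
  "name": "analyze_architecture",
  "arguments": {
    "rootPath": "./src"
  }
}
```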
### 4. find_refactoring_candidates
Find files that match criteria for refactoring.
Parameters:
criteria
: Description of what needs refactoringlimit
: Maximum number of files to return (default: 20)
Example:
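A hypothetical invocation (the criteria string is illustrative):

```json
{
  "name": "find_refactoring_candidates",
  "arguments": {
    "criteria": "large files with many exports and high complexity",
    "limit": 10
  }
}
```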
### 5. explain_file_relationship
Explain the relationship between two files.
Parameters:

- `sourceFile`: Path to the source file
- `targetFile`: Path to the target file
Example:
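A hypothetical invocation (both paths are illustrative):

```json
{
  "name": "explain_file_relationship",
  "arguments": {
    "sourceFile": "src/services/userService.ts",
    "targetFile": "src/services/userService.test.ts"
  }
}
```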
## How It Works
1. **Indexing**: The system analyzes each file in your codebase and generates embeddings that capture:
   - File structure and patterns
   - Imports and exports
   - Classes, functions, and interfaces
   - Code complexity and style
2. **Vector Storage**: These embeddings are stored in Qdrant, enabling fast similarity searches.
3. **Relationship Analysis**: The system can identify various types of relationships:
   - Imports: Direct dependency relationships
   - Exports: Provider-consumer relationships
   - Similar: Files with similar structure or purpose
   - Complementary: Files that work together
   - Test: Test files and their implementations
   - Implementation: Interface/type implementations
4. **Semantic Search**: When you search by purpose, the system:
   1. Generates an embedding for your query
   2. Finds files with similar embeddings
   3. Returns the most relevant matches
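The search steps can be illustrated with an in-memory store in place of Qdrant. The embeddings and file paths below are toy values, and `searchByVector` is a simplified stand-in for the real Qdrant query:

```typescript
// Illustration of the semantic-search flow with an in-memory index.
// Real embeddings have many more dimensions; these toy vectors just
// show how cosine similarity ranks files against a query embedding.

type Indexed = { path: string; vector: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Steps 2–3: find stored files whose embeddings are closest to the query's.
function searchByVector(query: number[], index: Indexed[], limit = 10): Indexed[] {
  return [...index]
    .sort((x, y) => cosineSimilarity(query, y.vector) - cosineSimilarity(query, x.vector))
    .slice(0, limit);
}

const index: Indexed[] = [
  { path: "src/auth/login.ts",  vector: [0.9, 0.1, 0.0] },
  { path: "src/db/client.ts",   vector: [0.1, 0.9, 0.2] },
  { path: "src/auth/logout.ts", vector: [0.8, 0.2, 0.1] },
];

// Step 1 would produce a query embedding, e.g. for "authentication handling":
const results = searchByVector([1, 0, 0], index, 2);
console.log(results.map((r) => r.path)); // the two auth-related files rank first
```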
## Configuration
### Using OpenAI Embeddings (Recommended)
For best results, provide an OpenAI API key in your `.env` file. This uses OpenAI's text-embedding-ada-002 model.
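A minimal `.env` might look like this (`QDRANT_URL` is referenced later in this README; `OPENAI_API_KEY` is the conventional variable name and an assumption here):

```
OPENAI_API_KEY=your-api-key
QDRANT_URL=http://localhost:6333
```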
### Using Local Embeddings
If no OpenAI API key is provided, the system falls back to local embeddings based on code patterns. While less sophisticated, this still provides useful results without external dependencies.
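A pattern-based local embedding can be sketched as follows. This is a hypothetical simplification (count a fixed vocabulary of code keywords and normalize); the actual fallback is more elaborate, but the principle is the same:

```typescript
// Hypothetical sketch of a pattern-based local embedding: each dimension
// counts occurrences of one code keyword, and the vector is normalized
// to unit length so cosine similarity behaves well.

const VOCAB = ["import", "export", "class", "function", "interface", "test"];

function localEmbedding(source: string): number[] {
  const counts = VOCAB.map(
    (kw) => (source.match(new RegExp(`\\b${kw}\\b`, "g")) ?? []).length
  );
  const norm = Math.sqrt(counts.reduce((s, c) => s + c * c, 0)) || 1;
  return counts.map((c) => c / norm); // unit-length vector
}
```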
## Development
### Building
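Assuming the conventional npm script for a TypeScript project (the script name is not confirmed by this README):

```
npm run build
```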
### Testing
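Assuming the standard npm test script:

```
npm test
```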
### Adding New Tools
To add new MCP tools:

1. Add the tool definition in `src/index.ts` in the `ListToolsRequestSchema` handler
2. Add the handler in the `CallToolRequestSchema` switch statement
3. Implement the logic in the appropriate service
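A sketch of steps 1 and 2 using the MCP TypeScript SDK's request handlers (the tool name, schema, and handler body are illustrative, not taken from this project's source):

```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "mcp-qdrant-codebase", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Step 1: advertise the new tool in the ListTools response.
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "my_new_tool", // hypothetical tool name
      description: "Describe what the tool does",
      inputSchema: {
        type: "object",
        properties: { filePath: { type: "string" } },
        required: ["filePath"],
      },
    },
  ],
}));

// Step 2: handle calls to it in the CallTool switch.
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  switch (request.params.name) {
    case "my_new_tool":
      // Step 3: delegate to the appropriate service here.
      return { content: [{ type: "text", text: "..." }] };
    default:
      throw new Error(`Unknown tool: ${request.params.name}`);
  }
});
```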
## Troubleshooting
### Qdrant Connection Issues
- Ensure Qdrant is running on the configured URL (default: http://localhost:6333)
- Check that the port is not blocked by a firewall
- Verify `QDRANT_URL` in your `.env` file
### Indexing Issues
- Check file permissions for the codebase directory
- Ensure sufficient memory for large codebases
- Monitor Qdrant logs for storage issues
### Embedding Generation Issues
- If using OpenAI, verify that your API key is valid
- Check rate limits if indexing large codebases
- Consider using local embeddings for development
## Integration with AI Assistants
This MCP server is designed to work with AI assistants that support the Model Context Protocol. Configure your AI assistant to connect to this server to enable codebase-aware responses.
### Claude Desktop Configuration
Add to your Claude desktop configuration:
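A typical entry looks like the following (the server name and path are placeholders; adjust them to where you built this server):

```json
{
  "mcpServers": {
    "codebase-embeddings": {
      "command": "node",
      "args": ["/path/to/mcp-qdrant-codebase/dist/index.js"],
      "env": {
        "QDRANT_URL": "http://localhost:6333"
      }
    }
  }
}
```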
## License
This tool is part of the FamilyManager project and follows the same license terms.