# LlamaIndex Integration Implementation
This document describes the implementation of the LlamaIndex integration for the Persistent-Code MCP server.
## Overview
The LlamaIndex integration enhances the Persistent-Code MCP server with semantic search capabilities, allowing Claude to understand code based on meaning rather than just keywords.
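To make this concrete, here is a minimal, standalone sketch of meaning-based retrieval built directly on LlamaIndex. It is illustrative only (not the server's own code) and assumes llama-index >= 0.10, where the core classes live under `llama_index.core`, plus a configured embedding backend (LlamaIndex defaults to OpenAI embeddings, which require an API key):
```python
# Illustrative only: not persistent-code-mcp's own code.
# Assumes llama-index >= 0.10 and an embedding backend
# (defaults to OpenAI, so OPENAI_API_KEY must be set).
from llama_index.core import Document, VectorStoreIndex

# Treat a few toy "code components" as documents.
docs = [
    Document(text="def validate_user_input(form): ...",
             metadata={"name": "validate_user_input"}),
    Document(text="def render_sidebar(items): ...",
             metadata={"name": "render_sidebar"}),
]

index = VectorStoreIndex.from_documents(docs)
retriever = index.as_retriever(similarity_top_k=1)

# The query shares no keywords with "validate_user_input",
# but the embeddings place the two close together.
for hit in retriever.retrieve("function to check form data"):
    print(hit.node.metadata["name"], hit.score)
```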
## Implementation
The implementation consists of the following components:
1. **LlamaIndexManager**: A new module that encapsulates all LlamaIndex functionality (see the sketch after this list)
2. **KnowledgeGraph**: Updated to use the LlamaIndexManager for semantic operations
3. **Test Scripts**: Added examples to demonstrate the semantic search capabilities
4. **Documentation**: Updated to explain the LlamaIndex integration
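This document does not show the manager's actual interface; a plausible skeleton, with hypothetical method names, looks like this:
```python
# Hypothetical skeleton: the method names below are assumptions,
# not the actual persistent-code-mcp API.
class LlamaIndexManager:
    """Encapsulates all LlamaIndex functionality in one place."""

    def __init__(self, project_name: str):
        self.project_name = project_name
        self._index = None  # vector index, built lazily

    def index_component(self, component_id: str, source: str) -> None:
        """Embed a code component and add it to the vector index."""
        ...

    def semantic_search(self, query: str, top_k: int = 5) -> list:
        """Return the top_k components closest in meaning to the query."""
        ...
```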
## Benefits
The LlamaIndex integration provides the following benefits:
1. **Semantic Search**: Find code components based on their meaning
2. **Vector Embeddings**: Efficient similarity matching for code components
3. **Knowledge Graph**: Graph-based representation of code relationships
4. **Contextual Retrieval**: More relevant code suggestions
## Usage
To use the semantic search capabilities:
1. Ensure LlamaIndex is enabled in the config
```python
from persistent_code.config import config
config.set("llama_index", "enabled", True)
```
2. Use the `search_code` method with natural-language queries
```python
from persistent_code.knowledge_graph import KnowledgeGraph
kg = KnowledgeGraph("my_project")
results = kg.search_code("function to validate user input")
```
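The shape of `results` is not documented here; assuming each match carries the component name and a similarity score (hypothetical field names), consuming them might look like:
```python
# Hypothetical fields: `name` and `score` are assumptions,
# not confirmed by the persistent-code-mcp source.
for match in results:
    print(match["name"], match["score"])
```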
3. Try the semantic search CLI
```bash
python semantic_search_cli.py --project my_project --semantic --query "function to validate user input"
```
## Architecture
The architecture follows these design principles:
1. **Separation of Concerns**: LlamaIndex functionality is isolated in its own manager
2. **Error Handling**: Robust error handling so semantic search degrades gracefully to basic search (see the sketch after this list)
3. **Configuration**: Easy configuration of LlamaIndex features
4. **Backward Compatibility**: Maintains compatibility with existing code
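As a sketch of the second principle, the fallback could be implemented roughly like this (the helper names are hypothetical; the real `KnowledgeGraph` internals are not shown in this document):
```python
# Sketch of graceful fallback; `llama_index_manager`,
# `semantic_search`, and `_keyword_search` are hypothetical names.
import logging

logger = logging.getLogger(__name__)

def search_code(self, query: str):
    if self.llama_index_manager is not None:
        try:
            return self.llama_index_manager.semantic_search(query)
        except Exception as exc:
            logger.warning("Semantic search failed, falling back: %s", exc)
    # Basic keyword search is always available.
    return self._keyword_search(query)
```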
## Testing
To test the LlamaIndex integration:
```bash
python examples/test_semantic_search.py --project test_project --file examples/sample_code.py --query "function to handle authentication"
```
## Future Improvements
Potential future improvements include:
1. More sophisticated semantic triple extraction
2. Support for different embedding models
3. Caching of embeddings for better performance (one possible shape is sketched after this list)
4. Integration with external embedding services
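For the caching idea, one natural shape is a content-addressed cache keyed by a hash of the component source, so unchanged components are never re-embedded. The sketch below is purely illustrative; nothing here exists in persistent-code-mcp today:
```python
# Purely illustrative sketch of the embedding-cache improvement.
import hashlib

class EmbeddingCache:
    def __init__(self):
        self._store: dict[str, list[float]] = {}

    def get_or_compute(self, source: str, embed_fn):
        """Reuse a stored embedding when the source text is unchanged."""
        key = hashlib.sha256(source.encode("utf-8")).hexdigest()
        if key not in self._store:
            self._store[key] = embed_fn(source)
        return self._store[key]
```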