Exposes MCP tools through FastAPI endpoints with OpenAPI documentation, enabling HTTP-based access to memory and vector search capabilities
Provides graph database querying capabilities through Cypher queries for knowledge graph operations and relationship management
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type `@` followed by the MCP server name and your instructions, e.g., "@MCP Aggregator Server search for recent conversations about authentication".
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
MCP Aggregator Server
Unified MCP interface that proxies requests to multiple backend MCP servers.
Architecture
Features
Unified Interface: Single MCP endpoint for all connected servers
Transparent Proxying: Automatically routes requests to appropriate backend servers
Health Monitoring: Built-in health checks for all connected servers
Retry Logic: Automatic retry with exponential backoff for failed requests
Error Handling: Comprehensive error handling and logging
Extensible: Easy to add new backend servers
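The retry behavior described above can be sketched roughly as follows. This is a minimal illustration of exponential backoff, not the aggregator's actual implementation; the function and parameter names are invented for the example:

```python
import time

def call_with_retry(fn, max_retries=3, base_delay=0.5):
    """Call fn(), retrying with exponential backoff on failure.

    Illustrative sketch only; the real aggregator proxies HTTP
    requests to backend servers rather than calling a local function.
    """
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise
            # Wait base_delay, 2*base_delay, 4*base_delay, ... between attempts
            time.sleep(base_delay * (2 ** attempt))
```

With `max_retries=3`, a transient backend failure is retried up to three times before the error is surfaced to the caller.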
Installation
Install dependencies:
Configure environment (edit `.env`):
Running
Start all servers in order:
Terminal 1 - LTM Vector Server (Port 8000):
Terminal 2 - ZepAI FastMCP Server (Port 8002):
Note: This automatically loads the Memory Layer and exposes both the FastAPI and MCP endpoints on port 8002
Terminal 3 - MCP Aggregator (Port 8003):
See
Available Tools
Health & Status
- `health_check()` - Check health of all connected servers
- `get_server_info()` - Get information about connected servers
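As an illustration, an aggregated `health_check()` result might look like the dictionary below. The exact keys and response format are assumptions for the example, not the server's documented schema:

```python
# Hypothetical shape of a health_check() result; keys and values are illustrative.
health = {
    "aggregator": "ok",
    "servers": {
        "memory": {"url": "http://localhost:8002", "status": "ok"},
        "ltm_vector": {"url": "http://localhost:8000", "status": "ok"},
    },
}

# A client could flag the aggregate as degraded if any backend is down:
degraded = any(s["status"] != "ok" for s in health["servers"].values())
```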
Memory Server Tools (Port 8002)
Search
- `memory_search(query, project_id, limit, use_llm_classification)` - Search knowledge graph
- `memory_search_code(query, project_id, limit)` - Search code memories
Ingest
- `memory_ingest_text(text, project_id, metadata)` - Ingest plain text
- `memory_ingest_code(code, language, project_id, metadata)` - Ingest code
- `memory_ingest_json(data, project_id, metadata)` - Ingest JSON data
- `memory_ingest_conversation(conversation, project_id)` - Ingest conversation
Admin
- `memory_get_stats(project_id)` - Get project statistics
- `memory_get_cache_stats()` - Get cache statistics
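Over MCP, these tools are invoked as JSON-RPC 2.0 `tools/call` requests. The payload below shows the general shape for `memory_search`; the argument values are examples only:

```python
import json

# Illustrative MCP tools/call request for memory_search; values are examples only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "memory_search",
        "arguments": {"query": "authentication flow", "project_id": "demo", "limit": 5},
    },
}
payload = json.dumps(request)
```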
LTM Vector Server Tools (Port 8000)
Repository Processing
- `ltm_process_repo(repo_path)` - Process repository for vector indexing
Vector Search
- `ltm_query_vector(query, top_k)` - Query vector database for semantic code search
- `ltm_search_file(filepath)` - Search for specific file in vector database
File Management
- `ltm_add_file(filepath)` - Add file to vector database
- `ltm_delete_by_filepath(filepath)` - Delete file from vector database
- `ltm_delete_by_uuids(uuids)` - Delete vectors by UUIDs
Code Analysis
- `ltm_chunk_file(file_path)` - Chunk file using AST-based chunking
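To illustrate the AST-based chunking idea behind `ltm_chunk_file`, here is a minimal sketch for Python sources using the standard `ast` module. The server's actual chunker may handle more node types, nesting, and other languages:

```python
import ast

def chunk_source(source: str):
    """Split Python source into top-level function/class chunks.

    Illustrative stand-in for AST-based chunking; the real server
    may chunk differently.
    """
    tree = ast.parse(source)
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            # Recover the exact source text covered by this definition
            chunks.append(ast.get_source_segment(source, node))
    return chunks
```

Chunking along syntactic boundaries like this keeps each vector-indexed chunk a coherent unit (a whole function or class) rather than an arbitrary fixed-size window.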
Testing
1. Check Server Health
2. Access OpenAPI Docs
3. Test a Tool via MCP
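The health check in step 1 can also be scripted from Python with the standard library. The `/health` path on the aggregator (port 8003) is an assumption here; check the OpenAPI docs from step 2 for the actual route:

```python
import urllib.request
import urllib.error

def is_healthy(base_url: str, timeout: float = 2.0) -> bool:
    """Return True if the server answers its health endpoint with HTTP 200.

    The /health path is an assumed route; adjust to the actual endpoint.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# Example: is_healthy("http://localhost:8003")
```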
Configuration
Environment Variables
| Variable | Default | Description |
| --- | --- | --- |
|  |  | Aggregator server host |
|  |  | Aggregator server port |
|  |  | Memory server URL |
| `MEMORY_SERVER_TIMEOUT` |  | Memory server timeout (seconds) |
|  |  | Graph server URL |
| `GRAPH_SERVER_TIMEOUT` |  | Graph server timeout (seconds) |
|  |  | Logging level |
|  |  | Max retries for failed requests |
|  |  | Delay between retries (seconds) |
|  |  | Health check interval (seconds) |
Adding New Backend Servers
To add a new backend server (e.g., Graph Server):
Update :
Update :
Add tools in :
Troubleshooting
Connection Refused
Ensure all backend servers are running
Check URLs in `.env` file
Verify ports are not blocked by firewall
Timeout Errors
Increase `MEMORY_SERVER_TIMEOUT` or `GRAPH_SERVER_TIMEOUT` in `.env`
Check backend server performance
Verify network connectivity
Health Check Failing
Run the `health_check()` tool to diagnose
Check backend server logs
Verify backend servers are responding
Development
Project Structure
Adding Logging
Future Enhancements
Add Graph/Vector DB server integration
Implement caching layer
Add request rate limiting
Implement server load balancing
Add metrics/monitoring
Support for server discovery
WebSocket support for real-time updates
License
Same as parent project (Innocody)