Uses LangChain for LLM orchestration and chaining within the RAG pipeline to process queries and generate responses.
Implements a state machine architecture for the RAG pipeline with multi-stage processing including guardrails, Cypher generation, retrieval, and response generation.
Provides a RAG system that converts natural language queries into Cypher queries to retrieve information from a Neo4j graph database, with support for dynamic graph schema configuration and intelligent query routing.
Integrates OpenAI models in a dual-LLM strategy: a fast model for guardrails decision-making and a more accurate model for Cypher query generation and answering.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type `@` followed by the MCP server name and your instructions, e.g., "@SOLVRO MCP - Knowledge Graph RAG System What courses are taught by Professor Kowalski in the Computer Science department?"
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
Intelligent Query Routing - Guardrails system determines query relevance
Natural Language to Cypher - Converts questions to graph queries
Knowledge Graph RAG - Retrieval-Augmented Generation with Neo4j
MCP Protocol - Standard Model Context Protocol interface
Observability - Optional Langfuse tracing integration
Docker Ready - One command deployment
Quick Start
Architecture
System Overview
| Service | Port | Description |
|---------|------|-------------|
| API | 8000 | FastAPI backend for ToPWR app |
| MCP Server | 8005 | MCP server with RAG pipeline |
| Neo4j | 7474/7687 | Knowledge graph database |
RAG Pipeline
The heart of the system is a LangGraph-based RAG pipeline that intelligently processes user queries:
Pipeline Flow:
Guardrails - Fast LLM determines if query is relevant to knowledge base
Cypher Generation - Accurate LLM converts natural language to Cypher query
Retrieval - Execute query against Neo4j knowledge graph
Response - Return structured context data
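The four stages above can be sketched as plain functions over a shared state object. This is a simplified illustration only: the real system builds the flow as a LangGraph state machine, and the function bodies below are stand-in stubs, not the project's actual logic.

```python
from dataclasses import dataclass, field

@dataclass
class PipelineState:
    question: str
    relevant: bool = False
    cypher: str = ""
    records: list = field(default_factory=list)
    answer: str = ""

def guardrails(state: PipelineState) -> PipelineState:
    # Fast LLM call in the real pipeline; a keyword heuristic stands in here.
    state.relevant = any(
        w in state.question.lower() for w in ("course", "professor", "department")
    )
    return state

def generate_cypher(state: PipelineState) -> PipelineState:
    # Accurate LLM call in the real pipeline; a canned query stands in here.
    state.cypher = "MATCH (p:Professor)-[:TEACHES]->(c:Course) RETURN c.name"
    return state

def retrieve(state: PipelineState) -> PipelineState:
    # Would execute state.cypher against Neo4j; stubbed with a fixed record.
    state.records = [{"c.name": "Algorithms"}]
    return state

def respond(state: PipelineState) -> PipelineState:
    state.answer = (
        f"Found {len(state.records)} result(s)." if state.relevant else "Out of scope."
    )
    return state

def run_pipeline(question: str) -> PipelineState:
    state = guardrails(PipelineState(question))
    if state.relevant:  # guardrails gate: skip graph work for irrelevant queries
        state = retrieve(generate_cypher(state))
    return respond(state)
```

The guardrails gate is the key design point: irrelevant queries short-circuit straight to the response stage without paying for Cypher generation or a database round trip.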
Data Pipeline
Separate ETL pipeline for ingesting documents into the knowledge graph:
Pipeline Steps:
Document Loading - PDF and text document ingestion
Text Extraction - OCR and content extraction
LLM Processing - Generate Cypher queries from content
Graph Population - Execute queries to build knowledge graph
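The four ETL steps can be outlined as a chain of small functions. All names here are illustrative assumptions: the real pipeline orchestrates equivalents of these stages with Prefect, and the OCR and LLM steps are replaced by trivial stand-ins.

```python
def load_documents(paths: list[str]) -> list[bytes]:
    # Step 1: read raw PDF/text files from disk.
    return [open(p, "rb").read() for p in paths]

def extract_text(raw: bytes) -> str:
    # Step 2: OCR / content extraction; a plain decode stands in here.
    return raw.decode("utf-8", errors="ignore")

def to_cypher(text: str) -> str:
    # Step 3: an LLM would turn content into graph-building statements;
    # a canned MERGE on a short excerpt stands in here.
    return f"MERGE (d:Document {{excerpt: {text[:20]!r}}})"

def populate_graph(statements: list[str]) -> int:
    # Step 4: would run each statement via the Neo4j driver; counted here.
    return len(statements)

def run_etl(paths: list[str]) -> int:
    texts = [extract_text(raw) for raw in load_documents(paths)]
    return populate_graph([to_cypher(t) for t in texts])
```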
Configuration
Copy .env.example to .env and configure:
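A typical `.env` for this stack might look like the following. The variable names below are assumptions based on the standard OpenAI, Neo4j, and Langfuse conventions; the authoritative list is in `.env.example`.

```shell
# Illustrative values only -- check .env.example for the actual variable names.
OPENAI_API_KEY=sk-...
NEO4J_URI=bolt://localhost:7687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=changeme
# Optional Langfuse tracing
LANGFUSE_PUBLIC_KEY=pk-...
LANGFUSE_SECRET_KEY=sk-...
```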
Commands
Project Structure
API Usage
Chat Endpoint
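A chat request might be issued as below, using only the Python standard library. The route `/chat` and the fields `message` and `session_id` are assumptions for illustration; the actual contract is defined by the FastAPI backend.

```python
import json
import urllib.request

def build_chat_request(
    message: str, session_id: str, base_url: str = "http://localhost:8000"
) -> urllib.request.Request:
    # Hypothetical request shape -- route and field names are assumptions.
    payload = {"message": message, "session_id": session_id}
    return urllib.request.Request(
        f"{base_url}/chat",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending (requires the API service from the Architecture table to be running):
# with urllib.request.urlopen(build_chat_request("What courses exist?", "demo")) as resp:
#     print(json.load(resp))
```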
Response:
Session Management
Tech Stack
| Technology | Purpose |
|------------|---------|
| FastMCP | Model Context Protocol server |
| LangGraph | RAG state machine |
| LangChain | LLM orchestration |
| Neo4j | Knowledge graph database |
| FastAPI | REST API backend |
| Langfuse | Observability (optional) |
| Prefect | Data pipeline orchestration |
| Docker | Containerization |
License
MIT © Solvro