Uses LangChain for LLM orchestration and chaining within the RAG pipeline to process queries and generate responses.
Implements a state machine architecture for the RAG pipeline with multi-stage processing including guardrails, Cypher generation, retrieval, and response generation.
Provides a RAG system that converts natural language queries into Cypher queries to retrieve information from a Neo4j graph database, with support for dynamic graph schema configuration and intelligent query routing.
Integrates OpenAI models in a dual-LLM strategy: a fast model for guardrails decision-making and a more accurate model for Cypher query generation and answering.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type
@ followed by the MCP server name and your instructions, e.g., "@SOLVRO MCP - Knowledge Graph RAG System What courses are taught by Professor Kowalski in the Computer Science department?"
That's it! The server will respond to your query, and you can continue using it as needed.
PWrChat UI - React chatbot (session sidebar, dark/light mode toggle, persistent theme)
Intelligent Query Routing - Guardrails system determines query relevance
Natural Language to Cypher - Converts questions to graph queries
Knowledge Graph RAG - Retrieval-Augmented Generation with Neo4j
MCP Protocol - Standard Model Context Protocol interface
Observability - Optional Langfuse tracing integration
Docker Ready - One command deployment
Quick Start
# Setup
just setup
cp .env.example .env # Edit with your API keys
# Run with Docker
just up # Start Neo4j + MCP Server + API
just logs # View logs
just down # Stop services

Architecture
System Overview
┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ Frontend │────▶│ ToPWR API │────▶│ MCP Server │────▶│ Neo4j │
│ :80 │ │ :8000 │ │ :8005 │ │ :7687 │
└─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘
React + Nginx         FastAPI           FastMCP        Knowledge Graph

| Service | Port | Description |
|---|---|---|
| Frontend | 80 | PWrChat — React chatbot UI served by Nginx |
| ToPWR API | 8000 | FastAPI backend for the ToPWR app |
| MCP Server | 8005 | MCP server with RAG pipeline |
| Neo4j | 7474/7687 | Knowledge graph database |
RAG Pipeline
The heart of the system is a LangGraph-based RAG pipeline that intelligently processes user queries:
Pipeline Flow:
Guardrails - Fast LLM determines if query is relevant to knowledge base
Cypher Generation - Accurate LLM converts natural language to Cypher query
Retrieval - Execute query against Neo4j knowledge graph
Response - Return structured context data
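The four stages above can be sketched in plain Python, independent of LangGraph. Every function here is an illustrative stand-in — the keyword guardrail, the canned Cypher string, and the fake records are assumptions, not the project's actual LLM or Neo4j calls:

```python
def guardrails(question: str) -> bool:
    """Fast-model stand-in: decide if the question targets the knowledge base."""
    keywords = ("course", "professor", "department", "dean")
    return any(kw in question.lower() for kw in keywords)

def generate_cypher(question: str) -> str:
    """Accurate-model stand-in: the real system prompts an LLM with the graph schema."""
    return "MATCH (c:Course)<-[:TEACHES]-(p:Professor) RETURN c.name, p.name"

def retrieve(cypher: str) -> list[dict]:
    """Stand-in for executing the Cypher query against Neo4j."""
    return [{"c.name": "Databases", "p.name": "Kowalski"}]

def run_pipeline(question: str) -> dict:
    # Guardrails gate: irrelevant questions short-circuit to a refusal.
    if not guardrails(question):
        return {"status": "rejected", "context": []}
    records = retrieve(generate_cypher(question))
    return {"status": "ok", "context": records}
```

In the real server these stages are nodes in a LangGraph state machine, with the guardrails result selecting the edge that either continues to Cypher generation or short-circuits to a refusal.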
Data Pipeline
Separate ETL pipeline for ingesting documents into the knowledge graph:
Pipeline Steps:
Document Loading - PDF and text document ingestion
Text Extraction - OCR and content extraction
LLM Processing - Generate Cypher queries from content
Graph Population - Execute queries to build knowledge graph
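The steps above can be chained as a rough illustration; all helpers here are hypothetical stubs, not the project's Prefect tasks, and the generated Cypher is a placeholder:

```python
def load_documents(paths: list[str]) -> list[bytes]:
    # Stand-in for PDF/text ingestion.
    return [p.encode() for p in paths]

def extract_text(doc: bytes) -> str:
    # Stand-in for OCR and content extraction.
    return doc.decode()

def to_cypher(text: str) -> str:
    # Stand-in for the LLM that emits graph-building Cypher from content.
    return f"MERGE (:Document {{source: '{text}'}})"

def populate_graph(statements: list[str]) -> list[str]:
    # Stand-in for executing statements against Neo4j; here we only collect them.
    return list(statements)

def run_etl(paths: list[str]) -> list[str]:
    texts = [extract_text(d) for d in load_documents(paths)]
    return populate_graph([to_cypher(t) for t in texts])
```

In the actual pipeline each step would be a Prefect task so that runs are observable and retryable.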
Configuration
Copy .env.example to .env and configure:
########################################
# LLM / AI Provider Keys
########################################
# OpenAI API key (optional)
OPENAI_API_KEY=
# DeepSeek API key (optional)
DEEPSEEK_API_KEY=
# Google Generative AI / PaLM API key (optional)
GOOGLE_API_KEY=
# CLARIN LLM API key (optional, used by API & client)
CLARIN_API_KEY=
########################################
# Langfuse Observability
########################################
LANGFUSE_SECRET_KEY=
LANGFUSE_PUBLIC_KEY=
LANGFUSE_HOST=https://cloud.langfuse.com
########################################
# Neo4j Database
########################################
# URI used by data pipeline, MCP server and graph config
NEO4J_URI=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=
########################################
# MCP Server Networking
########################################
# Bind host for the MCP server process
MCP_BIND_HOST=0.0.0.0
# Host/port used by API and MCP client to reach the MCP server
MCP_HOST=127.0.0.1
MCP_PORT=8005

Commands
# Docker Stack
just up # Start all services (including frontend at :80)
just down # Stop services
just logs # View logs
just ps # Service status
just nuke # Remove everything
# Local Development
just mcp-server # Run MCP server
just api # Run FastAPI
just kg "query" # Query knowledge graph
# Frontend
just frontend-install # Install npm dependencies
just frontend-dev # Start dev server at :3000 (requires running API)
just frontend-build # Build for production
# Quality
just lint # Format & lint
just test # Run tests
just ci # Full CI pipeline
# Data Pipeline
just prefect-up # Start Prefect
just pipeline # Run ETL

Project Structure
src/
├── mcp_server/ # MCP server + RAG pipeline
├── mcp_client/ # CLI client
├── topwr_api/ # FastAPI backend
├── config/ # Configuration
└── data_pipeline/ # Prefect ETL flows
frontend/
├── src/
│ ├── api/ # API client
│ ├── hooks/ # useUserId, useSessions, useChat, useTheme
│ ├── components/ # Sidebar, Chat, shared UI
│ └── types/ # TypeScript mirrors of backend models
└── package.json # React + Vite + TailwindCSS
docker/
├── compose.stack.yml # Main stack (Neo4j + MCP + API + Frontend)
├── compose.prefect.yml # Data pipeline
├── Dockerfile.mcp # MCP server image
├── Dockerfile.api # FastAPI image
├── Dockerfile.frontend # React + Nginx image
└── nginx.conf # SPA fallback + API proxy

API Usage
Chat Endpoint
curl -X POST http://localhost:8000/api/chat \
-H "Content-Type: application/json" \
-d '{"user_id": "user1", "message": "Czym jest nagroda dziekana?"}'

Response:
{
"session_id": "abc123",
"message": "Nagroda dziekana to wyróżnienie przyznawane...",
"metadata": {
"source": "mcp_knowledge_graph",
"trace_id": "xyz789"
}
}

Session Management
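These endpoints can also be driven from Python. A minimal sketch using only the standard library — the base URL and field names mirror the curl examples in this section, and anything beyond them is an assumption:

```python
import json
from urllib import request

BASE = "http://localhost:8000"  # assumed local deployment, as in the curl examples

def chat_request(user_id: str, message: str) -> request.Request:
    """Build the POST for /api/chat; send with urllib.request.urlopen(req)."""
    body = json.dumps({"user_id": user_id, "message": message}).encode()
    return request.Request(
        f"{BASE}/api/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def history_url(session_id: str) -> str:
    return f"{BASE}/api/sessions/{session_id}/history"

def sessions_url(user_id: str) -> str:
    return f"{BASE}/api/users/{user_id}/sessions"
```

Sending `chat_request(...)` through `urllib.request.urlopen` should return the JSON body shown above, including the `session_id` to use with `history_url`.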
# Get session history
curl http://localhost:8000/api/sessions/{session_id}/history
# List user sessions
curl http://localhost:8000/api/users/{user_id}/sessions

Tech Stack
| Technology | Purpose |
|---|---|
| React 18 + TypeScript | Frontend chat UI |
| Vite + TailwindCSS v3 | Build tooling & styling |
| Nginx | Frontend serving + API proxy |
| FastMCP | Model Context Protocol server |
| LangGraph | RAG state machine |
| LangChain | LLM orchestration |
| Neo4j | Knowledge graph database |
| FastAPI | REST API backend |
| Langfuse | Observability (optional) |
| Prefect | Data pipeline orchestration |
| Docker | Containerization |
License
MIT © Solvro