MCP-RAG is a low-latency retrieval-augmented generation service providing intelligent knowledge management through a modular MCP protocol architecture.
Core Capabilities:
Knowledge Management
Add text content manually (facts, definitions, notes, conversation summaries)
Process 25+ document formats (PDF, DOCX, PPTX, XLSX, TXT, HTML, CSV, JSON, XML, ODT, ODP, ODS, RTF, images with OCR, emails) using advanced semantic chunking with structure preservation, automatic denoising, and metadata extraction
Get comprehensive statistics on document counts, file type distribution, processing methods, and structural complexity
Intelligent Retrieval
Query the knowledge base with semantic search (<100ms latency)
Use Raw mode for direct retrieval or Summary mode for LLM-powered intelligent summarization
Apply filters for targeted search by file type, document structure (tables, titles), or processing method
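As an illustration of the two retrieval modes and the filter options, a query over the HTTP API might look like the sketch below. The endpoint path and field names are assumptions, not the service's documented schema; see the Swagger UI at http://localhost:8000/docs for the real API.

```python
import requests

BASE = "http://localhost:8000"

# Hypothetical endpoint and field names; the actual schema is documented
# in the service's Swagger UI at http://localhost:8000/docs.
raw = requests.post(f"{BASE}/query", json={
    "query": "What is semantic chunking?",
    "mode": "raw",                    # direct retrieval, no LLM call
    "top_k": 5,
    "filters": {"file_type": "pdf"},  # targeted search by file type
})
print(raw.json())

summary = requests.post(f"{BASE}/query", json={
    "query": "What is semantic chunking?",
    "mode": "summary",                # retrieval + LLM-powered summarization
})
print(summary.json())
```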
Performance Optimization
Monitor and optimize vector database performance with health diagnostics
Reindex with optimized profiles (small/medium/large/auto)
Manage embedding cache with performance monitoring (hit rates, memory usage) and cache clearing
Technical Features
Multi-provider support (Doubao, Ollama for LLMs; Doubao API and local sentence-transformers for embeddings)
Web interface for configuration management, document management, and API documentation (Swagger UI)
HTTP API and MCP protocol support
Supports fully private, local operation: with Ollama for generation and local sentence-transformers for embeddings, documents can be processed and the knowledge base queried without sending data to external services.
MCP-RAG: Low-Latency RAG Service
A low-latency RAG (Retrieval-Augmented Generation) service architecture built on the MCP (Model Context Protocol).
Features
Very low latency (<100ms) local knowledge retrieval
Dual-mode support: Raw mode (direct retrieval) and Summary mode (retrieval + summarization)
LLM summarization: supports providers such as Doubao and Ollama for intelligent summaries
Modular architecture: the MCP Server acts as a unified knowledge interface layer
Async optimization: asynchronous calls and a model warm-up mechanism (sketched after this list)
Extensible design: reserved interfaces for reranker and cache modules
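A minimal sketch of what the model warm-up could look like, assuming FastAPI's lifespan hook and the optional local sentence-transformers embedder; this is illustrative, not the project's actual code:

```python
from contextlib import asynccontextmanager

from fastapi import FastAPI
from sentence_transformers import SentenceTransformer

embedder: SentenceTransformer | None = None

@asynccontextmanager
async def lifespan(app: FastAPI):
    global embedder
    # Load the embedding model once at startup and run a dummy encode,
    # so the first real query does not pay the cold-start cost.
    embedder = SentenceTransformer("moka-ai/m3e-small")
    embedder.encode(["warmup"])
    yield

app = FastAPI(lifespan=lifespan)
```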
Tech Stack
Backend framework: FastAPI
Vector database: ChromaDB (deployed locally)
Embedding models: Doubao embedding API (default); optional local models (m3e-small / e5-small via sentence-transformers)
LLM models: Doubao API, Ollama (deployed locally)
Protocol: MCP (Model Context Protocol)
Package management: uv (a modern Python package manager)
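To make the stack concrete, here is a minimal sketch of the Raw-mode retrieval path, assuming a local ChromaDB collection and the optional m3e-small embedder; the collection name and storage path are illustrative:

```python
import chromadb
from sentence_transformers import SentenceTransformer

# Illustrative names; the actual collection and path are project-specific.
client = chromadb.PersistentClient(path="data/chroma")
collection = client.get_or_create_collection("knowledge")
embedder = SentenceTransformer("moka-ai/m3e-small")

def raw_query(text: str, top_k: int = 5) -> list[str]:
    """Raw mode: embed the query and return the nearest chunks directly."""
    vec = embedder.encode([text]).tolist()
    hits = collection.query(query_embeddings=vec, n_results=top_k)
    return hits["documents"][0]
```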
Quick Start
1. Requirements
Python >= 3.13
The uv package manager
2. Install Dependencies
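Assuming a standard uv project layout (a pyproject.toml and lockfile in the repository root), dependencies can be installed with `uv sync`.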
3. Start the Service
Note: the first startup fails with an error until the configuration file has been set up (a known issue, not yet fixed). Once the config file is in place, the service starts normally.
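The exact launch command is not shown here; with uv, the service would typically start with something like `uv run main.py` (the entry-point name is an assumption, check the repository for the actual command).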
Web Interface
Configuration page: http://localhost:8000/config-page
Document management page: http://localhost:8000/documents-page
HTTP API documentation (Swagger UI): http://localhost:8000/docs
4. Configuration Management
MCP-RAG now uses a JSON file for persistent configuration management.
Settings are stored in data\config.json and can be modified and saved through the web interface.
Default configuration example:
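The file's actual schema is not reproduced here; given the providers listed above, a hypothetical shape might look like this:

```json
{
  "_note": "Hypothetical example, not the actual schema shipped with MCP-RAG",
  "llm": {
    "provider": "doubao",
    "api_key": "your-api-key",
    "model": "your-doubao-model"
  },
  "embedding": {
    "provider": "doubao",
    "local_model": "m3e-small"
  },
  "server": {
    "host": "0.0.0.0",
    "port": 8000
  }
}
```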
MCP Server Configuration
The 小智go server can interact with MCP-RAG over the MCP protocol. Below is an example configuration:
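The exact format expected by 小智go is not shown here; many MCP clients accept a configuration shaped roughly like the following hypothetical example (adapt it to the client's documented schema and to how this MCP-RAG instance exposes its transport):

```json
{
  "_note": "Hypothetical shape; consult the 小智go documentation for the real format",
  "mcpServers": {
    "mcp-rag": {
      "url": "http://localhost:8000/sse"
    }
  }
}
```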
5. Using the MCP Tools
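As one way to exercise the tools programmatically, the sketch below uses the official MCP Python SDK over an SSE transport; the endpoint URL, tool name, and arguments are assumptions, so use list_tools() to discover what the server actually exposes:

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

async def main() -> None:
    # Hypothetical SSE endpoint; depends on how this MCP-RAG instance is exposed.
    async with sse_client("http://localhost:8000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover the tools the server actually exposes.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            # Tool name and arguments below are illustrative placeholders.
            result = await session.call_tool(
                "query_knowledge_base",
                {"query": "What is MCP-RAG?", "mode": "raw"},
            )
            print(result.content)

asyncio.run(main())
```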
License
MIT License
Contributing
Issues and pull requests are welcome!