1. Click "Install Server".
2. Wait a few minutes for the server to deploy. Once ready, it shows a "Started" state.
3. In the chat, type `@` followed by the MCP server name and your instructions, e.g., "@Personal Research Assistant MCP What are the main challenges in RAG systems according to my research?"

That's it! The server will respond to your query, and you can continue using it as needed. A step-by-step guide with screenshots follows below.
# Personal Research Assistant MCP
A production-ready MCP (Model Context Protocol) server that enables semantic search across your personal research library. Built for AI Engineers who need fast, accurate document retrieval integrated with Claude Desktop and other AI tools.
## Problem Statement
Researchers and professionals accumulate dozens of papers and documents but struggle to:

- Find relevant information across multiple documents
- Remember which paper contained a specific insight
- Connect related concepts across different sources

Many spend 2+ hours a day just searching for information. Traditional keyword search misses semantic connections, and reading everything is impractical.
## Solution
An MCP server that:
- Indexes documents into a vector database using semantic embeddings
- Enables Claude (or any MCP client) to query your research library conversationally
- Provides sub-500ms response times with 85%+ retrieval accuracy
- Includes a Streamlit dashboard for management and metrics
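The indexing step splits each document into overlapping chunks before embedding them. A minimal, dependency-free sketch of that chunking logic (the function name and defaults are assumptions, not the project's actual code):

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into fixed-size character chunks with overlap, so that
    sentences near a chunk boundary appear in two adjacent chunks."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks
```

With a chunk size of 1000 and an overlap of 200, each chunk shares its first 200 characters with the end of the previous one, which keeps boundary-spanning sentences retrievable.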
## Architecture

```
Documents (PDF/DOCX/HTML/MD)
            ↓
Document Processor → Text Chunker → Embeddings
            ↓
   ChromaDB Vector Store
            ↓
   ├── MCP Server (FastMCP) ← Claude Desktop
   └── Streamlit UI ← Monitoring/Testing
```

## Features
- **Semantic Search**: Natural language queries across your entire library
- **Multi-Format Support**: PDF, DOCX, HTML, Markdown, TXT
- **Fast Retrieval**: <500ms query latency on 1000+ chunks
- **MCP Integration**: Works with Claude Desktop, VS Code, and any MCP client
- **Metadata Extraction**: Automatically extracts titles, authors, and keywords
- **Query Logging**: Track usage and performance metrics
- **Streamlit Dashboard**: Upload, search, and visualize metrics
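Conceptually, semantic search embeds the query and ranks stored chunk embeddings by cosine similarity. A toy, dependency-free sketch of that ranking step (the real project uses ChromaDB and learned embeddings; all names here are illustrative):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors; 0.0 if either is all zeros."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query_vec: list[float], index: list[tuple[str, list[float]]], k: int = 3):
    """index: list of (chunk_id, embedding). Returns the k most similar
    chunks as (chunk_id, score), best first."""
    scored = [(cid, cosine(query_vec, vec)) for cid, vec in index]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]
```

A vector store like ChromaDB performs essentially this ranking, but with approximate-nearest-neighbor indexing so it stays fast at scale.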
## Performance Metrics

| Metric | Target | Actual |
| --- | --- | --- |
| Retrieval Accuracy | 85% | See METRICS.md |
| Query Latency | <500ms | See METRICS.md |
| Scale | 10k+ chunks | 1782+ chunks |
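Retrieval accuracy here can be read as a hit@k rate: the fraction of benchmark queries whose expected source document appears in the top-k results. A hedged sketch of that computation (the data shapes are assumptions; the project's benchmark in tests/benchmark_performance.py may differ):

```python
def hit_at_k(results_by_query: dict[str, list[str]],
             expected_doc: dict[str, str], k: int = 5) -> float:
    """results_by_query: query -> ranked list of retrieved doc ids.
    expected_doc: query -> the doc id that should be retrieved.
    Returns the fraction of queries whose expected doc is in the top k."""
    if not expected_doc:
        return 0.0
    hits = sum(
        1 for q, doc in expected_doc.items()
        if doc in results_by_query.get(q, [])[:k]
    )
    return hits / len(expected_doc)
```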
## Installation

### Prerequisites

- Python 3.11+
- 2GB RAM minimum
- Git

### Setup
```shell
# Clone the repository
git clone https://github.com/yourusername/research-assistant-mcp.git
cd research-assistant-mcp

# Create a virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Install local embeddings
pip install sentence-transformers

# Configure environment
cp .env.example .env
# Edit .env - add OPENAI_API_KEY if using OpenAI embeddings
```

### Download Sample Data
```shell
# Download 25 AI/ML papers from arXiv
python scripts/download_sample_papers.py --count 25
```

### Index Documents
```shell
# Index the sample papers
python scripts/index_docs.py --folder ./sample_papers

# Or index your own documents
python scripts/index_docs.py --folder /path/to/your/papers --recursive
```

## Usage
### Start the MCP Server

```shell
python mcp_server/server.py
```

### Configure Claude Desktop
Add the server to claude_desktop_config.json:

- Mac: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
```json
{
  "mcpServers": {
    "research-assistant": {
      "command": "python",
      "args": ["/full/path/to/research-assistant-mcp/mcp_server/server.py"],
      "env": {}
    }
  }
}
```

Then restart Claude Desktop.
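If Claude Desktop does not pick up the server after a restart, a quick sanity check is to confirm the config file parses as JSON and actually declares the server. A small helper sketch (not part of the project; the server name matches the config above):

```python
import json

def check_config(path: str, server_name: str = "research-assistant") -> bool:
    """Return True if the Claude Desktop config parses as JSON and
    declares the given MCP server under "mcpServers"."""
    with open(path) as f:
        config = json.load(f)  # raises ValueError on malformed JSON
    return server_name in config.get("mcpServers", {})
```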
### Launch the Streamlit UI

```shell
streamlit run ui/app.py
```

The dashboard opens at http://localhost:8501.
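On the server side, MCP tools are exposed by registering plain Python functions with the server. The stand-in below mimics FastMCP's decorator-registration pattern without the SDK dependency, with a stubbed tool body; it is illustrative only, not the project's actual server code:

```python
class ToolRegistry:
    """Minimal stand-in for FastMCP's @mcp.tool() registration pattern."""
    def __init__(self, name: str):
        self.name = name
        self.tools = {}

    def tool(self):
        def decorator(fn):
            # Register the function under its own name, as FastMCP does.
            self.tools[fn.__name__] = fn
            return fn
        return decorator

mcp = ToolRegistry("research-assistant")

@mcp.tool()
def search_documents(query: str, top_k: int = 5) -> list[dict]:
    """Semantic search across the library (stubbed result here)."""
    return [{"source": "example.pdf", "score": 0.9, "text": "..."}][:top_k]
```

In the real server the decorated functions query the ChromaDB store; the MCP client (e.g., Claude Desktop) discovers and calls them by name.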
## MCP Tools
### search_documents

Semantic search across your library.

- Query: "What are the challenges in RAG systems?"
- Returns: Top-k results with sources, scores, and metadata

### get_document_summary

Get a quick overview of a document.

- Input: Document path or title
- Returns: Title, author, keywords, preview

### find_related_papers

Find documents similar to a topic.

- Query: "prompt engineering techniques"
- Returns: Related papers with relevance scores

## Project Structure
```
research-assistant-mcp/
├── mcp_server/        # MCP server implementation
│   └── server.py
├── rag_pipeline/      # RAG components
│   ├── config.py
│   ├── document_processor.py
│   ├── chunker.py
│   ├── vector_store.py
│   ├── retriever.py
│   └── metadata_extractor.py
├── ui/                # Streamlit dashboard
│   ├── app.py
│   └── pages/
├── scripts/           # CLI utilities
│   ├── index_docs.py
│   └── download_sample_papers.py
├── tests/             # Testing & benchmarks
│   ├── sample_queries.json
│   └── benchmark_performance.py
├── data/              # Data storage
│   ├── chroma_db/
│   └── query_logs/
└── docs/              # Documentation
    └── METRICS.md
```

## Testing
```shell
# Run performance benchmarks
python tests/benchmark_performance.py
# Output: accuracy, latency, and scale metrics
```

## Docker Deployment
```shell
# Build and run
docker-compose up -d

# Access the UI at http://localhost:8501
# The MCP server runs on localhost:8000
```

## Example Queries
- Cross-document synthesis: "Compare different fine-tuning approaches for LLMs"
- Concept exploration: "How does RLHF improve model alignment?"
- Technical details: "Explain transformer attention mechanisms"
- Literature review: "What are recent developments in RAG systems?"
## Customization

### Change Embedding Model

Edit .env:

```
# OpenAI (paid, best quality)
EMBEDDING_MODEL=text-embedding-3-small

# Or keep the local (free) model - configured by default
```

### Adjust Chunk Size
Edit .env:

```
CHUNK_SIZE=1000     # Characters per chunk
CHUNK_OVERLAP=200   # Overlap between chunks
```

### Add Document Types

Edit rag_pipeline/document_processor.py to add new file-type handlers.
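One common way to structure such handlers is an extension-to-loader mapping, so supporting a new file type means adding one entry. A hedged sketch of that pattern (document_processor.py's actual structure and names may differ):

```python
def load_txt(path: str) -> str:
    with open(path, encoding="utf-8") as f:
        return f.read()

# Map file extensions to loader functions; register new types here.
HANDLERS = {
    ".txt": load_txt,
    ".md": load_txt,  # Markdown is read as plain text in this sketch
}

def extract_text(path: str) -> str:
    """Dispatch to the loader registered for the file's extension."""
    ext = path[path.rfind("."):].lower()
    try:
        return HANDLERS[ext](path)
    except KeyError:
        raise ValueError(f"unsupported file type: {ext}") from None
```

A PDF handler, for example, would wrap a PDF text extractor and be registered as one more `HANDLERS` entry.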
## Troubleshooting

- **ChromaDB errors**: Delete data/chroma_db and re-index
- **Import errors**: Verify that `pip install -r requirements.txt` completed
- **Blank UI**: Check the browser console; try Chrome or Firefox
- **Slow queries**: Reduce TOP_K_RESULTS in .env
## Future Enhancements

- Auto-watch a folder for new documents
- Cross-encoder reranking for better accuracy
- Multi-modal support (images, diagrams)
- Citation network graph
- Export to Notion/Obsidian
- Web interface (FastAPI + React)
## Demo Video

[Link to 2-minute demo video - coming soon]

## Contributing

Contributions welcome! Please open issues or PRs.

## License

MIT License - see LICENSE.