# Personal Research Assistant MCP

[Python 3.11+](https://www.python.org/downloads/) · [Model Context Protocol](https://modelcontextprotocol.io) · [MIT License](https://opensource.org/licenses/MIT)

A production-ready MCP (Model Context Protocol) server that enables semantic search across your personal research library. Built for AI Engineers who need fast, accurate document retrieval integrated with Claude Desktop and other AI tools.
## Problem Statement
Researchers and professionals accumulate dozens of papers and documents but struggle to:
- Find relevant information across multiple documents
- Remember which paper contained specific insights
- Connect related concepts across different sources

The result is often 2+ hours a day lost to manual searching. Traditional keyword search misses semantic connections, and reading everything is impractical.
## Solution
An MCP server that:
- Indexes documents into a vector database using semantic embeddings
- Enables Claude (or any MCP client) to query your research library conversationally
- Targets sub-500ms query latency and 85%+ retrieval accuracy (see [METRICS.md](docs/METRICS.md))
- Includes a Streamlit dashboard for management and metrics
## Architecture

```
Documents (PDF/DOCX/HTML/MD)
          ↓
Document Processor → Text Chunker → Embeddings
          ↓
   ChromaDB Vector Store
          ↓
   ├── MCP Server (FastMCP) → Claude Desktop
   └── Streamlit UI → Monitoring/Testing
```
## Features
- **Semantic Search**: Natural language queries across your entire library
- **Multi-Format Support**: PDF, DOCX, HTML, Markdown, TXT
- **Fast Retrieval**: <500ms query latency on 1000+ chunks
- **MCP Integration**: Works with Claude Desktop, VS Code, and any MCP client
- **Metadata Extraction**: Automatically extracts titles, authors, keywords
- **Query Logging**: Track usage and performance metrics
- **Streamlit Dashboard**: Upload, search, and visualize metrics
## Performance Metrics
| Metric | Target | Actual |
|--------|--------|--------|
| Retrieval Accuracy | 85% | See [METRICS.md](docs/METRICS.md) |
| Query Latency | <500ms | See [METRICS.md](docs/METRICS.md) |
| Scale | 10k+ chunks | 1782+ chunks |
## Installation
### Prerequisites
- Python 3.11+
- 2GB RAM minimum
- Git
### Setup
```bash
# Clone repository
git clone https://github.com/yourusername/research-assistant-mcp.git
cd research-assistant-mcp
# Create virtual environment
python -m venv venv
source venv/bin/activate # Windows: venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt
# Install the default local embedding backend
pip install sentence-transformers
# Configure environment
cp .env.example .env
# Edit .env - add OPENAI_API_KEY if using OpenAI embeddings
```
### Download Sample Data
```bash
# Download 25 AI/ML papers from arXiv
python scripts/download_sample_papers.py --count 25
```
### Index Documents
```bash
# Index sample papers
python scripts/index_docs.py --folder ./sample_papers
# Or index your own documents
python scripts/index_docs.py --folder /path/to/your/papers --recursive
```
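
Under the hood, indexing follows the architecture diagram: load, chunk, embed, store. Here is a rough sketch of that flow in ChromaDB terms; the collection name (`research`), the chunking helper, and the embedding model are illustrative assumptions, not this repo's exact API:

```python
# Illustrative indexing flow. The collection name ("research"), the
# chunking helper, and the model choice are assumptions, not the
# repo's exact API.
from pathlib import Path

import chromadb
from chromadb.utils import embedding_functions

# Local embeddings via sentence-transformers (the default setup)
embed_fn = embedding_functions.SentenceTransformerEmbeddingFunction(
    model_name="all-MiniLM-L6-v2"
)
client = chromadb.PersistentClient(path="data/chroma_db")
collection = client.get_or_create_collection("research", embedding_function=embed_fn)

def chunk_text(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    """Overlapping character windows, mirroring CHUNK_SIZE/CHUNK_OVERLAP."""
    step = size - overlap
    return [text[i : i + size] for i in range(0, len(text), step)]

# The real document processor also handles PDF/DOCX/HTML; plain text shown here.
for path in Path("./sample_papers").glob("*.txt"):
    chunks = chunk_text(path.read_text(errors="ignore"))
    if chunks:
        collection.add(
            documents=chunks,
            ids=[f"{path.stem}-{i}" for i in range(len(chunks))],
            metadatas=[{"source": str(path)} for _ in chunks],
        )
```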
## Usage
### Start MCP Server
```bash
python mcp_server/server.py
```
### Configure Claude Desktop
Add to `claude_desktop_config.json`:
**Mac**: `~/Library/Application Support/Claude/claude_desktop_config.json`
**Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
```json
{
  "mcpServers": {
    "research-assistant": {
      "command": "python",
      "args": ["/full/path/to/research-assistant-mcp/mcp_server/server.py"],
      "env": {}
    }
  }
}
```
Restart Claude Desktop.
### Launch Streamlit UI
```bash
streamlit run ui/app.py
```
Opens at `http://localhost:8501`
## MCP Tools
### `search_documents`
Semantic search across your library.
```
Query: "What are the challenges in RAG systems?"
Returns: Top-k results with sources, scores, and metadata
```
### `get_document_summary`
Get quick overview of a document.
```
Input: Document path or title
Returns: Title, author, keywords, preview
```
### `find_related_papers`
Find documents similar to a topic.
```
Query: "prompt engineering techniques"
Returns: Related papers with relevance scores
```
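
For orientation, here is a minimal sketch of how a tool like `search_documents` can be registered with FastMCP. The result shape and collection layout are assumptions, not this repo's exact code:

```python
# Minimal FastMCP tool sketch; the result shape and collection layout
# are assumptions, not this repo's exact implementation.
import chromadb
from fastmcp import FastMCP

mcp = FastMCP("research-assistant")
collection = chromadb.PersistentClient(path="data/chroma_db").get_or_create_collection("research")

@mcp.tool()
def search_documents(query: str, top_k: int = 5) -> dict:
    """Semantic search across the indexed library."""
    res = collection.query(query_texts=[query], n_results=top_k)
    return {
        "documents": res["documents"][0],
        "sources": [m.get("source") for m in res["metadatas"][0]],
        "distances": res["distances"][0],  # lower = more similar
    }

if __name__ == "__main__":
    mcp.run()  # stdio transport, which is what Claude Desktop expects
```

`get_document_summary` and `find_related_papers` follow the same pattern: a decorated function whose signature and docstring become the tool description the MCP client sees.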
## Project Structure

```
research-assistant-mcp/
├── mcp_server/          # MCP server implementation
│   └── server.py
├── rag_pipeline/        # RAG components
│   ├── config.py
│   ├── document_processor.py
│   ├── chunker.py
│   ├── vector_store.py
│   ├── retriever.py
│   └── metadata_extractor.py
├── ui/                  # Streamlit dashboard
│   ├── app.py
│   └── pages/
├── scripts/             # CLI utilities
│   ├── index_docs.py
│   └── download_sample_papers.py
├── tests/               # Testing & benchmarks
│   ├── sample_queries.json
│   └── benchmark_performance.py
├── data/                # Data storage
│   ├── chroma_db/
│   └── query_logs/
└── docs/                # Documentation
    └── METRICS.md
```
## Testing
```bash
# Run performance benchmarks
python tests/benchmark_performance.py
# Output: Accuracy, latency, scale metrics
```
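
For a quick latency spot-check without the full benchmark, you can time raw vector queries directly (this assumes the `research` collection name used in the sketches above):

```python
# Quick-and-dirty latency check against the persisted collection.
import time

import chromadb

collection = chromadb.PersistentClient(path="data/chroma_db").get_or_create_collection("research")

for q in ["challenges in RAG systems", "transformer attention", "RLHF alignment"]:
    start = time.perf_counter()
    collection.query(query_texts=[q], n_results=5)
    print(f"{q!r}: {(time.perf_counter() - start) * 1000:.0f} ms")
```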
## Docker Deployment
```bash
# Build and run
docker-compose up -d
# Access UI at http://localhost:8501
# MCP server runs on localhost:8000
```
## Example Queries
1. **Cross-document synthesis**
"Compare different fine-tuning approaches for LLMs"
2. **Concept exploration**
"How does RLHF improve model alignment?"
3. **Technical details**
"Explain transformer attention mechanisms"
4. **Literature review**
"What are recent developments in RAG systems?"
## Customization
### Change Embedding Model
Edit `.env`:
```bash
# OpenAI embeddings (paid, generally higher quality)
EMBEDDING_MODEL=text-embedding-3-small

# Local sentence-transformers embeddings are the free default; no change needed
```
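
Internally, the switch boils down to picking an embedding function. With ChromaDB's built-in helpers, the two options look roughly like this; the exact wiring in `rag_pipeline/config.py` may differ:

```python
# Roughly how the embedding switch maps onto ChromaDB's helpers;
# the actual wiring in rag_pipeline/config.py may differ.
import os

from chromadb.utils import embedding_functions

if os.getenv("EMBEDDING_MODEL", "").startswith("text-embedding"):
    # OpenAI embeddings; requires OPENAI_API_KEY in .env
    embed_fn = embedding_functions.OpenAIEmbeddingFunction(
        api_key=os.environ["OPENAI_API_KEY"],
        model_name=os.environ["EMBEDDING_MODEL"],
    )
else:
    # Free local default via sentence-transformers
    embed_fn = embedding_functions.SentenceTransformerEmbeddingFunction(
        model_name="all-MiniLM-L6-v2"
    )
```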
### Adjust Chunk Size
Edit `.env`:
```bash
CHUNK_SIZE=1000 # Characters per chunk
CHUNK_OVERLAP=200 # Overlap between chunks
```
### Add Document Types
Edit `rag_pipeline/document_processor.py` to add new file type handlers.
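
A new handler typically means mapping a file extension to a loader function. A hypothetical sketch; the module's real extension point may be structured differently:

```python
# Hypothetical handler registry; the real extension point in
# rag_pipeline/document_processor.py may be structured differently.
from pathlib import Path

def load_csv(path: Path) -> str:
    """New handler: flatten a CSV file into searchable plain text."""
    return path.read_text(errors="ignore").replace(",", " ")

HANDLERS = {
    ".txt": lambda p: p.read_text(errors="ignore"),
    ".csv": load_csv,  # register the new file type here
}

def load_document(path: str) -> str:
    loader = HANDLERS.get(Path(path).suffix.lower())
    if loader is None:
        raise ValueError(f"Unsupported file type: {path}")
    return loader(Path(path))
```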
## Troubleshooting

- **ChromaDB errors**: Delete `data/chroma_db` and re-index
- **Import errors**: Verify `pip install -r requirements.txt` completed
- **UI blank**: Check the browser console; try Chrome or Firefox
- **Slow queries**: Reduce `TOP_K_RESULTS` in `.env`
## Future Enhancements
- [ ] Auto-watch folder for new documents
- [ ] Cross-encoder reranking for better accuracy
- [ ] Multi-modal support (images, diagrams)
- [ ] Citation network graph
- [ ] Export to Notion/Obsidian
- [ ] Web interface (FastAPI + React)
## Demo Video
[Link to 2-minute demo video - Coming soon]
## Contributing
Contributions welcome! Please open issues or PRs.
## License
MIT License - see [LICENSE](LICENSE)
## Acknowledgments
- Built with [FastMCP](https://github.com/jlowin/fastmcp)
- Powered by [LangChain](https://python.langchain.com/)
- Vector storage by [ChromaDB](https://www.trychroma.com/)
---
**Built by** [Your Name] | [GitHub](https://github.com/yourusername) | [LinkedIn](https://linkedin.com/in/yourprofile)