📚 Personal Research Assistant MCP

Python 3.11+ · MCP · License: MIT

A production-ready MCP (Model Context Protocol) server that enables semantic search across your personal research library. Built for AI Engineers who need fast, accurate document retrieval integrated with Claude Desktop and other AI tools.

🎯 Problem Statement

Researchers and professionals accumulate dozens of papers and documents but struggle to:

  • Find relevant information across multiple documents

  • Remember which paper contained specific insights

  • Connect related concepts across different sources

  • Stop losing 2+ hours a day to manual searching

Traditional keyword search misses semantic connections, and reading everything is impractical.

💡 Solution

An MCP server that:

  • Indexes documents into a vector database using semantic embeddings

  • Enables Claude (or any MCP client) to query your research library conversationally

  • Targets sub-500ms response times and 85%+ retrieval accuracy (measured results in METRICS.md)

  • Includes a Streamlit dashboard for management and metrics

🏗️ Architecture

Documents (PDF/DOCX/HTML/MD)
            ↓
Document Processor → Text Chunker → Embeddings
            ↓
    ChromaDB Vector Store
            ↓
    ├─→ MCP Server (FastMCP) → Claude Desktop
    └─→ Streamlit UI → Monitoring/Testing
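
The indexing half of this pipeline can be sketched in a few lines. The snippet below is illustrative only, not the project's actual code: it assumes already-extracted plain text, the local all-MiniLM-L6-v2 sentence-transformers model, a ChromaDB collection named research_docs, and the 1000/200 character chunking described later in this README.

# Illustrative indexing sketch (assumed names: research_docs, all-MiniLM-L6-v2)
from pathlib import Path

import chromadb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
client = chromadb.PersistentClient(path="data/chroma_db")
collection = client.get_or_create_collection("research_docs")

def chunk_text(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    # Overlapping character windows; the real chunker may be token- or sentence-aware.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def index_file(path: Path) -> None:
    # Embed each chunk and store it with a source reference for later citation.
    chunks = chunk_text(path.read_text(encoding="utf-8", errors="ignore"))
    collection.add(
        ids=[f"{path.stem}-{i}" for i in range(len(chunks))],
        documents=chunks,
        embeddings=model.encode(chunks).tolist(),
        metadatas=[{"source": str(path)} for _ in chunks],
    )

for doc in Path("sample_papers").glob("*.txt"):
    index_file(doc)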

✨ Features

  • Semantic Search: Natural language queries across your entire library

  • Multi-Format Support: PDF, DOCX, HTML, Markdown, TXT

  • Fast Retrieval: sub-500ms target query latency on 1,000+ chunks

  • MCP Integration: Works with Claude Desktop, VS Code, and any MCP client

  • Metadata Extraction: Automatically extracts titles, authors, keywords

  • Query Logging: Track usage and performance metrics

  • Streamlit Dashboard: Upload, search, and visualize metrics

📊 Performance Metrics

| Metric             | Target      | Actual         |
|--------------------|-------------|----------------|
| Retrieval Accuracy | 85%         | See METRICS.md |
| Query Latency      | <500ms      | See METRICS.md |
| Scale              | 10k+ chunks | 1782+ chunks   |

🚀 Installation

Prerequisites

  • Python 3.11+

  • 2GB RAM minimum

  • Git

Setup

# Clone repository
git clone https://github.com/yourusername/research-assistant-mcp.git
cd research-assistant-mcp

# Create virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Install local embeddings
pip install sentence-transformers

# Configure environment
cp .env.example .env
# Edit .env - add OPENAI_API_KEY if using OpenAI embeddings

Download Sample Data

# Download 25 AI/ML papers from arXiv
python scripts/download_sample_papers.py --count 25

Index Documents

# Index sample papers
python scripts/index_docs.py --folder ./sample_papers

# Or index your own documents
python scripts/index_docs.py --folder /path/to/your/papers --recursive

📖 Usage

Start MCP Server

python mcp_server/server.py
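
For orientation, a stripped-down server along these lines could expose the search tool via FastMCP. This is a hedged sketch, not the contents of mcp_server/server.py; the collection name and result fields are assumptions.

# Minimal FastMCP sketch (illustrative; not the project's actual server.py)
import chromadb
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("research-assistant")
collection = chromadb.PersistentClient(path="data/chroma_db").get_or_create_collection("research_docs")

@mcp.tool()
def search_documents(query: str, top_k: int = 5) -> list[dict]:
    """Semantic search over the indexed research library."""
    results = collection.query(query_texts=[query], n_results=top_k)
    return [
        {"text": doc, "source": meta.get("source"), "distance": dist}
        for doc, meta, dist in zip(
            results["documents"][0], results["metadatas"][0], results["distances"][0]
        )
    ]

if __name__ == "__main__":
    mcp.run()  # stdio transport, which is what Claude Desktop expects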

Configure Claude Desktop

Add to claude_desktop_config.json:

Mac: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json

{
  "mcpServers": {
    "research-assistant": {
      "command": "python",
      "args": ["/full/path/to/research-assistant-mcp/mcp_server/server.py"],
      "env": {}
    }
  }
}

Restart Claude Desktop.

Launch Streamlit UI

streamlit run ui/app.py

Opens at http://localhost:8501
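
As a rough idea of what the dashboard's search page does, here is a minimal Streamlit sketch. It is illustrative only and assumes the same research_docs ChromaDB collection as above, not the actual ui/app.py.

# Minimal Streamlit search page (illustrative; not the actual ui/app.py)
import chromadb
import streamlit as st

st.title("Research Library Search")
collection = chromadb.PersistentClient(path="data/chroma_db").get_or_create_collection("research_docs")

query = st.text_input("Ask your library a question")
if query:
    results = collection.query(query_texts=[query], n_results=5)
    for doc, meta in zip(results["documents"][0], results["metadatas"][0]):
        st.markdown(f"**{meta.get('source', 'unknown source')}**")
        st.write(doc[:500])  # show a preview of each matching chunk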

🛠️ MCP Tools

search_documents

Semantic search across your library.

Query: "What are the challenges in RAG systems?"
Returns: Top-k results with sources, scores, and metadata

get_document_summary

Get quick overview of a document.

Input: Document path or title
Returns: Title, author, keywords, preview

Find documents similar to a topic.

Query: "prompt engineering techniques"
Returns: Related papers with relevance scores
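
Outside Claude Desktop, the same tools can be called programmatically. Below is a hedged sketch using the MCP Python SDK over stdio; the tool argument name (query) is an assumption based on the descriptions above.

# Calling search_documents from Python via the MCP SDK (illustrative)
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    server = StdioServerParameters(command="python", args=["mcp_server/server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "search_documents",
                {"query": "What are the challenges in RAG systems?"},  # argument name assumed
            )
            print(result.content)

asyncio.run(main())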

📁 Project Structure

research-assistant-mcp/
├── mcp_server/               # MCP server implementation
│   └── server.py
├── rag_pipeline/             # RAG components
│   ├── config.py
│   ├── document_processor.py
│   ├── chunker.py
│   ├── vector_store.py
│   ├── retriever.py
│   └── metadata_extractor.py
├── ui/                       # Streamlit dashboard
│   ├── app.py
│   └── pages/
├── scripts/                  # CLI utilities
│   ├── index_docs.py
│   └── download_sample_papers.py
├── tests/                    # Testing & benchmarks
│   ├── sample_queries.json
│   └── benchmark_performance.py
├── data/                     # Data storage
│   ├── chroma_db/
│   └── query_logs/
└── docs/                     # Documentation
    └── METRICS.md

🧪 Testing

# Run performance benchmarks
python tests/benchmark_performance.py

# Output: Accuracy, latency, scale metrics
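
For a quick latency spot-check without the full benchmark suite, something along these lines works. It is a sketch that assumes the research_docs collection used earlier, not the contents of benchmark_performance.py.

# Quick query-latency spot-check (illustrative; not benchmark_performance.py)
import statistics
import time

import chromadb

collection = chromadb.PersistentClient(path="data/chroma_db").get_or_create_collection("research_docs")
queries = [
    "What are the challenges in RAG systems?",
    "How does RLHF improve model alignment?",
    "Explain transformer attention mechanisms",
]

latencies_ms = []
for q in queries:
    start = time.perf_counter()
    collection.query(query_texts=[q], n_results=5)
    latencies_ms.append((time.perf_counter() - start) * 1000)

print(f"median latency: {statistics.median(latencies_ms):.1f} ms")
print(f"max latency:    {max(latencies_ms):.1f} ms")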

🐳 Docker Deployment

# Build and run
docker-compose up -d

# Access UI at http://localhost:8501
# MCP server runs on localhost:8000

📈 Example Queries

  1. Cross-document synthesis
    "Compare different fine-tuning approaches for LLMs"

  2. Concept exploration
    "How does RLHF improve model alignment?"

  3. Technical details
    "Explain transformer attention mechanisms"

  4. Literature review
    "What are recent developments in RAG systems?"

🔧 Customization

Change Embedding Model

Edit .env:

# OpenAI (paid, best quality)
EMBEDDING_MODEL=text-embedding-3-small

# Local sentence-transformers embeddings (free) are the default - no changes needed

Adjust Chunk Size

Edit .env:

CHUNK_SIZE=1000      # Characters per chunk
CHUNK_OVERLAP=200    # Overlap between chunks
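
To see how the two settings interact, the sketch below reproduces the arithmetic: each chunk starts CHUNK_SIZE - CHUNK_OVERLAP characters after the previous one. It is an illustration of the parameters, not the project's chunker.

# How CHUNK_SIZE / CHUNK_OVERLAP translate into windows (illustrative)
import os

size = int(os.getenv("CHUNK_SIZE", "1000"))
overlap = int(os.getenv("CHUNK_OVERLAP", "200"))
stride = size - overlap  # a new chunk starts every 800 characters with the defaults

text = "x" * 5000  # stand-in for one extracted document
chunks = [text[i:i + size] for i in range(0, len(text), stride)]
print(len(chunks), "chunks")  # 7 chunks for 5,000 characters at 1000/200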

Add Document Types

Edit rag_pipeline/document_processor.py to add new file type handlers.
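
How that file is organized internally is not documented here, so the sketch below only shows the general shape of an extension-based handler; extract_csv and EXTRACTORS are hypothetical names, not the module's actual API.

# Hypothetical shape of a new file-type handler (names are illustrative)
import csv
from pathlib import Path

def extract_csv(path: Path) -> str:
    # Flatten CSV rows into plain text so they can be chunked and embedded.
    with path.open(newline="", encoding="utf-8") as f:
        return "\n".join(", ".join(row) for row in csv.reader(f))

# Register next to the existing .pdf / .docx / .html / .md handlers.
EXTRACTORS = {".csv": extract_csv}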

🐛 Troubleshooting

ChromaDB errors: Delete data/chroma_db and re-index
Import errors: Verify pip install -r requirements.txt completed
UI blank: Check browser console, try Chrome/Firefox
Slow queries: Reduce TOP_K_RESULTS in .env

🚧 Future Enhancements

  • Auto-watch folder for new documents

  • Cross-encoder reranking for better accuracy

  • Multi-modal support (images, diagrams)

  • Citation network graph

  • Export to Notion/Obsidian

  • Web interface (FastAPI + React)

🎥 Demo Video

[Link to 2-minute demo video - Coming soon]

🤝 Contributing

Contributions welcome! Please open issues or PRs.

📄 License

MIT License - see LICENSE

🙏 Acknowledgments


Built by [Your Name] | GitHub | LinkedIn
