# Model Context Protocol (MCP) – Comprehensive Guide
**"The guide I needed when I had no idea why anyone would build an MCP server for an AI assistant." — adapted from Hailey Quach's original tutorial**
---
## Table of Contents
- [Introduction](#introduction)
- [The Evolution of Tool Integration with LLMs](#the-evolution-of-tool-integration-with-llms)
- [What is MCP, Really?](#what-is-mcp-really)
- [MCP vs RAG](#mcp-vs-rag)
- [Why MCP Matters](#why-mcp-matters)
- [Core Capabilities of an MCP Server](#core-capabilities-of-an-mcp-server)
- [Real-World Example: Claude Desktop + MCP](#real-world-example-claude-desktop--mcp)
- [Build Your Own: Step-by-Step Server Setup](#build-your-own-step-by-step-server-setup)
- [Best Practices, Tips & Issues](#best-practices-tips--issues)
- [References & Attributions](#references--attributions)
---
## Introduction
Most AI tool integration and memory hacks feel like patchwork until you hit scale or production. **Model Context Protocol (MCP)** changes that: it gives your LLM a persistent, programmable bridge to external tools and data, all through a standardized, modular interface.
---
## The Evolution of Tool Integration with LLMs
The journey of integrating tools with LLMs has evolved through several stages:
1. **Standalone LLMs:** Only able to access information present in their training data. No interaction with external sources.
2. **Tool Binding:** Code-level hooks (e.g., LangChain's `.bind_tools`) wired into each script—non-scalable, stateless, and hard to manage across applications (every app and every agent needs its own custom tool code).
3. **Application Integration Hell:** Each AI app (IDE, chat, desktop) needed its own glue for every tool/data source—resulting in a tangled and fragile setup.
**Enter MCP:** A unified, protocol-driven adapter layer that lets your LLM app talk to resources, memory, and external tools, all standardized.
---
## What is MCP, Really?
MCP stands for **Model Context Protocol**, where:
- **Model:** Your LLM (Claude, GPT, etc.).
- **Context:** The extra info/tools/models you want it to leverage (docs, API, user history).
- **Protocol:** Standardized, modular, two-way communication between client/host and server.
> Anthropic describes MCP as the "USB-C port for AI applications": it standardizes how AI applications interact with external data, tools, and memory.
---
## MCP vs RAG
Here's a comparison between MCP and RAG (Retrieval-Augmented Generation):
| Topic | MCP | RAG |
|-------------------|------------------------------------------------|----------------------------------------------|
| Abstraction | Protocol-based, modular, plug-and-play | App-specific code, tightly-coupled |
| Add/Update Tools | Server-side change only | Update client/app code per new tool |
| Communication | Bidirectional, supports memory and tools | Mostly one-way, context injection |
| Maintainability | High, moves fast without breaking the client | Low, prone to code spaghetti |
| Code Example | Configure new server/tools, host unchanged | New retrievers/loaders/embeddings in app |
**Key Takeaway:** MCP provides a cleaner, more maintainable approach to integrating external tools and data with LLMs compared to traditional RAG implementations.
---
## Why MCP Matters
MCP offers several significant advantages:
- **No more brittle glue code:** Clean, scalable. Change servers and tools—not app/client code.
- **Application mobility:** Your LLM assistant can access files, APIs, cloud services, and persistent memory from any MCP-capable host, instead of being locked into a single stateless app.
- **Bidirectional, persistent memory:** Truly stateful AI assistants that remember user history, preferences, and session/project context.
- **Rapid iteration:** Experiment and update tools/data with zero risk to host code.
---
## Core Capabilities of an MCP Server
An MCP server can expose three main types of capabilities:
- **Resources:** Document or data sources (read-only)—PDFs, APIs, vector DBs.
- **Tools:** Functions your LLM can invoke (run code, search, organize files).
- **Prompts:** Predefined templates to structure LLM output.
All are exposed through one protocol, making it easy to extend functionality without modifying client code.
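As a rough mental model (plain Python, not the real MCP SDK—all names here are illustrative), the three capability types can be pictured as entries in one registry that the protocol exposes uniformly, so the client never needs tool-specific code:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class CapabilityRegistry:
    """Toy model of an MCP server's capabilities (illustrative, not the SDK)."""
    resources: Dict[str, str] = field(default_factory=dict)             # read-only data
    tools: Dict[str, Callable[..., str]] = field(default_factory=dict)  # invocable functions
    prompts: Dict[str, str] = field(default_factory=dict)               # output templates

    def list_capabilities(self) -> Dict[str, List[str]]:
        # One uniform "protocol" view: adding a tool changes only the server side.
        return {
            "resources": sorted(self.resources),
            "tools": sorted(self.tools),
            "prompts": sorted(self.prompts),
        }

# Populate the registry; the client only ever calls list_capabilities().
registry = CapabilityRegistry()
registry.resources["readme"] = "MCP guide contents..."
registry.tools["search_docs"] = lambda query: f"results for {query!r}"
registry.prompts["summarize"] = "Summarize the following: {text}"
```

The point of the sketch: registering a new tool touches only the server-side registry, while the client's view (`list_capabilities()`) stays structurally identical.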
---
## Real-World Example: Claude Desktop + MCP
**Goal:** Make Claude Desktop interact with your file system.
### Step-by-Step Setup:
1. **Download and install Claude Desktop.**
2. **Install Node.js** (required to run `npx`-based MCP servers like the one below).
3. **Enable Developer Mode** in Claude Desktop.
4. **Edit config** (`claude_desktop_config.json`):
**For macOS:**
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/yourusername/Desktop",
        "/Users/yourusername/Downloads"
      ]
    }
  }
}
```
**For Windows:**
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "C:\\Users\\yourusername\\Desktop",
        "C:\\Users\\yourusername\\Downloads"
      ]
    }
  }
}
```
(Replace paths/usernames accordingly.)
5. **Restart Claude Desktop.** The Developer "hammer" icon appears—your custom server is loaded.
6. **Try Real Tasks:** e.g.,
- "Save this file to Desktop."
- "Search my Downloads for PDFs."
**Result:** Claude uses MCP tools to read, write, move, and find files safely—with full user control and approval.
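Under the hood, host and server exchange JSON-RPC 2.0 messages over stdio. A file-read request in the setup above would look roughly like this (the tool name and path are illustrative; check your server's `tools/list` output for exact names):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "read_file",
    "arguments": { "path": "/Users/yourusername/Desktop/notes.txt" }
  }
}
```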
---
## Build Your Own: Step-by-Step Server Setup
### Requirements
- Python 3.10+
- [uv package manager](https://github.com/astral-sh/uv)
- MCP SDK (`pip install "mcp[cli]"`)
- Node.js (optional; only needed for Node-based servers such as the filesystem example above)
- (Optional) [Serper API Key](https://serper.dev/) for live web search
### Step 1: Install Packages
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
uv init mcp-server
cd mcp-server
uv venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
uv add "mcp[cli]" httpx beautifulsoup4 python-dotenv
```
### Step 2: Create Server
Create a `main.py` file with your MCP server implementation:
```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("docs")

@mcp.tool()
async def get_docs(query: str, library: str) -> str:
    """Return doc results for a query from a given library."""
    # Your implementation here, e.g. call a search API with httpx
    # and parse the results with BeautifulSoup.
    raise NotImplementedError("add your docs lookup here")

if __name__ == "__main__":
    mcp.run(transport="stdio")
```
**Additional Setup:**
- Add API keys in `.env` if your tool connects to web APIs.
- Define more tools/resources as needed!
### Step 3: Run and Test
Run your server in development mode:
```bash
mcp dev main.py
```
Use [MCP Inspector](https://modelcontextprotocol.io/docs/tools/inspector) to test and debug your server.
### Step 4: Integrate with Claude (or any MCP client)
- Install with `mcp install main.py`
- Correct the CLI path in `claude_desktop_config.json` if needed.
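After installation, the entry in `claude_desktop_config.json` should look roughly like this (the server name and directory are placeholders; use the absolute path to `uv` as the `command` if the bare name isn't found):

```json
{
  "mcpServers": {
    "docs": {
      "command": "uv",
      "args": [
        "--directory",
        "/absolute/path/to/mcp-server",
        "run",
        "main.py"
      ]
    }
  }
}
```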
---
## Best Practices, Tips & Issues
### Best Practices
- **Update only the MCP server for new tools or doc sources.** Don't touch host/client code.
- **Respect ethical scraping and API limits** (robots.txt, fair use).
- **Use development mode and Inspector for easy debugging.**
- **Check the CLI path** ("uv" vs. absolute) for reliability on all OSes.
- **Persistent memory:** Use MCP's bidirectional protocol to enable real assistant "recall" across sessions and tasks.
### Common Issues and Solutions
- **Path Issues:** Ensure absolute paths are used in configuration files, especially on Windows.
- **Permission Errors:** Make sure the MCP server has appropriate permissions to access required resources.
- **Connection Problems:** Verify that the server is running and the configuration file paths are correct.
---
## References & Attributions
- **Original Tutorial:** Hailey Quach, "How I Finally Understood MCP—and Got It Working in Real Life", towardsdatascience.com
- [Anthropic Model Context Protocol Docs](https://modelcontextprotocol.io/introduction)
- [MCP Prebuilt Servers Repo](https://github.com/modelcontextprotocol/servers)
- [LangChain – MCP From Scratch Walkthrough](https://mirror-feeling-d80.notion.site/MCP-From-Scratch-1b9808527b178040b5baf83a991ed3b2?pvs=4)
> Large portions of this guide are adapted and condensed from Hailey Quach's tutorial; for deeper explanations, see her full article cited above.
---
**Enjoy building modular, maintainable AI assistants with MCP.**