{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Introduction to Model Context Protocol (MCP)\n",
"\n",
"## What is MCP?\n",
"\n",
"The **Model Context Protocol (MCP)** is an open protocol that standardizes how applications provide context to Large Language Models (LLMs). Think of it as a universal adapter that lets AI assistants connect to different data sources and tools.\n",
"\n",
"### Key Concepts\n",
"\n",
"1. **MCP Server**: Exposes data and functionality through standardized \"tools\"\n",
"2. **MCP Client**: Connects to servers and calls tools (like Claude Desktop, AI assistants)\n",
"3. **Tools**: Functions that the LLM can call to perform specific tasks\n",
"4. **Schemas**: Define the structure of inputs and outputs using Pydantic models\n",
"\n",
"### Why MCP?\n",
"\n",
"- **Reusability**: Build a tool once, use it in any MCP-compatible client\n",
"- **Composability**: Combine multiple MCP servers for complex workflows \n",
"- **Standardization**: No need to build custom integrations for each AI assistant\n",
"- **Type Safety**: Pydantic schemas ensure data validation\n",
"\n",
"## Real-World Example\n",
"\n",
"Instead of copying text into Claude, you can:\n",
"1. Build an MCP server that searches your documents\n",
"2. Connect Claude to your server\n",
"3. Ask Claude questions, and it uses your tools automatically\n",
"\n",
"This notebook demonstrates how to build MCP servers using **FastMCP**."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## MCP vs FastMCP: Understanding the Difference\n",
"\n",
"### **MCP (Model Context Protocol)** - The Protocol\n",
"- **What it is**: An open **protocol/standard** (like HTTP or REST)\n",
"- **Purpose**: Defines the rules for how applications provide context to LLMs\n",
"- **Role**: Specifies how MCP servers and clients communicate\n",
"- **Analogy**: Like the blueprint or specification for a house\n",
"\n",
"### **FastMCP** - The Python Library\n",
"- **What it is**: A Python **library/framework** for building MCP servers\n",
"- **Purpose**: Makes it easy to implement MCP servers in Python\n",
"- **Role**: Provides decorators (`@mcp.tool()`), utilities, and abstractions to quickly create MCP-compliant servers\n",
"- **Analogy**: Like Express.js is to HTTP: a framework built on top of the underlying protocol\n",
"\n",
"### The Relationship\n",
"\n",
"```\n",
"MCP Protocol (the spec)\n",
" ↓ implemented by\n",
"FastMCP (Python library)\n",
" ↓ used to build\n",
"Your MCP Server (text-analyzer, document-search, etc.)\n",
" ↓ connects to\n",
"MCP Clients (Claude Desktop, AI assistants)\n",
"```\n",
"\n",
"### Other MCP Implementations\n",
"- **FastMCP**: Python-focused, rapid development (what we use in this notebook)\n",
"- **TypeScript SDK**: Official MCP SDK for Node.js/TypeScript\n",
"- **Other libraries**: Various languages, all implementing the same MCP protocol\n",
"- **Key Point**: All implementations are interoperable because they follow the same protocol\n",
"\n",
"**In this notebook**, we use **FastMCP** (the library) to build servers that follow the **MCP** (the protocol) specification.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## The History and Split: A Deeper Dive\n",
"\n",
"### The Story\n",
"**FastMCP 1.0** was originally created as an independent framework and was later incorporated into the official MCP Python SDK in 2024. This is why you see `fastmcp` functionality inside the official `mcp` package at `mcp.server.fastmcp`.\n",
"\n",
"However, **FastMCP 2.0** is now the actively maintained, standalone version that exists separately from the MCP SDK. This has created **two distinct packages**:\n",
"\n",
"1. **`mcp` package** (official SDK) - Contains FastMCP 1.0 integrated as `mcp.server.fastmcp`\n",
"2. **`fastmcp` package** (standalone) - FastMCP 2.0, the current production-ready framework\n",
"\n",
"### Why You See Both Installed\n",
"\n",
"When you install `fastmcp`, it **automatically installs `mcp`** as a dependency because FastMCP 2.0 builds on top of the official SDK. This means both packages coexist in your environment:\n",
"\n",
"```python\n",
"# Two different imports, two different implementations:\n",
"from fastmcp import FastMCP # FastMCP 2.0 (standalone)\n",
"from mcp.server.fastmcp import FastMCP # FastMCP 1.0 (inside SDK)\n",
"```\n",
"\n",
"### Key Differences\n",
"\n",
"While the SDK provides core functionality, **FastMCP 2.0** delivers everything needed for production:\n",
"\n",
"- **Advanced MCP patterns**: Server composition, proxying, tool transformation\n",
"- **OpenAPI/FastAPI generation**: Automatic API documentation\n",
"- **Enterprise authentication**: Google, GitHub, WorkOS, Azure, Auth0, and more\n",
"- **Deployment tools**: Production-ready utilities\n",
"- **Testing utilities**: Comprehensive testing framework\n",
"- **Client libraries**: Full-featured MCP client implementations\n",
"\n",
"\n",
"**In this notebook and your project**, we use the standalone **`fastmcp` 2.x** package for its advanced features and active development.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## MCP Architecture: Core Concepts\n",
"\n",
"### Participants\n",
"\n",
"MCP follows a **client-server architecture** with three key participants:\n",
"\n",
"1. **MCP Host**: The AI application (e.g., Claude Desktop, VS Code) that coordinates everything\n",
"2. **MCP Client**: A component inside the host that maintains a connection to one MCP server (one-to-one relationship)\n",
"3. **MCP Server**: A program that provides context and functionality to clients\n",
"\n",
"**Key Pattern**: One host can manage multiple clients, and each client connects to exactly one server.\n",
"\n",
"```\n",
"MCP Host (e.g., Claude Desktop)\n",
" ├── MCP Client 1 ──→ MCP Server 1 (e.g., Filesystem)\n",
" ├── MCP Client 2 ──→ MCP Server 2 (e.g., Database)\n",
" └── MCP Client 3 ──→ MCP Server 3 (e.g., Your text-analyzer)\n",
"```\n",
"\n",
"### Primitives\n",
"\n",
"MCP defines **primitives** - the building blocks for what servers can offer and what clients can request.\n",
"\n",
"#### Server Primitives (What servers provide):\n",
"\n",
"1. **Tools**: Executable functions the AI can call\n",
" - Example: `search_documents(query)`, `analyze_text(text)`\n",
" - Discovered via `tools/list`, executed via `tools/call`\n",
"\n",
"2. **Resources**: Data sources that provide context\n",
" - Example: File contents, database records, API responses\n",
" - Discovered via `resources/list`, retrieved via `resources/read`\n",
"\n",
"3. **Prompts**: Reusable templates for LLM interactions\n",
" - Example: System prompts, few-shot examples\n",
" - Discovered via `prompts/list`, retrieved via `prompts/get`\n",
"\n",
"#### Client Primitives (What clients can do):\n",
"\n",
"1. **Sampling**: Request LLM completions from the client's AI\n",
"2. **Elicitation**: Request user input or confirmation\n",
"3. **Logging**: Send debug/monitoring messages to the client\n",
"\n",
"### The Initialization Exchange: Capability Discovery\n",
"\n",
"Before any work happens, the client and server perform a **handshake** to negotiate capabilities:\n",
"\n",
"**Step 1: Client sends `initialize` request**\n",
"```json\n",
"{\n",
" \"method\": \"initialize\",\n",
" \"params\": {\n",
" \"protocolVersion\": \"2025-06-18\",\n",
" \"capabilities\": {\n",
" \"elicitation\": {} // Client can handle user input requests\n",
" },\n",
" \"clientInfo\": {\n",
" \"name\": \"claude-desktop\",\n",
" \"version\": \"1.0.0\"\n",
" }\n",
" }\n",
"}\n",
"```\n",
"\n",
"**Step 2: Server responds with its capabilities**\n",
"```json\n",
"{\n",
" \"result\": {\n",
" \"protocolVersion\": \"2025-06-18\",\n",
" \"capabilities\": {\n",
" \"tools\": {\"listChanged\": true}, // Supports tools + notifications\n",
" \"resources\": {} // Supports resources\n",
" },\n",
" \"serverInfo\": {\n",
" \"name\": \"text-analyzer\",\n",
" \"version\": \"1.0.0\"\n",
" }\n",
" }\n",
"}\n",
"```\n",
"\n",
"**Why This Matters**: Capability discovery ensures both sides know what features are supported, preventing errors from unsupported operations.\n",
"\n",
"### Tool Discovery\n",
"\n",
"After initialization, the client discovers available tools:\n",
"\n",
"**Request**:\n",
"```json\n",
"{\n",
" \"method\": \"tools/list\"\n",
"}\n",
"```\n",
"\n",
"**Response**:\n",
"```json\n",
"{\n",
" \"result\": {\n",
" \"tools\": [\n",
" {\n",
" \"name\": \"analyze_text\",\n",
" \"description\": \"Analyze text and return statistics\",\n",
" \"inputSchema\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"text\": {\"type\": \"string\", \"description\": \"Text to analyze\"}\n",
" },\n",
" \"required\": [\"text\"]\n",
" }\n",
" }\n",
" ]\n",
" }\n",
"}\n",
"```\n",
"\n",
"Each tool includes:\n",
"- **name**: Unique identifier\n",
"- **description**: What it does\n",
"- **inputSchema**: JSON Schema defining expected parameters\n",
"\n",
"### Tool Execution\n",
"\n",
"Once tools are discovered, the client can execute them:\n",
"\n",
"**Request**:\n",
"```json\n",
"{\n",
" \"method\": \"tools/call\",\n",
" \"params\": {\n",
" \"name\": \"analyze_text\",\n",
" \"arguments\": {\n",
" \"text\": \"Hello world! This is a test.\"\n",
" }\n",
" }\n",
"}\n",
"```\n",
"\n",
"**Response**:\n",
"```json\n",
"{\n",
" \"result\": {\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Character count: 28, Word count: 6, Sentence count: 2\"\n",
" }\n",
" ]\n",
" }\n",
"}\n",
"```\n",
"\n",
"**The Flow**:\n",
"1. LLM decides to use a tool during conversation\n",
"2. AI application intercepts the tool call\n",
"3. MCP client routes it to the appropriate server\n",
"4. Server executes and returns results\n",
"5. Results go back to LLM as conversation context"
]
},
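{
"cell_type": "markdown",
"metadata": {},
"source": [
"The five-step flow above can be sketched in plain Python. The dispatcher below is purely illustrative (a real server speaks JSON-RPC over stdio or HTTP, and `analyze_text_stub` is a made-up placeholder), but it shows how `tools/list` and `tools/call` requests route to handlers:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch only: simulate tools/list and tools/call routing.\n",
"def analyze_text_stub(text: str) -> str:\n",
"    words = text.split()\n",
"    return f\"Character count: {len(text)}, Word count: {len(words)}\"\n",
"\n",
"\n",
"TOOLS = {\n",
"    \"analyze_text\": {\n",
"        \"description\": \"Analyze text and return statistics\",\n",
"        \"handler\": analyze_text_stub,\n",
"    }\n",
"}\n",
"\n",
"\n",
"def handle_request(request: dict) -> dict:\n",
"    \"\"\"Route a simplified MCP-style request to the right handler.\"\"\"\n",
"    if request[\"method\"] == \"tools/list\":\n",
"        tools = [\n",
"            {\"name\": name, \"description\": spec[\"description\"]}\n",
"            for name, spec in TOOLS.items()\n",
"        ]\n",
"        return {\"result\": {\"tools\": tools}}\n",
"    if request[\"method\"] == \"tools/call\":\n",
"        params = request[\"params\"]\n",
"        handler = TOOLS[params[\"name\"]][\"handler\"]\n",
"        text = handler(**params[\"arguments\"])\n",
"        return {\"result\": {\"content\": [{\"type\": \"text\", \"text\": text}]}}\n",
"    return {\"error\": {\"message\": \"unknown method\"}}\n",
"\n",
"\n",
"print(handle_request({\"method\": \"tools/list\"}))\n",
"print(\n",
"    handle_request(\n",
"        {\n",
"            \"method\": \"tools/call\",\n",
"            \"params\": {\"name\": \"analyze_text\", \"arguments\": {\"text\": \"Hello world!\"}},\n",
"        }\n",
"    )\n",
")"
]
},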
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 1: Building Your First MCP Tool\n",
"\n",
"Let's build a simple MCP server with FastMCP. We'll create a tool that analyzes text and returns basic statistics.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# First, let's import the required libraries\n",
"from fastmcp import FastMCP\n",
"from pydantic import BaseModel, Field\n",
"\n",
"# Create an MCP server instance\n",
"mcp = FastMCP(name=\"text-analyzer\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Step 1: Define the Output Schema\n",
"\n",
"Pydantic models define what data your tool will return. This provides:\n",
"- Type validation\n",
"- Clear documentation\n",
"- Automatic JSON serialization\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"class TextStats(BaseModel):\n",
" \"\"\"Statistics about a text passage.\"\"\"\n",
"\n",
" char_count: int = Field(..., description=\"Total number of characters\")\n",
" word_count: int = Field(..., description=\"Total number of words\")\n",
" sentence_count: int = Field(..., description=\"Total number of sentences\")\n",
" avg_word_length: float = Field(..., description=\"Average word length in characters\")\n",
"\n",
"\n",
"# Test the schema\n",
"sample = TextStats(char_count=100, word_count=20, sentence_count=3, avg_word_length=5.0)\n",
"\n",
"print(sample.model_dump_json(indent=2))"
]
},
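{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because `TextStats` is a Pydantic model, invalid data is rejected at construction time rather than silently passed through. A quick check (assuming Pydantic v2, where `ValidationError.error_count()` is available):\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from pydantic import ValidationError\n",
"\n",
"# A non-numeric char_count cannot be coerced to int, so validation fails.\n",
"try:\n",
"    TextStats(\n",
"        char_count=\"not a number\", word_count=20, sentence_count=3, avg_word_length=5.0\n",
"    )\n",
"except ValidationError as exc:\n",
"    print(f\"Validation failed as expected: {exc.error_count()} error(s)\")"
]
},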
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Step 2: Implement the Tool Logic\n",
"\n",
"Now we implement the actual function that processes the text:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import re\n",
"\n",
"\n",
"def analyze_text(text: str) -> TextStats:\n",
" \"\"\"Analyze a text passage and return statistics.\n",
"\n",
" Args:\n",
" text: The text to analyze\n",
"\n",
" Returns:\n",
" TextStats: Statistics about the text\n",
" \"\"\"\n",
" # Count characters\n",
" char_count = len(text)\n",
"\n",
" # Count words (split on whitespace)\n",
" words = text.split()\n",
" word_count = len(words)\n",
"\n",
" # Count sentences (simple regex for . ! ?)\n",
" sentences = re.split(r\"[.!?]+\", text)\n",
" sentence_count = len([s for s in sentences if s.strip()])\n",
"\n",
" # Calculate average word length\n",
" if word_count > 0:\n",
" avg_word_length = sum(len(w) for w in words) / word_count\n",
" else:\n",
" avg_word_length = 0.0\n",
"\n",
" return TextStats(\n",
" char_count=char_count,\n",
" word_count=word_count,\n",
" sentence_count=sentence_count,\n",
" avg_word_length=round(avg_word_length, 2),\n",
" )\n",
"\n",
"\n",
"# Test the function\n",
"test_text = \"Hello world! This is a test. MCP is powerful.\"\n",
"result = analyze_text(test_text)\n",
"print(result.model_dump_json(indent=2))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Step 3: Register the Tool with FastMCP\n",
"\n",
"Now we register our function as an MCP tool using the `@mcp.tool()` decorator:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"@mcp.tool()\n",
"def analyze_text_tool(text: str) -> TextStats:\n",
" \"\"\"Analyze a text passage and return statistics.\"\"\"\n",
" return analyze_text(text)\n",
"\n",
"\n",
"print(\"✓ Tool registered successfully!\")\n",
"print(f\"Server name: {mcp.name}\")\n",
"print(\"Available tools: analyze_text_tool\")"
]
},
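{
"cell_type": "markdown",
"metadata": {},
"source": [
"FastMCP also ships an in-memory `Client` that can connect directly to a server instance, which is a convenient way to confirm registration from inside the notebook. This is a sketch based on the fastmcp 2.x client API; details may differ between versions:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from fastmcp import Client\n",
"\n",
"\n",
"async def list_server_tools():\n",
"    # The in-memory transport talks to the FastMCP instance directly,\n",
"    # with no subprocess or network in between.\n",
"    async with Client(mcp) as client:\n",
"        for tool in await client.list_tools():\n",
"            print(f\"- {tool.name}: {tool.description}\")\n",
"\n",
"\n",
"# Jupyter supports top-level await:\n",
"await list_server_tools()"
]
},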
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 2: Working with TF-IDF for Document Search\n",
"\n",
"Now let's build something more practical - a document search tool using TF-IDF (Term Frequency-Inverse Document Frequency). This is similar to what you'll build in your project.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.feature_extraction.text import TfidfVectorizer\n",
"from sklearn.metrics.pairwise import cosine_similarity\n",
"\n",
"# Sample document corpus\n",
"documents = [\n",
" \"Machine learning is a subset of artificial intelligence.\",\n",
" \"Natural language processing helps computers understand human language.\",\n",
" \"Deep learning uses neural networks with multiple layers.\",\n",
" \"Data science combines statistics, programming, and domain knowledge.\",\n",
" \"Python is a popular programming language for data analysis.\",\n",
"]\n",
"\n",
"# Create TF-IDF vectorizer and build the index\n",
"vectorizer = TfidfVectorizer(stop_words=\"english\")\n",
"tfidf_matrix = vectorizer.fit_transform(documents)\n",
"\n",
"print(f\"Number of documents: {len(documents)}\")\n",
"print(f\"TF-IDF matrix shape: {tfidf_matrix.shape}\")\n",
"print(f\"Vocabulary size: {len(vectorizer.vocabulary_)}\")"
]
},
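{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see what a single TF-IDF weight means, note that scikit-learn's default `TfidfVectorizer` uses a smoothed inverse document frequency, idf(t) = ln((1 + n) / (1 + df(t))) + 1, and then L2-normalizes each document vector. For a term such as \"learning\" that appears in 2 of our 5 documents, the idf can be computed by hand. (After fitting, this should match `vectorizer.idf_[vectorizer.vocabulary_['learning']]`.)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import math\n",
"\n",
"n_docs = 5  # documents in the corpus\n",
"df = 2      # documents containing the term, e.g. \"learning\"\n",
"\n",
"# scikit-learn's default smoothed idf: ln((1 + n) / (1 + df)) + 1\n",
"idf = math.log((1 + n_docs) / (1 + df)) + 1\n",
"print(f\"idf('learning') = {idf:.4f}\")  # ln(2) + 1, roughly 1.6931"
]
},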
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Search function\n",
"def search_documents(query: str, top_k: int = 3):\n",
" \"\"\"Search documents using cosine similarity.\"\"\"\n",
" # Transform the query using the same vectorizer\n",
" query_vector = vectorizer.transform([query])\n",
"\n",
" # Calculate cosine similarity\n",
" similarities = cosine_similarity(query_vector, tfidf_matrix).flatten()\n",
"\n",
" # Get top k results\n",
" top_indices = similarities.argsort()[-top_k:][::-1]\n",
"\n",
" results = []\n",
" for idx in top_indices:\n",
" results.append({\"document\": documents[idx], \"score\": float(similarities[idx])})\n",
"\n",
" return results\n",
"\n",
"\n",
"# Test the search\n",
"query = \"programming and data\"\n",
"results = search_documents(query)\n",
"\n",
"print(f\"Query: '{query}'\\n\")\n",
"for i, result in enumerate(results, 1):\n",
" print(f\"{i}. Score: {result['score']:.4f}\")\n",
" print(f\" {result['document']}\\n\")"
]
},
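{
"cell_type": "markdown",
"metadata": {},
"source": [
"`cosine_similarity` is nothing more than the dot product of two vectors divided by the product of their lengths. A standard-library version for small dense vectors makes the formula concrete:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import math\n",
"\n",
"\n",
"def cosine(u, v):\n",
"    \"\"\"Cosine of the angle between two equal-length vectors.\"\"\"\n",
"    dot = sum(a * b for a, b in zip(u, v))\n",
"    norm_u = math.sqrt(sum(a * a for a in u))\n",
"    norm_v = math.sqrt(sum(b * b for b in v))\n",
"    if norm_u == 0 or norm_v == 0:\n",
"        return 0.0  # convention: the zero vector has no direction\n",
"    return dot / (norm_u * norm_v)\n",
"\n",
"\n",
"print(cosine([1, 0, 1], [1, 1, 0]))  # shares one of two nonzero dims, ~0.5\n",
"print(cosine([1, 2, 3], [2, 4, 6]))  # same direction, different length, ~1.0"
]
},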
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 3: Building a Document Search MCP Tool\n",
"\n",
"Now let's combine everything into an MCP tool that searches documents and returns structured results:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"class SearchResult(BaseModel):\n",
" \"\"\"A single search result.\"\"\"\n",
"\n",
" document: str = Field(..., description=\"The document text\")\n",
" score: float = Field(..., description=\"Similarity score (0-1)\")\n",
" rank: int = Field(..., description=\"Rank in results (1-based)\")\n",
"\n",
"\n",
"class SearchResponse(BaseModel):\n",
" \"\"\"Response from document search.\"\"\"\n",
"\n",
" query: str = Field(..., description=\"The search query\")\n",
" results: list[SearchResult] = Field(..., description=\"List of search results\")\n",
" total_documents: int = Field(..., description=\"Total documents searched\")\n",
"\n",
"\n",
"# Create MCP tool for document search\n",
"@mcp.tool()\n",
"def search_corpus_tool(query: str, top_k: int = 3) -> SearchResponse:\n",
" \"\"\"Search the document corpus and return top matches.\"\"\"\n",
" query_vector = vectorizer.transform([query])\n",
" similarities = cosine_similarity(query_vector, tfidf_matrix).flatten()\n",
" top_indices = similarities.argsort()[-top_k:][::-1]\n",
"\n",
" results = [\n",
" SearchResult(\n",
" document=documents[idx], score=float(similarities[idx]), rank=i + 1\n",
" )\n",
" for i, idx in enumerate(top_indices)\n",
" ]\n",
"\n",
" return SearchResponse(query=query, results=results, total_documents=len(documents))\n",
"\n",
"\n",
"# Test the tool\n",
"# Note: In fastmcp 2.12.4+, decorated functions are wrapped, so use .fn to call them\n",
"response = search_corpus_tool.fn(\"machine learning neural networks\")\n",
"print(response.model_dump_json(indent=2))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 4: Your Project - Text Analysis MCP Server\n",
"\n",
"This project has **two phases**:\n",
"\n",
"### Phase 1: Group Work - Baseline MCP Server\n",
"\n",
"Together as a group, you'll implement an MCP server with **two baseline tools**:\n",
"\n",
"**Tool 1: `corpus_answer`**\n",
"- **Input**: A question/query string\n",
"- **Process**: Search your corpus using TF-IDF, find most relevant documents\n",
"- **Output**: Answer with citations (document references and snippets)\n",
"\n",
"**Tool 2: `text_profile`**\n",
"- **Input**: Text or document ID\n",
"- **Process**: Analyze the text for various features\n",
"- **Output**: Text profile including:\n",
" - Character/word/sentence counts\n",
" - Readability score (Flesch Reading Ease)\n",
" - Sentiment analysis (VADER)\n",
" - Top keywords/n-grams\n",
" - Type-token ratio\n",
"\n",
"### Phase 2: Individual Work - Your Custom Tool\n",
"\n",
"After completing the baseline tools, **each student** will:\n",
"\n",
"1. **Create a feature branch** for your custom tool\n",
"2. **Design and implement** a non-trivial MCP tool relevant to your field\n",
" - Examples: policy analysis, data transformation, citation extraction, medical terminology extraction, etc.\n",
"3. **Write tests** for your tool\n",
"4. **Demo your tool** showing its real-world application\n",
"\n",
"### Key Skills You'll Learn\n",
"\n",
"1. **Pydantic Schemas**: Define structured data models\n",
"2. **TF-IDF Search**: Build a document search engine\n",
"3. **Text Analytics**: Calculate readability, sentiment, and linguistic features\n",
"4. **FastMCP**: Create production-ready MCP servers\n",
"5. **Domain Application**: Apply MCP concepts to your own field\n",
"\n",
"The goal is to understand MCP fundamentals together, then create something **unique and useful** for your domain.\n"
]
},
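{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a taste of the `text_profile` features, the type-token ratio is the number of distinct words (types) divided by the total number of words (tokens). A naive sketch, using lowercasing and punctuation stripping as a stand-in for real tokenization:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import string\n",
"\n",
"\n",
"def type_token_ratio(text: str) -> float:\n",
"    \"\"\"Distinct words divided by total words (naive tokenization).\"\"\"\n",
"    table = str.maketrans(\"\", \"\", string.punctuation)\n",
"    tokens = text.lower().translate(table).split()\n",
"    if not tokens:\n",
"        return 0.0\n",
"    return len(set(tokens)) / len(tokens)\n",
"\n",
"\n",
"print(type_token_ratio(\"the cat sat on the mat\"))  # 5 types / 6 tokens\n",
"print(type_token_ratio(\"Hello, hello!\"))           # 1 type  / 2 tokens -> 0.5"
]
},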
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Testing Your MCP Server\n",
"\n",
"### Step 1: Quick Function Test\n",
"\n",
"The easiest way to verify your MCP server works is to test the tools directly:\n",
"\n",
"```bash\n",
"# Inside the Docker container\n",
"make run-interactive\n",
"uv run python tests/manual_server_test.py\n",
"```\n",
"\n",
"You should see output showing both tools working correctly. **This is sufficient for your project demo!**\n",
"\n",
"---\n",
"\n",
"### Step 2: (Optional) MCP Inspector for Advanced Testing\n",
"\n",
"For those who want to test the full MCP protocol (optional, not required):\n",
"\n",
"**Quick Setup:**\n",
"```bash\n",
"# Terminal 1: Start Docker container\n",
"make run-interactive\n",
"\n",
"# Terminal 2: Run Inspector on HOST \n",
"npx @modelcontextprotocol/inspector\n",
"# In browser: STDIO transport, command: ./run_mcp_server.sh\n",
"```\n",
"\n",
"**Note**: Requires Node.js on your HOST machine. See README for installation instructions.\n",
"\n",
"---\n",
"\n",
"### What to Demo\n",
"\n",
"**For the baseline tools (group work):**\n",
"1. Run `python tests/manual_server_test.py` showing both tools work\n",
"2. Briefly explain your implementation approach\n",
"\n",
"**For your custom tool (individual work - this is the main focus!):**\n",
"1. **Tool demonstration** - Show your custom tool working with real examples\n",
"2. **Code walkthrough** - Explain:\n",
" - What problem it solves in your domain\n",
" - How you designed the Pydantic schemas\n",
" - Key implementation decisions\n",
"3. **Test results** - Show your tests passing\n",
"4. **Real-world application** - Explain how this tool could be used in your field\n",
"\n",
"**The key**: Focus on your **custom tool's implementation** and **domain application**!\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next Steps: Your Project Timeline\n",
"\n",
"### Phase 1: Group Work - Complete the Notebook & Baseline Tools\n",
"\n",
"1. **Work through this notebook together** (Parts 1-3)\n",
" - Understand Pydantic schemas and MCP tool patterns\n",
" - Learn TF-IDF for document search\n",
" - Practice with the examples\n",
"\n",
"2. **Implement baseline tools as a group**\n",
" - `corpus_answer` - Document search with TF-IDF\n",
" - `text_profile` - Text analytics\n",
" - Write tests and verify they pass\n",
"\n",
"3. **Test the baseline server**\n",
" - Run `make test` to verify unit tests pass\n",
" - Run `python tests/manual_server_test.py` to test MCP integration\n",
"\n",
"### Phase 2: Individual Work - Your Custom Tool\n",
"\n",
"1. **Create your feature branch**\n",
" ```bash\n",
" git checkout -b student/your-name-tool-name\n",
" ```\n",
"\n",
"2. **Design your custom tool**\n",
" - Choose a non-trivial tool relevant to your field\n",
" - Design Pydantic schemas for inputs/outputs\n",
" - Plan your implementation\n",
"\n",
"3. **Implement and test**\n",
" - Create your tool in `src/mcp_server/tools/`\n",
" - Register it in `server.py`\n",
" - Write tests in `tests/mcp_server/`\n",
" - Verify everything works with `make test`\n",
"\n",
"4. **Prepare your demo**\n",
" - Show your custom tool working\n",
" - Explain your design and domain application\n",
" - Present test results\n",
"\n",
"### Resources\n",
"\n",
"**MCP & FastMCP**:\n",
"- [FastMCP Documentation](https://gofastmcp.com)\n",
"- [Model Context Protocol](https://modelcontextprotocol.io)\n",
"- [MCP Inspector GitHub](https://github.com/modelcontextprotocol/inspector)\n",
"\n",
"**Python & Data Science**:\n",
"- [Pydantic Tutorial](https://docs.pydantic.dev)\n",
"- [Scikit-learn TF-IDF](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html)\n",
"- [VADER Sentiment](https://github.com/cjhutto/vaderSentiment)\n",
"- [Textstat Readability](https://pypi.org/project/textstat/)\n",
"\n",
"### Getting Help\n",
"- Check the `src/mcp_server/` directory for code scaffolding\n",
"- Test incrementally - build one tool at a time\n",
"- Use `make test` to run pytest unit tests\n",
"- Use MCP Inspector to debug server issues\n",
"- Review server logs when tools don't work as expected\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.11"
}
},
"nbformat": 4,
"nbformat_minor": 2
}