mcp_usage.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "4dc81697", "metadata": {}, "source": [ "# Step 1: Activate Virtual Environment & Run the MCP Server\n", "```Run Command (From project-root directory): uv run .\\mcp_server\\main.py```" ] }, { "cell_type": "markdown", "id": "70e7d093", "metadata": { "vscode": { "languageId": "html" } }, "source": [ "# Step 2: Connect Server to MCP Client & Grab Tools" ] }, { "cell_type": "code", "execution_count": 1, "id": "cf62180f", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "WARNING: To avoid name collisions, the character '__' is used as a separator. When creating tool IDs (server__tool), always split using this same separator to recover the server and tool name correctly.\n", "WARNING: To avoid name collisions, the character '__' is used as a separator. When creating tool IDs (server__tool), always split using this same separator to recover the server and tool name correctly.\n" ] } ], "source": [ "import json\n", "from mcp_client import McpClientPool, SEPARATOR # The separator is used for avoiding duplicate tool names. \n", "\n", "# NOTE: Since each server requires it's own MCP client, \n", "# we can manage multiple clients with an MCP client pool\n", "# that holds all the connections. This all|ws us to \n", "# safely open and close all clients.\n", "mcp_pool = McpClientPool()\n", "\n", "# Setting up the server information \n", "# NOTE: Local Servers!\n", "# - Local servers will always have the /mcp\n", "# - By default, all local servers will be HTTP not HTTPS\n", "server_id = \"demo\"\n", "server_url = \"http://127.0.0.1:4000/mcp\" \n", "\n", "# Connecting to the server and saving the tools\n", "await mcp_pool.add_client(name=server_id, base_url=server_url)\n", "\n", "# NOTE: All the tools available also exist in the mcp_pool's instance field all_tools (mcp.all_tools).\n", "# However, for demo purposes we will explicitly get the tools\n", "tools = await mcp_pool.list_tools(server_id)" ] }, { "cell_type": "code", "execution_count": null, "id": "d8cbd24b", "metadata": {}, "outputs": [], "source": [ "for i, t in enumerate(tools):\n", " print(f\"Tool {i+1}: \")\n", " print(json.dumps(t, indent=4), end=\"\\n\\n\")" ] }, { "cell_type": "markdown", "id": "3ea5ecb7", "metadata": {}, "source": [ "# Step 3: Ask a question" ] }, { "cell_type": "code", "execution_count": 2, "id": "34761b48", "metadata": {}, "outputs": [], "source": [ "query = \"What is the MCP, Model Context Protocol?\"" ] }, { "cell_type": "markdown", "id": "c783240b", "metadata": {}, "source": [ "# Step 4: Pass Query and Tools to LLM & Select Best Tool" ] }, { "cell_type": "code", "execution_count": 3, "id": "7d7a9fad", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Selected Tool: demo__search_internet_and_answer\n", "Arguments: {\"query\":\"What is the MCP, Model Context Protocol?\"}\n" ] } ], "source": [ "import os\n", "from openai import AzureOpenAI\n", "from dotenv import load_dotenv\n", "load_dotenv()\n", "\n", "openai_client = AzureOpenAI(\n", " api_key=os.getenv('AZURE_API_KEY'),\n", " api_version=os.getenv('VERSION'),\n", " azure_endpoint=os.getenv('ENDPOINT'),\n", ")\n", "\n", "response = openai_client.chat.completions.create(\n", " model=os.getenv('MODEL'), \n", " messages=[\n", " # Since GPT's internal datasource is huge, sometimes the model will directly answer the query. 
\n", " # For demo purposes we force it to select a tool via the prompt.\n", " {\"role\": \"user\", \"content\": f\"You must select a tool from the provided tools given a query {query}\"}\n", " ],\n", " functions=tools,\n", " function_call=\"auto\" # \"auto\" lets the model decide to call the tool\n", ")\n", "\n", "selected_tool = response.choices[0].message.function_call\n", "print(f\"Selected Tool: {selected_tool.name}\")\n", "print(f\"Arguments: {selected_tool.arguments}\")" ] }, { "cell_type": "markdown", "id": "7aaa296e", "metadata": {}, "source": [ "# Step 5: Call The Tool!" ] }, { "cell_type": "code", "execution_count": 5, "id": "a825708e", "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "### What’s the MCP (Model Context Protocol) in plain English?\n", "\n", "The **MCP (Model Context Protocol)** is basically like a universal plug for connecting AI assistants (like Claude) to wherever your data is stored—Google Drive, Slack, GitHub, databases, you name it.\n", "\n", "#### Why does this even matter?\n", "AI models are often **stuck in info jail**. They’re smart, but can’t magically grab data from random places unless someone builds a custom bridge for each one. Every time you want your AI to access a new tool or database, developers have to create a custom “connector”—which is a pain, slow, and not scalable. 🤦\n", "\n", "#### Enter MCP. 🚀\n", "MCP is an **open standard** (think: everyone can use it, contribute, and build cool stuff). Instead of building a million one-off connectors, everyone uses **one protocol** to link AI models to different data sources. It’s kind of like how USB made it easy to connect all kinds of devices to your laptop—MCP does the same but for AI and your data.\n", "\n", "#### How does it work? \n", "- **Developers** can expose their data using MCP servers (these act like the “middleman”).\n", "- **AI apps** connect to these servers through MCP clients.\n", "- Anthropic is making it easy by sharing pre-made MCP servers for common platforms (Google Drive, Slack, GitHub, etc.), so you don’t have to start from scratch.\n", "\n", "#### Real world vibes:\n", "- **Block & Apollo** (big companies) are already using this to make their AI tools smarter and more connected.\n", "- **Dev tool companies** (Zed, Replit, Sourcegraph, etc.) integrate MCP so their AI agents can grab more context for coding tasks, making less “dumb” mistakes and writing better code.\n", "\n", "#### TL;DR:\n", "MCP is an open-source “universal language” for AI models to talk to any kind of data source, making your AI way more useful and less of a socially awkward robot. 🔌🤖\n", "\n", "---\n", "\n", "**Want to build with it?** You can with the Claude Desktop app, or dive into Anthropic’s open-source repos to make your own connectors! It’s collaborative, open to everyone, and built for a more connected future." 
], "text/plain": [ "<IPython.core.display.Markdown object>" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "from IPython.display import display, Markdown\n", "# Splitting the selected tool to obtain the server and the tool's name\n", "server_name, tool_name = selected_tool.name.split(SEPARATOR)\n", "# Loading the arguments as a JSON (str -> JSON)\n", "loaded_args = json.loads(selected_tool.arguments)\n", "\n", "# Calling the tool on our MCP Server\n", "# NOTE: method is NOT the name of the tool, \n", "# it's the name of the route on the server\n", "tool_response = await mcp_pool.call(\n", " name=server_name,\n", " method=\"tools/call\",\n", " params={\n", " \"name\": tool_name,\n", " \"arguments\": loaded_args\n", " }\n", ")\n", "\n", "answer = tool_response[0]['result']['content'][0]['text']\n", "display(Markdown(answer))" ] }, { "cell_type": "markdown", "id": "4580ff6b", "metadata": {}, "source": [ "# Step 6: Close the Pool When Finished" ] }, { "cell_type": "code", "execution_count": 6, "id": "4538c735", "metadata": {}, "outputs": [], "source": [ "await mcp_pool.close_all()" ] } ], "metadata": { "kernelspec": { "display_name": ".venv", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.3" } }, "nbformat": 4, "nbformat_minor": 5 }
