
MCP Template Server

by BJW101102
mcp_usage.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "4dc81697", "metadata": {}, "source": [ "# Step 1: Activate Virtual Environment & Run the MCP Server\n", "```Run Command (From project-root directory): uv run .\\mcp_server\\main.py```" ] }, { "cell_type": "markdown", "id": "70e7d093", "metadata": { "vscode": { "languageId": "html" } }, "source": [ "# Step 2: Connect Server to MCP Client & Grab Tools" ] }, { "cell_type": "code", "execution_count": null, "id": "cf62180f", "metadata": {}, "outputs": [], "source": [ "import json\n", "from mcp_client import McpClientPool, SEPARATOR # The separator is used for avoiding duplicate tool names. \n", "\n", "# NOTE: Since each server requires it's own MCP client, \n", "# we can manage multiple clients with an MCP client pool\n", "# that holds all the connections. This all|ws us to \n", "# safely open and close all clients.\n", "mcp_pool = McpClientPool()\n", "\n", "# Setting up the server information \n", "# NOTE: Local Servers!\n", "# - Local servers will always have the /mcp\n", "# - By default, all local servers will be HTTP not HTTPS\n", "server_id = \"demo\"\n", "server_url = \"http://127.0.0.1:4000/mcp\" \n", "\n", "# Connecting to the server and saving the tools\n", "await mcp_pool.add_client(name=server_id, base_url=server_url)\n", "\n", "# NOTE: All the tools available also exist in the mcp_pool's instance field all_tools (mcp.all_tools).\n", "# However, for demo purposes we will explicitly get the tools\n", "tools = await mcp_pool.list_tools(server_id)" ] }, { "cell_type": "code", "execution_count": null, "id": "d8cbd24b", "metadata": {}, "outputs": [], "source": [ "for i, t in enumerate(tools):\n", " print(f\"Tool {i+1}: \")\n", " print(json.dumps(t, indent=4), end=\"\\n\\n\")" ] }, { "cell_type": "markdown", "id": "3ea5ecb7", "metadata": {}, "source": [ "# Step 3: Ask a question" ] }, { "cell_type": "code", "execution_count": null, "id": "34761b48", "metadata": {}, "outputs": [], "source": [ "query = \"What is the MCP, Model Context Protocol?\"" ] }, { "cell_type": "markdown", "id": "c783240b", "metadata": {}, "source": [ "# Step 4: Pass Query and Tools to LLM & Select Best Tool" ] }, { "cell_type": "code", "execution_count": null, "id": "7d7a9fad", "metadata": {}, "outputs": [], "source": [ "import os\n", "from openai import AzureOpenAI\n", "from dotenv import load_dotenv\n", "load_dotenv()\n", "\n", "openai_client = AzureOpenAI(\n", " api_key=os.getenv('AZURE_API_KEY'),\n", " api_version=os.getenv('VERSION'),\n", " azure_endpoint=os.getenv('ENDPOINT'),\n", ")\n", "\n", "response = openai_client.chat.completions.create(\n", " model=os.getenv('MODEL'), \n", " messages=[\n", " # Since GPT's internal datasource is huge, sometimes the model will directly answer the query. \n", " # For demo purposes we force it to select a tool via the prompt.\n", " {\"role\": \"user\", \"content\": f\"You must select a tool from the provided tools given a query {query}\"}\n", " ],\n", " functions=tools,\n", " function_call=\"auto\" # \"auto\" lets the model decide to call the tool\n", ")\n", "\n", "selected_tool = response.choices[0].message.function_call\n", "print(f\"Selected Tool: {selected_tool.name}\")\n", "print(f\"Arguments: {selected_tool.arguments}\")" ] }, { "cell_type": "markdown", "id": "7aaa296e", "metadata": {}, "source": [ "# Step 5: Call The Tool!" 
] }, { "cell_type": "code", "execution_count": null, "id": "a825708e", "metadata": {}, "outputs": [], "source": [ "from IPython.display import display, Markdown\n", "# Splitting the selected tool to obtain the server and the tool's name\n", "server_name, tool_name = selected_tool.name.split(SEPARATOR)\n", "# Loading the arguments as a JSON (str -> JSON)\n", "loaded_args = json.loads(selected_tool.arguments)\n", "\n", "# Calling the tool on our MCP Server\n", "# NOTE: method is NOT the name of the tool, \n", "# it's the name of the route on the server\n", "tool_response = await mcp_pool.call(\n", " name=server_name,\n", " method=\"tools/call\",\n", " params={\n", " \"name\": tool_name,\n", " \"arguments\": loaded_args\n", " }\n", ")\n", "\n", "answer = tool_response[0]['result']['content'][0]['text']\n", "display(Markdown(answer))" ] }, { "cell_type": "markdown", "id": "4580ff6b", "metadata": {}, "source": [ "# Step 6: Close the Pool When Finished" ] }, { "cell_type": "code", "execution_count": null, "id": "4538c735", "metadata": {}, "outputs": [], "source": [ "await mcp_pool.close_all()" ] } ], "metadata": { "kernelspec": { "display_name": ".venv", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.3" } }, "nbformat": 4, "nbformat_minor": 5 }

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/BJW101102/MCP-Template'
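The same endpoint can be queried programmatically; here is a minimal sketch with Python's `requests` library, assuming the endpoint returns the server's metadata as JSON:

```python
import requests

# Fetch this server's directory entry from the Glama MCP API.
resp = requests.get("https://glama.ai/api/mcp/v1/servers/BJW101102/MCP-Template")
resp.raise_for_status()
print(resp.json())
```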

If you have feedback or need assistance with the MCP directory API, please join our Discord server.