{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "<center>\n", " <p style=\"text-align:center\">\n", " <img alt=\"phoenix logo\" src=\"https://storage.googleapis.com/arize-phoenix-assets/assets/phoenix-logo-light.svg\" width=\"200\"/>\n", " <br>\n", " <a href=\"https://arize.com/docs/phoenix/\">Docs</a>\n", " |\n", " <a href=\"https://github.com/Arize-ai/phoenix\">GitHub</a>\n", " |\n", " <a href=\"https://arize-ai.slack.com/join/shared_invite/zt-2w57bhem8-hq24MB6u7yE_ZF_ilOYSBw#/shared-invite/email\">Community</a>\n", " </p>\n", "</center>" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Instrumenting AWS Bedrock client with OpenInference and Phoenix\n", "\n", "In this tutorial we will trace model calls to AWS Bedrock using OpenInference. The OpenInference Bedrock tracer instruments the Python `boto3` library, so all `invoke_model` calls will automatically generate traces that can be sent to Phoenix.\n", "\n", "ℹ️ This notebook requires a valid AWS configuration and access to AWS Bedrock and the `claude-v2` model from Anthropic." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. Install dependencies and set up OpenTelemetry tracer\n", "\n", "First install dependencies" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install arize-phoenix boto3 openinference-instrumentation-bedrock" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Import libraries" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import json\n", "from urllib.parse import urljoin\n", "\n", "import boto3\n", "from openinference.instrumentation.bedrock import BedrockInstrumentor\n", "from opentelemetry.sdk.trace.export import ConsoleSpanExporter\n", "\n", "import phoenix as px\n", "from phoenix.otel import SimpleSpanProcessor, register" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Start a Pheonix server to collect traces. Be sure to view Phoenix in your browser to watch traces show up in Phoenix as they are collected." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "px.launch_app().view()\n", "session_url = px.active_session().url" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we're configuring the OpenTelemetry tracer by adding two SpanProcessors. The first SpanProcessor will simply print all traces received from OpenInference instrumentation to the console. The second will export traces to Phoenix so they can be collected and viewed." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "phoenix_otlp_endpoint = urljoin(session_url, \"v1/traces\")\n", "tracer_provider = register()\n", "tracer_provider.add_span_processor(SimpleSpanProcessor(span_exporter=ConsoleSpanExporter()))\n", "tracer_provider.add_span_processor(SimpleSpanProcessor(endpoint=phoenix_otlp_endpoint))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. Instrumenting Bedrock clients\n", "\n", "Now, let's create a `boto3` session. This initiates a configured environment for interacting with AWS services. If you haven't yet configured `boto3` to use your credentials, please refer to the [official documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html). Or, if you have the AWS CLI, run `aws configure` from your terminal." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "session = boto3.session.Session()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Clients created using this session configuration are currently uninstrumented. We'll make one for comparison." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "uninstrumented_client = session.client(\"bedrock-runtime\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we instrument Bedrock with our OpenInference instrumentor. All Bedrock clients created after this call will automatically produce traces when calling `invoke_model`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "BedrockInstrumentor().instrument(skip_dep_check=True)\n", "instrumented_client = session.client(\"bedrock-runtime\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. Calling the LLM and viewing OpenInference traces\n", "\n", "Calling `invoke_model` using the `uninstrumented_client` will produce no traces, but will show the output from the LLM." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prompt = b'{\"prompt\": \"Human: Hello there, how are you? Assistant:\", \"max_tokens_to_sample\": 1024}'\n", "response = uninstrumented_client.invoke_model(modelId=\"anthropic.claude-v2\", body=prompt)\n", "response_body = json.loads(response.get(\"body\").read())\n", "print(response_body[\"completion\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "LLM calls using the `instrumented_client` will print traces to the console! By configuring the `SpanProcessor` to export to a different OpenTelemetry collector, your OpenInference spans can be collected and analyzed to better understand the behavior of your LLM application." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = instrumented_client.invoke_model(modelId=\"anthropic.claude-v2\", body=prompt)\n", "response_body = json.loads(response.get(\"body\").read())\n", "print(response_body[\"completion\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "More information about our instrumentation integrations, OpenInerence can be found in our [documentation](https://arize.com/docs/phoenix/telemetry/instrumentation)" ] } ], "metadata": { "language_info": { "name": "python" } }, "nbformat": 4, "nbformat_minor": 2 }
