@arizeai/phoenix-mcp

Official
by Arize-ai
crewai_tracing_tutorial.ipynb (8.29 kB)
<center>
    <p style="text-align:center">
        <img alt="phoenix logo" src="https://raw.githubusercontent.com/Arize-ai/phoenix-assets/9e6101d95936f4bd4d390efc9ce646dc6937fb2d/images/socal/github-large-banner-phoenix.jpg" width="1000"/>
        <br>
        <a href="https://arize.com/docs/phoenix/">Docs</a>
        |
        <a href="https://github.com/Arize-ai/phoenix">GitHub</a>
        |
        <a href="https://arize-ai.slack.com/join/shared_invite/zt-2w57bhem8-hq24MB6u7yE_ZF_ilOYSBw#/shared-invite/email">Community</a>
    </p>
</center>

<h1 align="center">Tracing CrewAI with Arize Phoenix</h1>

```python
!pip install -q arize-phoenix opentelemetry-sdk opentelemetry-exporter-otlp crewai crewai_tools openinference-instrumentation-crewai
```

# Launch Phoenix

```python
import phoenix as px

session = px.launch_app()
```

# Set Up OTel

```python
from phoenix.otel import register

tracer_provider = register(endpoint="http://localhost:6006/v1/traces")
```

# Instrument CrewAI

```python
from openinference.instrumentation.crewai import CrewAIInstrumentor

CrewAIInstrumentor().instrument(skip_dep_check=True, tracer_provider=tracer_provider)
```

CrewAI uses either LangChain or LiteLLM under the hood to call LLMs, depending on the package version. We recommend also instrumenting the corresponding library to get visibility into the individual LLM calls.

```python
!pip show crewai | grep Version
```

If you're using CrewAI < 0.63.0, instrument LangChain:

```python
# !pip install openinference-instrumentation-langchain

# from openinference.instrumentation.langchain import LangChainInstrumentor

# LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
```

If you're using CrewAI >= 0.63.0, instrument LiteLLM:

```python
!pip install openinference-instrumentation-litellm

from openinference.instrumentation.litellm import LiteLLMInstrumentor

LiteLLMInstrumentor().instrument(tracer_provider=tracer_provider)
```

# Keys

Note: for this notebook you'll need:

* an OpenAI API key (https://openai.com/)
* a Serper API key (https://serper.dev/)

```python
import getpass
import os

# Prompt the user for their API keys if they haven't been set
openai_key = os.getenv("OPENAI_API_KEY", "OPENAI_API_KEY")
serper_key = os.getenv("SERPER_API_KEY", "SERPER_API_KEY")

if openai_key == "OPENAI_API_KEY":
    openai_key = getpass.getpass("Please enter your OPENAI_API_KEY: ")

if serper_key == "SERPER_API_KEY":
    serper_key = getpass.getpass("Please enter your SERPER_API_KEY: ")

# Set the environment variables with the provided keys
os.environ["OPENAI_API_KEY"] = openai_key
os.environ["SERPER_API_KEY"] = serper_key
```

```python
from crewai import Agent, Crew, Process, Task
from crewai_tools import SerperDevTool

search_tool = SerperDevTool()

# Define your agents with roles and goals
researcher = Agent(
    role="Senior Research Analyst",
    goal="Uncover cutting-edge developments in AI and data science",
    backstory="""You work at a leading tech think tank.
    Your expertise lies in identifying emerging trends.
    You have a knack for dissecting complex data and presenting actionable insights.""",
    verbose=True,
    allow_delegation=False,
    # You can pass an optional llm attribute specifying which model you want to use, e.g.
    # llm=ChatOpenAI(model_name="gpt-3.5", temperature=0.7),
    tools=[search_tool],
)
writer = Agent(
    role="Tech Content Strategist",
    goal="Craft compelling content on tech advancements",
    backstory="""You are a renowned Content Strategist, known for your insightful and engaging articles.
    You transform complex concepts into compelling narratives.""",
    verbose=True,
    allow_delegation=True,
)

# Create tasks for your agents
task1 = Task(
    description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
    Identify key trends, breakthrough technologies, and potential industry impacts.""",
    expected_output="Full analysis report in bullet points",
    agent=researcher,
)

task2 = Task(
    description="""Using the insights provided, develop an engaging blog
    post that highlights the most significant AI advancements.
    Your post should be informative yet accessible, catering to a tech-savvy audience.
    Make it sound cool, avoid complex words so it doesn't sound like AI.""",
    expected_output="Full blog post of at least 4 paragraphs",
    agent=writer,
)

# Instantiate your crew with a sequential process
crew = Crew(
    agents=[researcher, writer], tasks=[task1, task2], verbose=True, process=Process.sequential
)

# Get your crew to work!
result = crew.kickoff()

print("######################")
print(result)
```

# View Results

This guide is packaged as a notebook for convenience; however, many people host Arize Phoenix in a [Docker](https://arize.com/docs/phoenix/deployment/docker) container or spin up an instance on their local machine.

```python
print(f"View traces at {session.url}")
```

# ⭐⭐⭐ If you like this guide, please give [Arize Phoenix](https://github.com/Arize-ai/phoenix) a star ⭐⭐⭐

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/Arize-ai/phoenix'
```
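The same call can be made from Python with the standard library; a sketch using `urllib` (the actual network call is left commented out, and the response schema is not shown here because it isn't documented in this page):

```python
import json
import urllib.request

# Build a GET request against the documented directory endpoint.
url = "https://glama.ai/api/mcp/v1/servers/Arize-ai/phoenix"
request = urllib.request.Request(url, method="GET")

# Uncomment to perform the call (requires network access); the
# response body is JSON describing the server.
# with urllib.request.urlopen(request) as response:
#     print(json.load(response))

print(request.full_url)
```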

If you have feedback or need assistance with the MCP directory API, please join our Discord server.