Skip to main content
Glama

@arizeai/phoenix-mcp

Official
by Arize-ai
crewai_ parallelization_tutorial.ipynb7.05 kB
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "<center>\n", " <p style=\"text-align:center\">\n", " <img alt=\"phoenix logo\" src=\"https://raw.githubusercontent.com/Arize-ai/phoenix-assets/9e6101d95936f4bd4d390efc9ce646dc6937fb2d/images/socal/github-large-banner-phoenix.jpg\" width=\"1000\"/>\n", " <br>\n", " <br>\n", " <a href=\"https://arize.com/docs/phoenix/\">Docs</a>\n", " |\n", " <a href=\"https://github.com/Arize-ai/phoenix\">GitHub</a>\n", " |\n", " <a href=\"https://join.slack.com/t/arize-ai/shared_invite/zt-1px8dcmlf-fmThhDFD_V_48oU7ALan4Q\">Community</a>\n", " </p>\n", "</center>\n", "<h1 align=\"center\">Tracing CrewAI with Arize Phoenix - Parallelization Workflow</h1>" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install -q arize-phoenix opentelemetry-sdk opentelemetry-exporter-otlp crewai crewai_tools openinference-instrumentation-crewai" ] }, { "cell_type": "markdown", "metadata": { "id": "5-gPdVmIndw9" }, "source": [ "# Set up Keys and Dependencies" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note: For this colab you'll need:\n", "\n", "* OpenAI API key (https://openai.com/)\n", "* Serper API key (https://serper.dev/)\n", "* Phoenix API key (https://app.phoenix.arize.com/)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "from getpass import getpass\n", "\n", "# Prompt the user for their API keys if they haven't been set\n", "openai_key = os.getenv(\"OPENAI_API_KEY\", \"OPENAI_API_KEY\")\n", "serper_key = os.getenv(\"SERPER_API_KEY\", \"SERPER_API_KEY\")\n", "\n", "if openai_key == \"OPENAI_API_KEY\":\n", " openai_key = getpass(\"Please enter your OPENAI_API_KEY: \")\n", "\n", "if serper_key == \"SERPER_API_KEY\":\n", " serper_key = getpass(\"Please enter your SERPER_API_KEY: \")\n", "\n", "# Set the environment variables with the provided keys\n", "os.environ[\"OPENAI_API_KEY\"] = openai_key\n", "os.environ[\"SERPER_API_KEY\"] = serper_key" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if \"PHOENIX_API_KEY\" not in os.environ:\n", " os.environ[\"PHOENIX_API_KEY\"] = getpass(\"🔑 Enter your Phoenix API key: \")\n", "\n", "if \"PHOENIX_COLLECTOR_ENDPOINT\" not in os.environ:\n", " os.environ[\"PHOENIX_COLLECTOR_ENDPOINT\"] = getpass(\"🔑 Enter your Phoenix Collector Endpoint\")" ] }, { "cell_type": "markdown", "metadata": { "id": "r9X87mdGnpbc" }, "source": [ "## Configure Tracing" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from phoenix.otel import register\n", "\n", "tracer_provider = register(project_name=\"crewai-agents\")" ] }, { "cell_type": "markdown", "metadata": { "id": "vYT-EU56ni94" }, "source": [ "# Instrument CrewAI" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from openinference.instrumentation.crewai import CrewAIInstrumentor\n", "\n", "CrewAIInstrumentor().instrument(skip_dep_check=True, tracer_provider=tracer_provider)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Define your Agents" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from crewai import Agent, Crew, Task\n", "from crewai.process import Process\n", "\n", "researcher_1 = Agent(\n", " role=\"LLM Researcher A\",\n", " goal=\"Research trend #1 in AI and summarize it clearly.\",\n", " backstory=\"Specializes in model safety and governance.\",\n", " verbose=True,\n", ")\n", "\n", "researcher_2 = Agent(\n", " role=\"LLM Researcher B\",\n", " goal=\"Research trend #2 in AI and summarize it clearly.\",\n", " backstory=\"Expert in multimodal and frontier models.\",\n", " verbose=True,\n", ")\n", "\n", "researcher_3 = Agent(\n", " role=\"LLM Researcher C\",\n", " goal=\"Research trend #3 in AI and summarize it clearly.\",\n", " backstory=\"Focused on AI policy and alignment.\",\n", " verbose=True,\n", ")\n", "\n", "aggregator = Agent(\n", " role=\"Aggregator\",\n", " goal=\"Combine and synthesize all research into a single summary report.\",\n", " backstory=\"Information architect skilled at summarizing multiple sources.\",\n", " verbose=True,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Define your Tasks" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Define parallel research tasks\n", "task1 = Task(\n", " description=\"Summarize a major trend in AI safety and model alignment.\",\n", " expected_output=\"Concise summary of trend #1\",\n", " agent=researcher_1,\n", ")\n", "\n", "task2 = Task(\n", " description=\"Summarize a key innovation in multimodal or frontier AI systems.\",\n", " expected_output=\"Concise summary of trend #2\",\n", " agent=researcher_2,\n", ")\n", "\n", "task3 = Task(\n", " description=\"Summarize a current topic in AI policy, regulation, or social impact.\",\n", " expected_output=\"Concise summary of trend #3\",\n", " agent=researcher_3,\n", ")\n", "\n", "# Aggregation task\n", "aggregation_task = Task(\n", " description=\"Combine the three AI trend summaries into a cohesive single report.\",\n", " expected_output=\"A synthesized report capturing all three trends.\",\n", " agent=aggregator,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create Crew" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "crew = Crew(\n", " agents=[researcher_1, researcher_2, researcher_3, aggregator],\n", " tasks=[task1, task2, task3, aggregation_task],\n", " process=Process.sequential,\n", " verbose=True,\n", ")\n", "\n", "result = crew.kickoff()\n", "print(result)" ] }, { "cell_type": "markdown", "metadata": { "id": "fH0uVMgxpLql" }, "source": [ "### Check your Phoenix project to view the traces and spans from your runs." ] } ], "metadata": { "language_info": { "name": "python" } }, "nbformat": 4, "nbformat_minor": 0 }

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Arize-ai/phoenix'

If you have feedback or need assistance with the MCP directory API, please join our Discord server