Skip to main content
Glama

@arizeai/phoenix-mcp

Official
by Arize-ai
phoenix_overview.ipynb49.2 kB
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "<center>\n", " <p style=\"text-align:center\">\n", " <img alt=\"phoenix logo\" src=\"https://raw.githubusercontent.com/Arize-ai/phoenix-assets/9e6101d95936f4bd4d390efc9ce646dc6937fb2d/images/socal/github-large-banner-phoenix.jpg\" width=\"1000\"/>\n", " <br>\n", " <br>\n", " <a href=\"https://arize.com/docs/phoenix/\">Docs</a>\n", " |\n", " <a href=\"https://github.com/Arize-ai/phoenix\">GitHub</a>\n", " |\n", " <a href=\"https://arize-ai.slack.com/join/shared_invite/zt-2w57bhem8-hq24MB6u7yE_ZF_ilOYSBw#/shared-invite/email\">Community</a>\n", " </p>\n", "</center>\n", "<h1 align=\"center\">Arize Phoenix</h1>\n", "\n", "Arize Phoenix is a fully open-source AI observability platform. It's designed for experimentation, evaluation, and troubleshooting. It provides:\n", "\n", "- [**_Tracing_**](https://arize.com/docs/phoenix/tracing/llm-traces) - Trace your LLM application's runtime using OpenTelemetry-based instrumentation.\n", "- [**_Evaluation_**](https://arize.com/docs/phoenix/evaluation/llm-evals) - Leverage LLMs to benchmark your application's performance using response and retrieval evals.\n", "- [**_Datasets_**](https://arize.com/docs/phoenix/datasets-and-experiments/overview-datasets) - Create versioned datasets of examples for experimentation, evaluation, and fine-tuning.\n", "- [**_Experiments_**](https://arize.com/docs/phoenix/datasets-and-experiments/overview-datasets#experiments) - Track and evaluate changes to prompts, LLMs, and retrieval.\n", "- [**_Playground_**](https://arize.com/docs/phoenix/prompt-engineering/overview-prompts)- Optimize prompts, compare models, adjust parameters, and replay traced LLM calls.\n", "- [**_Prompt Management_**](https://arize.com/docs/phoenix/prompt-engineering/overview-prompts/prompt-management)- Manage and test prompt changes systematically using version control, tagging, and experimentation.\n", "\n", "Phoenix is vendor and language agnostic with out-of-the-box support for popular frameworks and AI providers.\n", "<center>\n", " <p style=\"text-align:center\">\n", " <img alt=\"First-class support for various frameworks and ai providers\" src=\"https://storage.googleapis.com/arize-phoenix-assets/assets/images/openinference_integrations.jpg\" width=\"1000\"/>\n", " </p>\n", "</center>\n", "\n", "Phoenix runs practically anywhere, including your local machine, a Jupyter notebook, a containerized deployment, or in the cloud.\n", "\n", "<center>\n", " <p style=\"text-align:center\">\n", " <img alt=\"First-class support for various frameworks and ai providers\" src=\"https://storage.googleapis.com/arize-phoenix-assets/assets/images/deployment_strategies.png\" width=\"1000\"/>\n", " </p>\n", "</center>" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The most important thing about choosing a good observability and evaluation tool is first: \"Does the tool help me build good and responsible AI systems?\" All modern platforms will and can do this. The things that make Phoenix somewhat unique are:\n", "\n", "- 🌎 It's fully open-source and its development is driven heavily by developer feedback\n", "- 🔐 It's privacy first, where the data is easily accessible inside your VPC or computer\n", "- 🕊️ It has no feature gates and strives to maximize value for its users\n", "- ⚙️ It's designed to be customizable to your needs through APIs and SDKs\n", "- ✌️ Built on open standards and protocols like OTEL\n", "- 💸 It's free - because its goal is to be a platform built by developers for developers" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# The AI Problem\n", "<p style=\"text-align:center\">\n", " <img alt=\"AI dev as scientific method\" src=\"https://storage.googleapis.com/arize-phoenix-assets/assets/gifs/20250524_1125_Forest%20Robots%20Interaction_simple_compose_01jw1n770bep1a829kw3cvvcsc.gif\" width=\"80%\" />\n", "</p>\n", "The hard truth: Building great AI native products requires a rigorous evaluation process.\n", "\n", "Talking to an LLM can feel like talking to a new species. We don't think this is an accident. In many ways we are AI scientists observing emergent behavior and the AI development cycle really is the scientific method in disguise. Just as scientists meticulously record experiments and take detailed notes to advance their understanding, AI systems require rigorous observation through tracing, annotations, and experimentation to reach their full potential. The goal of AI-native products is to build tools that empower humans, and it requires careful human judgment to align AI with human preferences and values.\n", "\n", " <p style=\"text-align:center\">\n", " <img src=\"https://storage.googleapis.com/arize-phoenix-assets/assets/images/scientific_method.png\" width=\"80%\">\n", "</p>" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 👷‍♀️ Let's build an App\n", "\n", "Let's build an App that uses common LLM prompting techniques. Specifically, let's try to get an LLM to produce structured output. Let's tackle a particularly messy problem - getting an LLM to produce SQL.\n", "\n", "We are going to build a simple agent that can answer movie trivia. While this can probably be performed by an LLM, we are going to force the LLM to look up the movie trivia from a SQL database. You can imagine this technique could be very useful if you wanted to expose an internal knowledge store to your agent.\n", "\n", "<p style=\"text-align: center\">\n", " <img src=\"https://storage.googleapis.com/arize-assets/phoenix/assets/images/txt_2_sql.png\" style=\"width: 80%\" />\n", "</p>" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 🎥 Tracing\n", "Just like scientists, every AI engineer needs a great camera. For this we will use OpenTelemetry. Telemetry produces traces of your LLM, Tools, and more.\n", "\n", "OpenTelemetry helps to capture the inputs and outputs to our LLM system. We want to trace enough parts of our system so that we can debug failure modes and perform error analysis.\n", "\n", "Let's roll camera." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install -U \"arize-phoenix-otel\" \"arize-phoenix-client>=1.20.0\" \"arize-phoenix-evals>=2.3.0\" openai 'httpx<0.28' duckdb datasets pyarrow \"pydantic>=2.0.0\" nest_asyncio \"openinference-instrumentation>=0.1.38\" openinference-instrumentation-openai --quiet" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This tutorial assumes you have a locally running Phoenix server. We can think of phoenix like a video recorder, observing every activity of your AI application.\n", "\n", "```shell\n", "phoenix serve\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from phoenix.otel import register\n", "\n", "tracer_provider = register(\n", " project_name=\"movie-app\",\n", " endpoint=\"http://localhost:6006/v1/traces\",\n", " verbose=False,\n", " auto_instrument=True, # Start recording traces via OpenAIInstrumentor\n", ")\n", "\n", "tracer = tracer_provider.get_tracer(__name__)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Lastly, let's make sure we have our OpenAI API key set up." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "from getpass import getpass\n", "\n", "if not os.getenv(\"OPENAI_API_KEY\"):\n", " os.environ[\"OPENAI_API_KEY\"] = getpass(\"🔑 Enter your OpenAI API key: \")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 🗄️ Download Movie Data\n", "\n", "We are going to use a movie dataset that contains recent titles and their ratings. We will use DuckDB as our SQL database so that we can run the queries directly in the notebook, but you can imagine that this could be a pre-existing SQL database with business-specific data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import duckdb\n", "from datasets import load_dataset\n", "\n", "data = load_dataset(\"wykonos/movies\")[\"train\"]\n", "\n", "conn = duckdb.connect(database=\":memory:\", read_only=False)\n", "conn.register(\"movies\", data.to_pandas())" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "records = conn.query(\"SELECT * FROM movies LIMIT 10\").to_df().to_dict(orient=\"records\")\n", "\n", "for record in records:\n", " print(record)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Convert Human Questions -> SQL (text-to-sql)\n", "\n", "Let's use an LLM to take human questions and to convert it into SQL so we can query the data above. Note that the prompt does a few specific things:\n", "\n", "- We need to tell the LLM what our database table looks like. Let's pass it the columns and the column types\n", "- We want the output to be pure SQL (select * from ...). LLMs tend to respond in markdown. Let's try to make sure it doesn't" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "\n", "import openai\n", "\n", "from phoenix.client import AsyncClient\n", "from phoenix.client.types import PromptVersion\n", "\n", "px_client = AsyncClient()\n", "client = openai.AsyncClient()\n", "\n", "columns = conn.query(\"DESCRIBE movies\").to_df().to_dict(orient=\"records\")\n", "\n", "# We will use GPT4o to start\n", "TASK_MODEL = \"gpt-4o\"\n", "CONFIG = {\"model\": TASK_MODEL}\n", "\n", "system_prompt = f\"\"\"\n", "You are a SQL expert who takes user queries and transforms them into a SQL query to be executed.\n", "\n", "You are given a table named `movies` with the following columns and types:\n", "\n", "{\",\".join(column[\"column_name\"] + \": \" + column[\"column_type\"] for column in columns)}\n", "\n", "Write a raw DuckDB SQL query corresponding to the user's question. Return only a SQL query\n", "with no formatting. The response SHOULD NOT include backticks or markdown formatting.\n", "\n", "[BEGIN EXAMPLES]\n", "************\n", "[BAD RESPONSES]\n", "***************\n", "- `SELECT * FROM movies`\n", "- sql```SELECT * FROM movies``\n", "- here is the sql: SELECT * FROM movies\n", "***************\n", "[GOOD RESPONSES]\n", "***************\n", "- SELECT * FROM movies\n", "***************\n", "[END EXAMPLES]\n", "\"\"\"\n", "\n", "prompt_template = await px_client.prompts.create(\n", " name=\"movie-text-to-sql\",\n", " version=PromptVersion(\n", " [\n", " {\n", " \"role\": \"system\",\n", " \"content\": system_prompt,\n", " },\n", " {\n", " \"role\": \"user\",\n", " \"content\": \"{{question}}\",\n", " },\n", " ],\n", " description=\"Initial prompt for text-to-sql\",\n", " model_name=TASK_MODEL,\n", " ),\n", ")\n", "\n", "\n", "@tracer.chain\n", "async def generate_sql(question):\n", " # Vendor agnostic - can directly use OpenAI\n", " prompt = prompt_template.format(variables={\"question\": question}, sdk=\"openai\")\n", " response = await client.chat.completions.create(\n", " **prompt,\n", " temperature=0,\n", " )\n", " return response.choices[0].message.content" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "query = await generate_sql(\"What is the top grossing movie?\")\n", "print(query)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Looks like the LLM is producing SQL. Let's try running the query against the database and see if we get the expected results. Just because the SQL query looks valid doesn't mean it's correct.\n", "\n", "Note: we again wrap this function in a decorator and denote that this is a tool that the LLM is using. While not explicitly a tool call, it's largely the same paradigm." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import math\n", "\n", "\n", "def sanitize_records(records):\n", " return {k: None if isinstance(v, float) and math.isnan(v) else v for k, v in records.items()}\n", "\n", "\n", "@tracer.tool\n", "def execute_sql(query):\n", " records = conn.query(query).fetchdf().to_dict(orient=\"records\")\n", " return list(map(sanitize_records, records))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "execute_sql(query)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's put the pieces together and see if we can create a movie agent that feels helpful. Here we are performing very simple RAG where the SQL query results are being passed to an LLM to synthesize a human-friendly answer." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "@tracer.chain\n", "async def query_db(question): # noqa: F811\n", " sql = await generate_sql(question)\n", " results = execute_sql(sql)\n", " return {\n", " \"sql\": sql,\n", " \"results\": results,\n", " }" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "synthesis_system_prompt = \"\"\"\n", "You are a helpful assistant that can answer questions about movies. You are charming, witty, honest, and interesting.\n", "\n", "Answer the question based on the SQL results. Do not rely on your internal knowledge.\n", "\n", "Do not use SQL or abbreviations for genres or languages. Use an informative, concise voice.\n", "Your response should be purely in natural language, do not include any SQL or other technical details.\n", "\n", "If the SQL results are empty, say you don't know.\n", "\"\"\"\n", "\n", "synthesis_user_prompt_template = \"\"\"\n", "Answer the question based on the SQL results.\n", "\n", "[BEGIN DATA]\n", "************\n", "[Question]: {{question}}\n", "************\n", "[SQL Results]: {{results}}\n", "************\n", "[END DATA]\n", "\n", "Answer:\n", "\"\"\"\n", "\n", "synthesis_prompt_template = await px_client.prompts.create(\n", " name=\"movie-synthesis\",\n", " version=PromptVersion(\n", " [\n", " {\n", " \"role\": \"system\",\n", " \"content\": synthesis_system_prompt,\n", " },\n", " {\n", " \"role\": \"user\",\n", " \"content\": synthesis_user_prompt_template,\n", " },\n", " ],\n", " description=\"Initial prompt for synthesis\",\n", " model_name=TASK_MODEL,\n", " ),\n", ")\n", "\n", "\n", "@tracer.agent\n", "async def movie_agent(question):\n", " sql_response = await query_db(question)\n", " prompt = synthesis_prompt_template.format(\n", " variables={\"question\": question, \"results\": str(sql_response[\"results\"])}, sdk=\"openai\"\n", " )\n", " answer = await client.chat.completions.create(**prompt)\n", " return answer.choices[0].message.content" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "await movie_agent(\"What is the top grossing movie?\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Looks like we have a working movie expert. Or do we? Let's double check. Let's run the agent over some examples we think the agent should be able to answer." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "questions = [\n", " \"Which Brad Pitt movie received the highest rating?\",\n", " \"What is the top grossing Marvel movie?\",\n", " \"What foreign-language fantasy movie was the most popular?\",\n", " \"what are the best sci-fi movies of 2017?\",\n", " \"What anime topped the box office in the 2010s?\",\n", " \"Recommend a romcom that stars Paul Rudd.\",\n", "]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's run the above queries against our agent and record it under a project as a \"baseline\" so we can see if we can improve it." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from openinference.instrumentation import dangerously_using_project\n", "\n", "with dangerously_using_project(project_name=\"movie-agent-baseline\"):\n", " for question in questions:\n", " try:\n", " answer = await movie_agent(question)\n", " print(\"Question: \", question)\n", " print(\"Answer: \", answer)\n", " print(\"\\n\")\n", " except Exception as e:\n", " print(e)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at the data and annotate it to see what the issues might be. Go to Settings > Annotations and add a correctness annotation config. Configure it as a categorical annotation with two categories, `correct` and `incorrect`. We can now quickly annotate the 7 traces (e.g. the agent spans) above as `correct` or `incorrect`. Once we've annotated some data we can bring it back into the notebook to analyze it." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from phoenix.client import AsyncClient\n", "from phoenix.client.types.spans import SpanQuery\n", "\n", "px_client = AsyncClient()\n", "query = SpanQuery().where(\"name == 'movie_agent'\")\n", "\n", "spans_df = await px_client.spans.get_spans_dataframe(\n", " project_identifier=\"movie-agent-baseline\", query=query\n", ")\n", "annotations_df = await px_client.spans.get_span_annotations_dataframe(\n", " spans_dataframe=spans_df, project_identifier=\"movie-agent-baseline\"\n", ")\n", "\n", "combined_df = annotations_df.join(spans_df, how=\"inner\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "examples_df = combined_df[\n", " [\"annotation_name\", \"result.label\", \"attributes.input.value\", \"attributes.output.value\"]\n", "].head()\n", "examples_df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's see if we can create an LLM judge that aligns with our human annotations." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "example_answers = \"\\n\\n\".join(\n", " [\n", " f\"Question: {example['attributes.input.value']}\\nAnswer: {example['attributes.output.value']}\\nLabel: {example['result.label']}\"\n", " for example in examples_df.to_dict(orient=\"records\")\n", " ]\n", ")\n", "eval_prompt = f\"\"\"\n", "You are an expert evaluator of question and answer pairs. You will be given a human question and an answer from an AI agent.\n", "Your job is to determine if the answer is \"correct\" or \"incorrect\" and to provide a clear reason why the label should be assigned.\n", "\n", "Here are some examples of correct and incorrect answers:\n", "<examples>\n", "{example_answers}\n", "</examples>\n", "\n", "<data>\n", "<question>\n", "{{attributes.input.value}}\n", "</question>\n", "<answer>\n", "{{attributes.output.value}}\n", "</answer>\n", "</data>\n", "\"\"\"\n", "\n", "print(eval_prompt)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "spans_df[[\"attributes.input.value\", \"attributes.output.value\"]].head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from phoenix.evals.evaluators import create_classifier\n", "from phoenix.evals.llm import LLM\n", "\n", "# Define a classification based evaluation\n", "llm_correctness = create_classifier(\n", " name=\"llm_correctness\",\n", " llm=LLM(model=\"gpt-4o\", provider=\"openai\"),\n", " prompt_template=eval_prompt,\n", " choices={\"correct\": 1, \"incorrect\": 0},\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from phoenix.evals import async_evaluate_dataframe\n", "\n", "evals_df = await async_evaluate_dataframe(dataframe=spans_df, evaluators=[llm_correctness])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "evals_df.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from phoenix.client import AsyncClient\n", "from phoenix.evals.utils import to_annotation_dataframe\n", "\n", "px_client = AsyncClient()\n", "await px_client.spans.log_span_annotations_dataframe(\n", " dataframe=to_annotation_dataframe(evals_df),\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 🧪Experimentation\n", "\n", "The velocity AI application development is bottlenecked by high quality evaluations because engineers are often faced with hard trade-offs: which prompt or LLM best balances performance, latency, and cost. Quality Evaluations are critical as they help answer these types of questions with greater confidence.\n", "\n", "Evaluation consists of three parts — data, task, and evals. We'll start with data.\n", "\n", "<p style=\"text-align: center\">\n", "<img src=\"https://storage.googleapis.com/arize-phoenix-assets/assets/images/experiment_analogy.png\" width=\"800\">\n", "</p>" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's store the movie questions we created above as a versioned dataset in phoenix." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "\n", "ds = await px_client.datasets.create_dataset(\n", " name=\"movie-train\",\n", " dataframe=pd.DataFrame([{\"question\": question} for question in questions]),\n", " input_keys=[\"question\"],\n", " output_keys=[],\n", ")\n", "\n", "# If you have already uploaded the dataset, you can fetch it using the following line\n", "# ds = await px_client.datasets.get_dataset(dataset=\"movie-train\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we'll define the task. The task is to generate SQL queries from natural language questions." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "@tracer.chain\n", "async def query_db(question): # noqa: F811\n", " query = await generate_sql(question)\n", " results = execute_sql(query)\n", " return {\n", " \"query\": query,\n", " \"results\": results,\n", " }" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "res = await query_db(\"What are the top Sci-Fi movies?\")\n", "for row in res[\"results\"]:\n", " print(row)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we'll define the evaluators. We'll use the following simple function that produces 1 if we got results and 0 if not." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Test if the query has results\n", "def has_results(output):\n", " results = output.get(\"results\")\n", " has_results = results is not None and len(results) > 0\n", " return 1.0 if has_results else 0.0" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's run the experiment. To run the experiment, we pass the dataset of examples, the task which runs the SQL generation, and the evals described above." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from phoenix.client.experiments import async_run_experiment\n", "\n", "\n", "# Define the task to run query_db on the input question\n", "async def task(input):\n", " return await query_db(input[\"question\"])\n", "\n", "\n", "experiment = await async_run_experiment(\n", " dataset=ds,\n", " task=task,\n", " evaluators=[has_results],\n", " experiment_metadata=CONFIG,\n", " experiment_name=\"baseline\",\n", " repetitions=3,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Ok. Not looking very good. It looks like only 4 out 6 of our questions are yielding results. Let's dig in to see how we can fix these.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Interpreting the results\n", "\n", "Now that we ran the initial evaluation, it looks like 2 of the results are empty due to getting the genre wrong.\n", "\n", "- `Sci-Fi` needs to be queried as `Science Fiction`\n", "- `Anime` needs to be queried as `Animation` + language specification.\n", "\n", "These two issues would probably be improved by showing a sample of the data to the model (e.g. few shot example) since the data will show the LLM what is queryable." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's try to improve the prompt with few-shot examples and see if we can get better results." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "samples = conn.query(\"SELECT * FROM movies LIMIT 5\").to_df().to_dict(orient=\"records\")\n", "\n", "example_row = \"\\n\".join(\n", " f\"{column['column_name']} | {column['column_type']} | {samples[0][column['column_name']]}\"\n", " for column in columns\n", ")\n", "\n", "column_header = \" | \".join(column[\"column_name\"] for column in columns)\n", "\n", "few_shot_examples = \"\\n\".join(\n", " \" | \".join(str(sample[column[\"column_name\"]]) for column in columns) for sample in samples\n", ")\n", "\n", "system_prompt = f\"\"\"\n", "You are a SQL expert who takes user queries and transforms them into a SQL query to be executed.\n", "\n", "You are given a table named `movies` with the following columns:\n", "\n", "[BEGIN EXAMPLES]\n", "************\n", "Column | Type | Example\n", "-------|------|--------\n", "{example_row}\n", "************\n", "[Example table rows]\n", "{column_header}\n", "{few_shot_examples}\n", "************\n", "[END EXAMPLES]\n", "\n", "Write a raw DuckDB SQL query corresponding to the user's question. Return only the raw SQL query\n", "with no formatting. The response SHOULD NOT include backticks or markdown formatting. Never query for more than 10 rows.\n", "\n", "BAD RESPONSES:\n", "- `SELECT * FROM movies`\n", "- sql```SELECT * FROM movies``\n", "- here is the sql: SELECT * FROM movies\n", "\n", "GOOD RESPONSES:\n", "- SELECT * FROM movies\n", "\"\"\"\n", "\n", "prompt_template = await px_client.prompts.create(\n", " name=\"movie-text-to-sql\",\n", " version=PromptVersion(\n", " [\n", " {\n", " \"role\": \"system\",\n", " \"content\": system_prompt,\n", " },\n", " {\n", " \"role\": \"user\",\n", " \"content\": \"{{question}}\",\n", " },\n", " ],\n", " description=\"Add few shot examples to the prompt\",\n", " model_name=TASK_MODEL,\n", " ),\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(await generate_sql(\"What is the best Sci-Fi movies of 2017?\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Looking much better! Since the prompt shows that \"Sci-Fi\" is represented as \"Science Fiction\", the LLM is able to synthesize the right where clause.\n", "\n", "Pro-tip: You can try out the prompt in the playground even before the next step!\n", "\n", "Let's run the experiment again." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "experiment = await async_run_experiment(\n", " dataset=ds,\n", " experiment_name=\"with examples\",\n", " task=task,\n", " evaluators=[has_results],\n", " experiment_metadata=CONFIG,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Looks much improved. It looks like we're getting data our of our system. But just because we are getting info out of the DB doesn't mean these records are useful. Let's construct an LLM judge to see if the results are relevant to the question." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from phoenix.client.experiments import async_evaluate_experiment\n", "from phoenix.evals import create_classifier\n", "\n", "judge_prompt_template = \"\"\"\n", "You are a judge that determines if a given question can be answered with the SQL results.\n", "\n", "Provide the label `useful` if the SQL results contain records that help answer the question.\n", "Provide the label `useless` if the SQL results do not contain records that help answer the question.\n", "\n", "<data>\n", "<question>\n", "{input.question}\n", "</question>\n", "<results>\n", "{output.results}\n", "<results>\n", "</data>\n", "\"\"\"\n", "\n", "usefulness = create_classifier(\n", " name=\"usefulness\",\n", " llm=LLM(model=\"gpt-4o\", provider=\"openai\"),\n", " prompt_template=judge_prompt_template,\n", " choices={\"useful\": 1, \"useless\": 0},\n", ")\n", "\n", "await async_evaluate_experiment(experiment=experiment, evaluators=[usefulness])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The LLM judge's scoring closely matches our manual review, demonstrating its effectiveness as an automated evaluation method. This approach is particularly valuable when traditional rule-based scoring functions are difficult to implement.\n", "\n", "The LLM judge also shows an advantage in nuanced understanding - for example, it correctly identifies that 'Anime' and 'Animation' are distinct genres, a subtlety our code-based evaluators missed. This highlights why developing custom LLM judges tailored to your specific task requirements is crucial for accurate evaluation." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "We now have a simple text-to-sql pipeline that can be used to generate SQL queries from natural language questions. Since Phoenix has been tracing the entire pipeline, we can now use the Phoenix UI to convert the spans that generated successful queries into examples to use in **Golden Dataset** for regression testing as well." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Bringing it all together\n", "\n", "Now that we've seen the experiment improve our outcome, let's put it to a test given the evals we built out earlier." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from openinference.instrumentation import dangerously_using_project\n", "\n", "\n", "@tracer.agent\n", "async def movie_agent_improved(question):\n", " sql_response = await query_db(question)\n", " prompt = synthesis_prompt_template.format(\n", " variables={\"question\": question, \"results\": str(sql_response[\"results\"])}, sdk=\"openai\"\n", " )\n", " answer = await client.chat.completions.create(**prompt)\n", " return answer.choices[0].message.content\n", "\n", "\n", "with dangerously_using_project(project_name=\"movie-agent-improved\"):\n", " for question in questions:\n", " try:\n", " answer = await movie_agent_improved(question)\n", " print(\"Question: \", question)\n", " print(\"Answer: \", answer)\n", " print(\"\\n\")\n", " except Exception as e:\n", " print(e)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from phoenix.client import Client\n", "from phoenix.client.types.spans import SpanQuery\n", "\n", "phoenix_client = Client()\n", "query = SpanQuery().where(\"name == 'movie_agent_improved'\")\n", "\n", "spans_df = phoenix_client.spans.get_spans_dataframe(\n", " project_identifier=\"movie-agent-improved\", query=query\n", ")\n", "\n", "spans_df.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from phoenix.evals import async_evaluate_dataframe\n", "\n", "evals_df = await async_evaluate_dataframe(dataframe=spans_df, evaluators=[llm_correctness])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "evals_df.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from phoenix.evals.utils import to_annotation_dataframe\n", "\n", "px_client = AsyncClient()\n", "\n", "await px_client.spans.log_span_annotations_dataframe(\n", " dataframe=to_annotation_dataframe(evals_df),\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our improved agent now is able to answer all 6 questions but our `llm_correctness` eval was able to spot that the agent responses are not very good:\n", "\n", "- querying for `Anime` and responding with `Frozen II` misses the mark on anime being a japanese form of animation\n", "- the LLM thinks \"top\" or \"best\" means rating but doesn't take into account the number of votes.\n", "\n", "Our `movie-text-to-sql` prompt still needs more instructions if we want to improve its performance. But we're on the right track and can find more ways to guide the LLM." ] }, { "cell_type": "markdown", "metadata": { "vscode": { "languageId": "raw" } }, "source": [ "This tutorial demonstrated the core principles of building **evals that work** for AI applications. Here are the key concepts you should take away:\n", "\n", "1. **Build & Trace**: Instrument your AI application with tracing from day one\n", "2. **Annotate**: Use human judgment to label traces with simple heuristics like correct/incorrect\n", "3. **Create Evaluators**: Build both simple programmatic evals as well as LLM judges\n", "4. **Experiment**: Run systematic experiments to compare different approaches\n", "5. **Iterate**: Use evaluation results to improve prompts, models, or architecture" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "# Bibliography\n", "\n", "<cite id=\"yan2025\">Yan, Z. (2025). An LLM-as-Judge Won't Save The Product—Fixing Your Process Will. *eugeneyan.com*. https://eugeneyan.com/writing/eval-process/</cite>" ] } ], "metadata": { "kernelspec": { "display_name": ".venv", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.13.3" } }, "nbformat": 4, "nbformat_minor": 2 }

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Arize-ai/phoenix'

If you have feedback or need assistance with the MCP directory API, please join our Discord server