{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "0cHj55AVXtqD"
},
"source": [
"<center>\n",
" <p style=\"text-align:center\">\n",
" <img alt=\"phoenix logo\" src=\"https://storage.googleapis.com/arize-phoenix-assets/assets/phoenix-logo-light.svg\" width=\"200\"/>\n",
" <br>\n",
" <a href=\"https://docs.arize.com/phoenix/\">Docs</a>\n",
" |\n",
" <a href=\"https://github.com/Arize-ai/phoenix\">GitHub</a>\n",
" |\n",
" <a href=\"https://join.slack.com/t/arize-ai/shared_invite/zt-1px8dcmlf-fmThhDFD_V_48oU7ALan4Q\">Community</a>\n",
" </p>\n",
"</center>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "mInKCpluV-Ca"
},
"source": [
"# **Building an Eval-Driven Development Pipeline with a Custom Annotations UI**\n",
"\n",
"In this tutorial, we will explore how to leverage a custom annotation UI for Phoenix using [Lovable](https://lovable.dev) to build experiments and evaluate your application.\n",
"\n",
"The purpose of a custom annotations UI is to make it easy for anyone to provide structured human feedback on traces, capturing essential details directly in Phoenix. Annotations are vital for collecting feedback during human review, enabling iterative improvement of your LLM applications.\n",
"\n",
"By establishing this feedback loop and an evaluation pipeline, you can effectively monitor and enhance your system’s performance.\n",
"\n",
"You will need a [Phoenix Cloud account](https://app.arize.com/auth/phoenix/signup) and an OpenAI API key to follow along."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "AhTKQwqig_Rj"
},
"source": [
"## Install Dependencies & Setup Keys"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install -qqq arize-phoenix arize-phoenix-otel openinference-instrumentation-openai"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install -qq openai nest_asyncio"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"import nest_asyncio\n",
"\n",
"nest_asyncio.apply()\n",
"\n",
"if not (phoenix_endpoint := os.getenv(\"PHOENIX_COLLECTOR_ENDPOINT\")):\n",
" phoenix_endpoint = getpass(\"🔑 Enter your Phoenix Collector Endpoint\")\n",
"os.environ[\"PHOENIX_COLLECTOR_ENDPOINT\"] = phoenix_endpoint\n",
"\n",
"\n",
"if not (phoenix_api_key := os.getenv(\"PHOENIX_API_KEY\")):\n",
" phoenix_api_key = getpass(\"🔑 Enter your Phoenix API key\")\n",
"os.environ[\"PHOENIX_API_KEY\"] = phoenix_api_key\n",
"\n",
"if not (openai_api_key := os.getenv(\"OPENAI_API_KEY\")):\n",
" openai_api_key = getpass(\"🔑 Enter your OpenAI API key: \")\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = openai_api_key"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "IumKWeM8yJHE"
},
"source": [
"# Configure Tracing"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from phoenix.otel import register\n",
"\n",
"# configure the Phoenix tracer\n",
"tracer_provider = register(\n",
" project_name=\"my-annotations-app\", # Default is 'default'\n",
" auto_instrument=True,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "OVAcix59DZ49"
},
"source": [
"# Generate traces to annotate"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "_qOfMNMoq7dD"
},
"source": [
"We will generate some agent traces and send them to Phoenix. We will then annotate these traces to add labels, scores, or explanations directly onto specific spans. Annotations allow us to enrich our traces with structured feedback, making it easier to filter, track, and improve LLM outputs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"questions = [\n",
" \"What is the capital of France?\",\n",
" \"Who wrote 'Pride and Prejudice'?\",\n",
" \"What is the boiling point of water in Celsius?\",\n",
" \"What is the largest planet in our solar system?\",\n",
" \"Who developed the theory of relativity?\",\n",
" \"What is the chemical symbol for gold?\",\n",
" \"In which year did the Apollo 11 mission land on the moon?\",\n",
" \"What language has the most native speakers worldwide?\",\n",
" \"Which continent has the most countries?\",\n",
" \"What is the square root of 144?\",\n",
" \"What is the largest country in the world by land area?\",\n",
" \"Why is the sky blue?\",\n",
" \"Who painted the Mona Lisa?\",\n",
" \"What is the smallest prime number?\",\n",
" \"What gas do plants absorb from the atmosphere?\",\n",
" \"Who was the first President of the United States?\",\n",
" \"What is the currency of Japan?\",\n",
" \"How many continents are there on Earth?\",\n",
" \"What is the tallest mountain in the world?\",\n",
" \"Who is the author of '1984'?\",\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "x70WtZ0CAv5z"
},
"source": [
"We deliberately generate some bad or nonsensical traces in the system prompt to demonstrate annotating and experimenting with different types of results.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from openai import OpenAI\n",
"\n",
"openai_client = OpenAI()\n",
"\n",
"# System prompt\n",
"system_prompt = \"\"\"\n",
"You are a question-answering assistant. For each user question, randomly choose an option: NONSENSE or RHYME. If you choose RHYME, answer correctly in the form of a rhyme.\n",
"\n",
"If it NONSENSE, do not answer the question at all, and instead respond with nonsense words and random numbers that do not rhyme, ignoring the user’s question completely.\n",
"When responding with NONSENSE, include at least five nonsense words and at least five random numbers between 0 and 9999 in your response.\n",
"\n",
"Do not explain your choice.\n",
"\"\"\"\n",
"\n",
"# Run through the dataset and collect spans\n",
"for question in questions:\n",
" response = openai_client.chat.completions.create(\n",
" model=\"gpt-4o\",\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": question},\n",
" ],\n",
" )"
]
},
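{
"cell_type": "markdown",
"metadata": {},
"source": [
"The loop above only sends traces to Phoenix; nothing is printed locally. If you want to spot-check what the model produced before annotating, the `response` variable still holds the last completion from the loop. A minimal, optional check:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Spot-check the last completion generated by the loop above.\n",
"# `response` still holds the final chat completion from the previous cell.\n",
"print(response.choices[0].message.content)"
]
},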
{
"cell_type": "markdown",
"metadata": {
"id": "tf4DJji3DcA0"
},
"source": [
"# Launch Custom Annotation UI"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "R49hfrtf-PZK"
},
"source": [
"Visit our implementation here: https://phoenix-trace-annotator.lovable.app/\n",
"\n",
"*Note: This annotation UI was built for Phoenix Cloud demo purposes and is not optimized for high-volume trace workflows.*\n",
"\n",
"How to annotate your traces in Lovable:\n",
"\n",
"1. Enter your Phoenix Cloud endpoint, API key, and project name. Optionally, also include an identifier to tie annotations to a specific user.\n",
"2. Click Refresh Traces.\n",
"3. Select the traces you want to annotate and click Send to Phoenix.\n",
"4. See your annotations appear instantly in Phoenix.\n",
"\n",
"Run the cell below to see it in action:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import HTML\n",
"\n",
"video_url = (\n",
" \"https://storage.googleapis.com/arize-phoenix-assets/assets/videos/custom-annotations-UI.mp4\"\n",
")\n",
"\n",
"HTML(f\"\"\"\n",
"<iframe width=\"1200\" height=\"700\" src=\"{video_url}\"\n",
"frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\"\n",
"allowfullscreen></iframe>\n",
"\"\"\")"
]
},
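{
"cell_type": "markdown",
"metadata": {},
"source": [
"Under the hood, the UI’s “Send to Phoenix” button posts annotations to Phoenix’s REST API. If you ever want to log an annotation programmatically instead of through the UI, you can call the same endpoint yourself.\n",
"\n",
"The cell below is a minimal, optional sketch rather than part of the tutorial flow: the `span_id` placeholder is hypothetical (copy a real span ID from Phoenix before running), `requests` is assumed to be available, and the payload shape and bearer-token header reflect the Phoenix span-annotations API at the time of writing, so double-check the Phoenix REST docs for your deployment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"import requests\n",
"\n",
"# Hypothetical span ID: replace with a real span ID copied from Phoenix before running.\n",
"span_id = \"YOUR_SPAN_ID\"\n",
"\n",
"# Assumes the collector endpoint doubles as the REST base URL and accepts a bearer token.\n",
"base_url = os.environ[\"PHOENIX_COLLECTOR_ENDPOINT\"].rstrip(\"/\")\n",
"headers = {\"Authorization\": f\"Bearer {os.environ['PHOENIX_API_KEY']}\"}\n",
"\n",
"payload = {\n",
"    \"data\": [\n",
"        {\n",
"            \"span_id\": span_id,\n",
"            \"name\": \"correctness\",  # the annotation name used throughout this tutorial\n",
"            \"annotator_kind\": \"HUMAN\",\n",
"            \"result\": {\"label\": \"correct\", \"score\": 1, \"explanation\": \"Rhymes and answers the question.\"},\n",
"        }\n",
"    ]\n",
"}\n",
"\n",
"resp = requests.post(f\"{base_url}/v1/span_annotations\", json=payload, headers=headers)\n",
"resp.raise_for_status()\n",
"print(resp.status_code)"
]
},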
{
"cell_type": "markdown",
"metadata": {
"id": "__OrAZ7m9VSV"
},
"source": [
"#Create a dataset from annotated spans"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1WDOyIF99tYK"
},
"source": [
"After you have annotated some spans, save them as a dataset in Phoenix."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"\n",
"import phoenix as px\n",
"from phoenix.client import Client\n",
"from phoenix.client.types import spans\n",
"\n",
"client = Client()\n",
"# replace \"correctness\" if you chose to annotate on different criteria\n",
"query = spans.SpanQuery().where(\"annotations['correctness']\")\n",
"spans_df = client.spans.get_spans_dataframe(query=query, project_identifier=\"my-annotations-app\")\n",
"dataset = px.Client().upload_dataset(\n",
" dataframe=spans_df,\n",
" dataset_name=\"annotated-rhymes\",\n",
" input_keys=[\"attributes.input.value\"],\n",
" output_keys=[\"attributes.llm.output_messages\"],\n",
")"
]
},
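{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before moving on, it can help to confirm that the span query actually returned your annotated spans. A quick, optional check on the dataframe pulled above (the column names match the input and output keys used in the upload):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Confirm the annotated spans were captured before building evals on top of them.\n",
"print(f\"Number of annotated spans: {len(spans_df)}\")\n",
"spans_df[[\"attributes.input.value\", \"attributes.llm.output_messages\"]].head()"
]
},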
{
"cell_type": "markdown",
"metadata": {
"id": "Yzmhgj5m8mLG"
},
"source": [
"# Build an Eval based on annotations"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "uQtIS9K-E5ES"
},
"source": [
"Next, you will construct an LLM-as-a-Judge template to evaluate your experiments. This evaluator will mark nonsensical outputs as incorrect. As you experiment (by improving your system prompt), you’ll see evaluation results improve. Once your annotated trace dataset shows consistent improvement, you can confidently apply these changes to your production system."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"RHYME_PROMPT_TEMPLATE = \"\"\"\n",
"Examine the assistant’s responses in the conversation and determine whether the assistant used rhyme in any of its responses.\n",
"\n",
"Rhyme means that the assistant’s response contains clear end rhymes within or across lines. This should be applicable to the entire response.\n",
"There should be no irrelevant phrases or numbers in the response.\n",
"Determine whether the rhyme is high quality or forced in addition to checking for the presence of rhyme.\n",
"This is the criteria for determining a well-written rhyme.\n",
"\n",
"If none of the assistant's responses contain rhyme, output that the assistant did not rhyme.\n",
"\n",
"[BEGIN DATA]\n",
" ************\n",
" [Question]: {question}\n",
" ************\n",
" [Response]: {answer}\n",
" [END DATA]\n",
"\n",
"\n",
"Your response must be a single word, either \"correct\" or \"incorrect\", and should not contain any text or characters aside from that word.\n",
"\n",
"\"correct\" means the response contained a well written rhyme.\n",
"\n",
"\"incorrect\" means the response did not contain a rhyme.\n",
"\"\"\""
]
},
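{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before wiring the judge into an experiment, you can sanity-check the template on a single example. The question/answer pair below is made up purely for illustration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"\n",
"from phoenix.evals import OpenAIModel, llm_classify\n",
"\n",
"# A made-up question/answer pair to exercise the judge template.\n",
"sample = pd.DataFrame(\n",
"    [\n",
"        {\n",
"            \"question\": \"What is the capital of France?\",\n",
"            \"answer\": \"The capital of France, I say with flair, is the city of Paris, beyond compare.\",\n",
"        }\n",
"    ]\n",
")\n",
"\n",
"sanity_check = llm_classify(\n",
"    dataframe=sample,\n",
"    template=RHYME_PROMPT_TEMPLATE,\n",
"    model=OpenAIModel(model=\"gpt-4o\"),\n",
"    rails=[\"correct\", \"incorrect\"],\n",
"    provide_explanation=True,\n",
")\n",
"sanity_check[[\"label\", \"explanation\"]]"
]
},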
{
"cell_type": "markdown",
"metadata": {
"id": "zQgjBwoeR1xE"
},
"source": [
"# Experimentation Example #1 ~ Changing Models"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"from phoenix.evals import OpenAIModel, llm_classify\n",
"from phoenix.experiments import run_experiment\n",
"from phoenix.experiments.types import Example\n",
"\n",
"system_prompt = \"\"\"\n",
"You are a question-answering assistant. For each user question, randomly choose an option: NONSENSE or RHYME. You can favor NONSENSE. If you choose RHYME, answer correctly in the form of a rhyme.\n",
"\n",
"If it NONSENSE, do not answer the question at all, and instead respond with nonsense words and random numbers that do not rhyme, ignoring the user’s question completely.\n",
"When responding with NONSENSE, include at least five nonsense words and at least five random numbers between 0 and 9999 in your response.\n",
"\n",
"Do not explain your choice or indicate if you are using RHYME or NONSENSE in the response.\n",
"\"\"\"\n",
"\n",
"\n",
"def updated_task(example: Example) -> str:\n",
" raw_input_value = example.input[\"attributes.input.value\"]\n",
" data = json.loads(raw_input_value)\n",
" question = data[\"messages\"][1][\"content\"]\n",
" response = openai_client.chat.completions.create(\n",
" model=\"gpt-4.1\", # swap model. other examples: gpt-4o-mini, gpt-4.1-mini, gpt-4, etc.\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": question},\n",
" ],\n",
" )\n",
" return response.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def evaluate_response(input, output):\n",
" raw_input_value = input[\"attributes.input.value\"]\n",
" data = json.loads(raw_input_value)\n",
" question = data[\"messages\"][1][\"content\"]\n",
" response_classifications = llm_classify(\n",
" dataframe=pd.DataFrame([{\"question\": question, \"answer\": output}]),\n",
" template=RHYME_PROMPT_TEMPLATE,\n",
" model=OpenAIModel(model=\"gpt-4-turbo\"),\n",
" rails=[\"correct\", \"incorrect\"],\n",
" provide_explanation=True,\n",
" )\n",
" score = response_classifications.apply(lambda x: 0 if x[\"label\"] == \"incorrect\" else 1, axis=1)\n",
" return score"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gc3svWyJFXIT"
},
"source": [
"After running the cell below, you will see your experiment results in the “Experiments” tab. Don’t expect high evaluator performance yet, as we haven’t fixed the root issue—the system prompt. The purpose here is to demonstrate an example of running an experiment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"experiment = run_experiment(\n",
" dataset,\n",
" task=updated_task,\n",
" evaluators=[evaluate_response],\n",
" experiment_name=\"updated model\",\n",
" experiment_description=\"updated model gpt-4.1\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "hWNw3gJoSo5S"
},
"source": [
"# Experiment #2 ~ Improving the System Prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"system_prompt = \"\"\"\n",
"You are a question-answering assistant. For each user question, answer correctly in the form of a rhyme.\n",
"\"\"\"\n",
"\n",
"\n",
"def updated_task(example: Example) -> str:\n",
" raw_input_value = example.input[\"attributes.input.value\"]\n",
" data = json.loads(raw_input_value)\n",
" question = data[\"messages\"][1][\"content\"]\n",
" response = openai_client.chat.completions.create(\n",
" model=\"gpt-4o\",\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": question},\n",
" ],\n",
" )\n",
" return response.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def evaluate_response(input, output):\n",
" raw_input_value = input[\"attributes.input.value\"]\n",
" data = json.loads(raw_input_value)\n",
" question = data[\"messages\"][1][\"content\"]\n",
" response_classifications = llm_classify(\n",
" dataframe=pd.DataFrame([{\"question\": question, \"answer\": output}]),\n",
" template=RHYME_PROMPT_TEMPLATE,\n",
" model=OpenAIModel(model=\"gpt-4.1\"),\n",
" rails=[\"correct\", \"incorrect\"],\n",
" provide_explanation=True,\n",
" )\n",
" score = response_classifications.apply(lambda x: 0 if x[\"label\"] == \"incorrect\" else 1, axis=1)\n",
" return score"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "3vV45EUkF0zg"
},
"source": [
"Here, we expect to see improvements in our experiment. The evaluator should flag significantly fewer nonsensical answers as you have refined your system prompt."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"experiment = run_experiment(\n",
" dataset,\n",
" task=updated_task,\n",
" evaluators=[evaluate_response],\n",
" experiment_name=\"updated system prompt\",\n",
" experiment_description=\"updated system prompt\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "e-MpSyVeaqoY"
},
"source": [
"#Apply updates to your application"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "wOZItvoLGaB3"
},
"source": [
"Now that we’ve completed an experimentation cycle and confirmed our changes on the annotated traces, we can update the application and test the results on the broader dataset. This helps ensure that improvements made during experimentation translate effectively to real-world usage and that your system performs reliably at scale."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"questions = [\n",
" \"What is the capital of France?\",\n",
" \"Who wrote 'Pride and Prejudice'?\",\n",
" \"What is the boiling point of water in Celsius?\",\n",
" \"What is the largest planet in our solar system?\",\n",
" \"Who developed the theory of relativity?\",\n",
" \"What is the chemical symbol for gold?\",\n",
" \"In which year did the Apollo 11 mission land on the moon?\",\n",
" \"What language has the most native speakers worldwide?\",\n",
" \"Which continent has the most countries?\",\n",
" \"What is the square root of 144?\",\n",
" \"What is the largest country in the world by land area?\",\n",
" \"Why is the sky blue?\",\n",
" \"Who painted the Mona Lisa?\",\n",
" \"What is the smallest prime number?\",\n",
" \"What gas do plants absorb from the atmosphere?\",\n",
" \"Who was the first President of the United States?\",\n",
" \"What is the currency of Japan?\",\n",
" \"How many continents are there on Earth?\",\n",
" \"What is the tallest mountain in the world?\",\n",
" \"Who is the author of '1984'?\",\n",
"]\n",
"questions_df = pd.DataFrame(questions, columns=[\"Questions\"])\n",
"dataset = px.Client().upload_dataset(\n",
" dataframe=questions_df,\n",
" input_keys=[\"Questions\"],\n",
" dataset_name=\"all-trivia-questions\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"system_prompt = \"\"\"\n",
"You are a question-answering assistant. For each user question, answer correctly in the form of a rhyme.\n",
"\"\"\"\n",
"\n",
"\n",
"# Run through the dataset and collect spans\n",
"def complete_task(question) -> str:\n",
" question_str = question[\"Questions\"]\n",
" response = openai_client.chat.completions.create(\n",
" model=\"gpt-4o\",\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": question_str},\n",
" ],\n",
" )\n",
" return response.choices[0].message.content\n",
"\n",
"\n",
"def evaluate_all_responses(input, output):\n",
" response_classifications = llm_classify(\n",
" dataframe=pd.DataFrame([{\"question\": input[\"Questions\"], \"answer\": output}]),\n",
" template=RHYME_PROMPT_TEMPLATE,\n",
" model=OpenAIModel(model=\"gpt-4o\"),\n",
" rails=[\"correct\", \"incorrect\"],\n",
" provide_explanation=True,\n",
" )\n",
" score = response_classifications.apply(lambda x: 0 if x[\"label\"] == \"incorrect\" else 1, axis=1)\n",
" return score\n",
"\n",
"\n",
"experiment = run_experiment(\n",
" dataset=dataset,\n",
" task=complete_task,\n",
" evaluators=[evaluate_all_responses],\n",
" experiment_name=\"modified-system-prompt-full-dataset\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "XbuUrxrI-UR-"
},
"source": [
"## Tips for building your custom annotation UI"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "YCCEQtyXTa_i"
},
"source": [
"Here is a sample prompt you can feed into [Lovable](https://lovable.dev) to start building your custom LLM trace annotation UI. Feel free to adjust it to your needs. Note that you will need to implement functionality to fetch spans and send annotations to Phoenix. We’ve also included a brief explanation of how we approached this in our own implementation.\n",
"\n",
"**Prompt for Lovable:**\n",
"\n",
"Build a platform for annotating LLM spans and traces:\n",
"\n",
"1. Connect to Phoenix Cloud by collecting endpoint, API Key, and project name from the user\n",
"2. Load traces and spans from Phoenix (via [REST API](https://arize.com/docs/phoenix/sdk-api-reference/spans#get-v1-projects-project_identifier-spans) or [Python SDK](https://arize.com/docs/phoenix/tracing/how-to-tracing/feedback-and-annotations/evaluating-phoenix-traces#download-trace-dataset-from-phoenix)).\n",
"3. Display spans grouped by trace_id, with clear visual separation.\n",
"4. Allow annotators to assign a label, score, and explanation to each span or entire trace.\n",
"5. Support sending annotations back to Phoenix and reloading to see updates.\n",
"6. Use a clean, modern design\n",
"\n",
"**Details on how we built our Annotation UI:**\n",
"\n",
"✅ Frontend (Lovable):\n",
"\n",
"- Built in Lovable for easy UI generation.\n",
"- Allows loading LLM traces, displaying spans grouped by trace_id, and annotating spans with label, score, explanation.\n",
"\n",
"✅ Backend (Render, FastAPI):\n",
"\n",
"- Hosted on Render using FastAPI.\n",
"- Adds CORS for your Lovable frontend to communicate securely.\n",
"- Uses two key endpoints:\n",
" 1. GET /v1/projects/{project_identifier}/spans\n",
" 2. POST /v1/span_annotations"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}