{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<center>\n",
" <p style=\"text-align:center\">\n",
" <img alt=\"phoenix logo\" src=\"https://raw.githubusercontent.com/Arize-ai/phoenix-assets/9e6101d95936f4bd4d390efc9ce646dc6937fb2d/images/socal/github-large-banner-phoenix.jpg\" width=\"1000\"/>\n",
" <br>\n",
" <br>\n",
" <a href=\"https://arize.com/docs/phoenix/\">Docs</a>\n",
" |\n",
" <a href=\"https://github.com/Arize-ai/phoenix\">GitHub</a>\n",
" |\n",
" <a href=\"https://arize-ai.slack.com/join/shared_invite/zt-2w57bhem8-hq24MB6u7yE_ZF_ilOYSBw#/shared-invite/email\">Community</a>\n",
" </p>\n",
"</center>\n",
"<h1 align=\"center\">Phoenix Prompts + Vercel AI SDK Tutorial</h1>\n",
"\n",
"Let's see how to get started with using the Phoenix JavaScript SDK to pull prompts from an\n",
"instance of Phoenix, and then invoke those prompts using Vercel AI SDK with multiple LLM providers, all in a Deno Jupyter notebook.\n",
"\n",
"> Note: that this example requires an OpenAI API key, a Google Generative AI API key, and assumes you are running the Phoenix server on localhost:6006."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import * as PhoenixClient from \"npm:@arizeai/phoenix-client\"\n",
"import ai from \"npm:ai\"\n",
"import { createOpenAI } from 'npm:@ai-sdk/openai';\n",
"import { createGoogleGenerativeAI } from 'npm:@ai-sdk/google';"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's get started by creating a Phoenix client. This is technically optional, as the Phoenix Prompt helper methods will create a client if not provided.\n",
"\n",
"However, we will create a client here to show how to configure the client with your Phoenix instance."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"const px = PhoenixClient.createClient({\n",
" options: {\n",
" baseUrl: \"http://localhost:6006\",\n",
" // Uncomment this if you are using a Phoenix instance that requires an API key\n",
" // headers: {\n",
" // Authorization: \"bearer xxxxxx\",\n",
" // }\n",
" }\n",
"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, let's setup the OpenAI provider via the Vercel AI SDK. This will prompt you for an OpenAI API key."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"const apiKey = prompt(\"Enter your OpenAI API key:\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"const openai = createOpenAI({ apiKey: apiKey });"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will also setup the Google provider, which will prompt you for a Google Generative AI API key."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"const googleApiKey = prompt(\"Enter your Google Generative AI API key:\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"const google = createGoogleGenerativeAI({ apiKey: googleApiKey });"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's build a prompt in Phoenix!\n",
"\n",
"We will create a prompt called `question-searcher` that asks a question and then returns the answer.\n",
"\n",
"Let's set up the prompt to use `OpenAI` as the LLM provider, and lets also add a \"variable\" to one of the prompt messages, named `query`.\n",
"\n",
"This will allow us to pass in a question to the prompt when we invoke it.\n",
"\n",
"Finally, lets add a tool called `search` to the prompt. This tool will allow the LLM to request a search for a query it receives.\n",
"\n",
"```json\n",
"{\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"search\",\n",
" \"description\": \"Query the internet for the answer to a question.\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"query\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"Search term\"\n",
" }\n",
" },\n",
" \"required\": [\"query\"]\n",
" }\n",
" }\n",
"}\n",
"```\n",
"\n",
"Note that the tool definition is in the format that OpenAI expects, but not Google. We will come back to this later!\n",
"\n",
"[Follow the steps outlined in the documentation](https://arize.com/docs/phoenix/prompt-engineering/how-to-prompts/create-a-prompt) to build your prompt as described above, in the Phoenix UI.\n",
"\n",
"Once you have a prompt, you can pull it into the notebook using the Phoenix client."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import * as Prompts from \"npm:@arizeai/phoenix-client/prompts\"\n",
"\n",
"const questionSearcherPrompt = await Prompts.getPrompt({ client: px, prompt: { name: \"question-searcher\" } })\n",
"\n",
"await Deno.jupyter.md`\n",
" ### question-searcher prompt\n",
"\n",
" \\`\\`\\`json\n",
" ${JSON.stringify(questionSearcherPrompt, null, 2)}\n",
" \\`\\`\\`\n",
" `"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Great! Now we have the contents of the `question-searcher` prompt. It's currently in a format that Phoenix understands, so we need to convert it to a format that our SDKs can understand.\n",
"\n",
"Luckily, `@arizeai/phoenix-client` comes with a helper function, `toSDK`, to convert the prompt to LLM SDK compatible formats, and inject variables into the prompt at the same time.\n",
"\n",
"We will leverage this helper function to convert the prompt to the Vercel AI Core SDK format, and inject the `query` variable into the prompt."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"const aiPrompt = Prompts.toSDK({ \n",
" sdk: \"ai\", \n",
" prompt: questionSearcherPrompt, \n",
" variables: { query: \"When does the next Arize AI Phoenix release come out?\" } \n",
"})\n",
"\n",
"await Deno.jupyter.md`\n",
" ### Vercel AI Core Prompt\n",
"\n",
" \\`\\`\\`json\n",
" ${JSON.stringify(aiPrompt, null, 2)}\n",
" \\`\\`\\`\n",
"`"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We now have a set of \"invocation parameters\", fully typed and in the format that Vercel AI SDK expects.\n",
"\n",
"Notably, this includes tool definitions as well. The format of the tool definitions is different for OpenAI and Anthropic, and the `@arizeai/phoenix-client` library handles this for you via the `toSDK` function.\n",
"\n",
"Let's invoke OpenAI with our prompt and then Google Generative AI with our prompt, and compare the results!\n",
"\n",
"First, let's invoke OpenAI with our prompt."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import { generateText } from \"npm:ai\"\n",
"\n",
"// lets just hardcode the model for now\n",
"// you can use the model saved in your Phoenix Prompt if you want, but since we are using Vercel AI SDK,\n",
"// we will just redeclare the model so that we can take advantage of Vercel AI SDK features\n",
"const openaiModel = openai(\"gpt-4o-mini\")\n",
"\n",
"const response = await generateText({\n",
" model: openaiModel,\n",
" ...aiPrompt,\n",
"})\n",
"\n",
"await Deno.jupyter.md`\n",
" ### OpenAI Response\n",
"\n",
" ${response.text || `\\`\\`\\`json\\n${JSON.stringify(response.steps[0].toolCalls, null, 2)}\\`\\`\\``}\n",
"`"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then, let's invoke Google Generative AI with the same prompt."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"const response = await generateText({\n",
" model: google(\"gemini-2.0-flash-001\"),\n",
" ...aiPrompt,\n",
"})\n",
"\n",
"await Deno.jupyter.md`\n",
" ### Google Generative AI Response\n",
"\n",
" ${response.text || `\\`\\`\\`json\\n${JSON.stringify(response.steps[0].toolCalls, null, 2)}\\`\\`\\``}\n",
"`"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Great! We've successfully invoked OpenAI, and Google Generative AI with our prompt and got a response.\n",
"\n",
"You can see that this workflow allows you to build a prompt in Phoenix,\n",
"and then invoke it with multiple LLM providers, all without having to modify the prompt or the invocation parameters yourself.\n",
"\n",
"Additionally, you have the flexibility to use the prompt with all of the features and capabilities you would expect from Vercel AI SDK.\n",
"\n",
"Here are some additional features and capabilities you can explore:\n",
"\n",
"- Prompt Versioning / Tagging\n",
"- Prompt Conversion to other LLM providers"
]
}
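,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, here is a minimal sketch of pulling a tagged or pinned prompt version, and of converting a prompt for a different SDK. The `tag`, `versionId`, and `sdk: \"openai\"` options are taken from the `@arizeai/phoenix-client` documentation, but treat the exact parameter names as assumptions and verify them against the version you have installed:\n",
"\n",
"```ts\n",
"// Pull the prompt version tagged \"production\" in Phoenix\n",
"// (assumes a \"production\" tag exists on question-searcher)\n",
"const productionPrompt = await Prompts.getPrompt({\n",
"  client: px,\n",
"  prompt: { name: \"question-searcher\", tag: \"production\" },\n",
"})\n",
"\n",
"// Or pin to an exact prompt version by its ID (placeholder ID below)\n",
"const pinnedPrompt = await Prompts.getPrompt({\n",
"  client: px,\n",
"  prompt: { versionId: \"your-prompt-version-id\" },\n",
"})\n",
"\n",
"// The same toSDK helper can also target other SDK formats, e.g. the OpenAI SDK\n",
"const openaiFormatPrompt = Prompts.toSDK({\n",
"  sdk: \"openai\",\n",
"  prompt: questionSearcherPrompt,\n",
"  variables: { query: \"When does the next Arize AI Phoenix release come out?\" },\n",
"})\n",
"```"
]
}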
],
"metadata": {
"language_info": {
"name": "typescript"
}
},
"nbformat": 4,
"nbformat_minor": 2
}