{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "<center>\n", " <p style=\"text-align:center\">\n", " <img alt=\"phoenix logo\" src=\"https://storage.googleapis.com/arize-phoenix-assets/assets/phoenix-logo-light.svg\" width=\"200\"/>\n", " <br>\n", " <a href=\"https://arize.com/docs/phoenix/\">Docs</a>\n", " |\n", " <a href=\"https://github.com/Arize-ai/phoenix\">GitHub</a>\n", " |\n", " <a href=\"https://arize-ai.slack.com/join/shared_invite/zt-2w57bhem8-hq24MB6u7yE_ZF_ilOYSBw#/shared-invite/email\">Community</a>\n", " </p>\n", "</center>\n", "<h1 align=\"center\">Tracing and Evaluating a Haystack Application</h1>\n", "\n", "Haystack is an orchestration framework for building production grade LLM applications. It provides:\n", "- A modular and flexible architecture for building LLM-powered applications\n", "- Pre-built components for common NLP tasks like retrieval, generation, and question answering\n", "- Easy integration with various LLM providers and document stores\n", "- Scalability for production environments\n", "\n", " \n", "Phoenix makes your Haystack applications *observable* by visualizing the underlying structure of each call to your Haystack Pipelines and surfacing problematic spans of execution based on latency, token count, or other evaluation metrics.\n", "\n", "ℹ️ This notebook requires an OpenAI API key.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install arize-phoenix arize-phoenix-otel openinference-instrumentation-haystack haystack-ai opentelemetry-sdk opentelemetry-exporter-otlp" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import phoenix as px\n", "\n", "px.launch_app()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from openinference.instrumentation.haystack import HaystackInstrumentor\n", "\n", "from phoenix.otel import register\n", "\n", "tracer_provider = register()\n", "\n", "HaystackInstrumentor().instrument(tracer_provider=tracer_provider)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from haystack import Document, Pipeline\n", "from haystack.components.builders.prompt_builder import PromptBuilder\n", "from haystack.components.generators import OpenAIGenerator\n", "from haystack.components.retrievers.in_memory import InMemoryBM25Retriever\n", "from haystack.document_stores.in_memory import InMemoryDocumentStore\n", "\n", "document_store = InMemoryDocumentStore()\n", "document_store.write_documents(\n", " [\n", " Document(content=\"My name is Jean and I live in Paris.\"),\n", " Document(content=\"My name is Mark and I live in Berlin.\"),\n", " Document(content=\"My name is Giorgio and I live in Rome.\"),\n", " ]\n", ")\n", "\n", "prompt_template = \"\"\"\n", "Given these documents, answer the question.\n", "Documents:\n", "{% for doc in documents %}\n", " {{ doc.content }}\n", "{% endfor %}\n", "Question: {{question}}\n", "Answer:\n", "\"\"\"\n", "\n", "retriever = InMemoryBM25Retriever(document_store=document_store)\n", "prompt_builder = PromptBuilder(template=prompt_template)\n", "llm = OpenAIGenerator()\n", "\n", "rag_pipeline = Pipeline()\n", "rag_pipeline.add_component(\"retriever\", retriever)\n", "rag_pipeline.add_component(\"prompt_builder\", prompt_builder)\n", "rag_pipeline.add_component(\"llm\", llm)\n", "rag_pipeline.connect(\"retriever\", \"prompt_builder.documents\")\n", "rag_pipeline.connect(\"prompt_builder\", 
\"llm\")\n", "\n", "question = \"Who lives in Paris?\"\n", "results = rag_pipeline.run(\n", " {\n", " \"retriever\": {\"query\": question},\n", " \"prompt_builder\": {\"question\": question},\n", " }\n", ")" ] } ], "metadata": { "language_info": { "name": "python" } }, "nbformat": 4, "nbformat_minor": 2 }

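To take the "evaluating" half of the title further, you can pull the captured spans back into the notebook for downstream analysis or evals. A sketch using the Phoenix client, assuming the local app launched above:

```python
import phoenix as px

# Pull the captured spans into a pandas DataFrame for inspection or evals.
spans_df = px.Client().get_spans_dataframe()
print(spans_df.head())
```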