@arizeai/phoenix-mcp (Official, by Arize-ai)

client-migration.mdc
---
description:
globs:
alwaysApply: false
---

## Client Migration

The `arize-phoenix` package contains a sub-package named `arize-phoenix-client` with the full functionality needed to communicate with the Phoenix server. The legacy client exported at the top level of the `arize-phoenix` package is being deprecated. Here are common migration patterns.

## Quick Reference for LLMs

**Automatic Migration Rules:**

1. **Replace**: `import phoenix as px` → `from phoenix.client import Client` (or `AsyncClient`)
2. **Replace**: `px.Client()` → `Client()` (variable name: `px_client`)
3. **Replace**: `client.query_spans(...)` → `client.spans.get_spans_dataframe(...)`
4. **Replace**: `client.get_spans_dataframe()` → `client.spans.get_spans_dataframe()`
5. **Replace**: `client.upload_dataset(...)` → `client.datasets.create_dataset(...)`
6. **Replace**: `client.get_dataset(...)` → `client.datasets.get_dataset(...)`
7. **Replace**: `px.Client().log_evaluations(SpanEvaluations(...))` → `px_client.spans.log_span_annotations_dataframe(...)`
8. **Replace**: `px.Client().log_evaluations(DocumentEvaluations(...))` → `px_client.spans.log_document_annotations_dataframe(...)`
9. **Replace**: `from phoenix.experiments import` → `from phoenix.client.experiments import`
10. **Replace**: `from phoenix.trace.dsl import SpanQuery` → `from phoenix.client.types.spans import SpanQuery`
11. **Replace**: `get_spans_dataframe(query="filter_string")` → `get_spans_dataframe(query=SpanQuery().where("filter_string"))`
12. **Parameter**: `project_name=` → `project_identifier=`
13. **Parameter**: `dataset_name=` → `name=`
14. **Parameter**: `eval_name=` → `annotation_name=`
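The text-level replace rules above can be sketched as a small regex rewrite pass. This is an illustrative sketch of a subset of the rules, not an official codemod; the function name `migrate_source` and the rule ordering are made up for this example, and a real migration should be reviewed by hand.

```python
import re

# Ordered (pattern, replacement) pairs sketching a subset of the rules above.
# This is a rough text-level pass, not a syntax-aware codemod.
RULES = [
    (r"import phoenix as px", "from phoenix.client import Client"),
    (r"px\.Client\(\)", "px_client"),  # assumes px_client = Client() is added separately
    (r"\.get_spans_dataframe\(\)", ".spans.get_spans_dataframe()"),
    (r"\.query_spans\(", ".spans.get_spans_dataframe("),
    (r"\.upload_dataset\(", ".datasets.create_dataset("),
    (r"from phoenix\.experiments import", "from phoenix.client.experiments import"),
    (r"from phoenix\.trace\.dsl import SpanQuery", "from phoenix.client.types.spans import SpanQuery"),
    (r"\bproject_name=", "project_identifier="),
    (r"\bdataset_name=", "name="),
    (r"\beval_name=", "annotation_name="),
]

def migrate_source(source: str) -> str:
    """Apply each rewrite rule in order to a source string."""
    for pattern, replacement in RULES:
        source = re.sub(pattern, replacement, source)
    return source

legacy = 'spans_df = px.Client().query_spans(query, project_name="my-project")'
print(migrate_source(legacy))
# spans_df = px_client.spans.get_spans_dataframe(query, project_identifier="my-project")
```

Rule order matters: the `px.Client()` rewrite runs before the method-path rewrites so the resource-based paths attach to `px_client`.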
## Pattern Matching for LLMs

**Identify Legacy Patterns (RegEx-like matching):**

- `import phoenix as px`
- `px\.Client\(\)`
- `\.query_spans\(`
- `\.get_spans_dataframe\(\)`
- `\.upload_dataset\(`
- `\.get_dataset\(`
- `\.log_evaluations\(SpanEvaluations\(`
- `\.log_evaluations\(DocumentEvaluations\(`
- `from phoenix\.experiments import`
- `from phoenix\.trace\.dsl import SpanQuery`
- `get_spans_dataframe\(query=".*"\)`
- `project_name\s*=`
- `dataset_name\s*=`
- `eval_name\s*=`

**Validation Rules After Migration:**

- ✅ Must have: `from phoenix.client import` at top
- ✅ Must use: `px_client` as variable name (preferred)
- ✅ Must add: `annotator_kind="LLM"` for span evaluations
- ✅ Must await: `AsyncClient` method calls with `await`
- ❌ No more: `import phoenix as px` (unless used for other purposes)
- ❌ No more: `SpanEvaluations` or `DocumentEvaluations` imports

## Decision Tree for LLMs

**Client Type Selection:**

```
IF file extension == ".ipynb":
    USE AsyncClient
    ADD await before method calls
ELIF file extension == ".py":
    USE Client (synchronous)
    NO await needed
```

**Evaluations Migration:**

```
IF found "log_evaluations(SpanEvaluations(":
    MIGRATE to "spans.log_span_annotations_dataframe("
    ADD "annotator_kind='LLM'," parameter
    CHANGE "eval_name=" to "annotation_name="

IF found "log_evaluations(DocumentEvaluations(":
    MIGRATE to "spans.log_document_annotations_dataframe("
    ADD "annotator_kind='LLM'," parameter
    CHANGE "eval_name=" to "annotation_name="
```

**Import Consolidation:**

```
IF file contains multiple phoenix.client imports:
    CONSOLIDATE to single line: "from phoenix.client import Client, AsyncClient"

IF file uses both Client and other legacy phoenix features:
    KEEP both imports:
    - "import phoenix as px" (for legacy features like launch_app)
    - "from phoenix.client import Client" (for new client)
```

## Common Error Patterns to Avoid

**❌ Wrong Patterns:**

```python
# DON'T mix old and new imports incorrectly
import phoenix as px
from phoenix.client import Client

client = px.Client()  # Should use Client()

# DON'T forget await with AsyncClient
px_client = AsyncClient()
px_client.spans.get_spans_dataframe()  # Missing await

# DON'T use wrong resource path for annotations
px_client.annotations.log_span_annotations_dataframe(  # Wrong! Should be spans.log_span_annotations_dataframe
    dataframe=df, annotation_name="test"
)

# DON'T forget required annotator_kind
px_client.spans.log_span_annotations_dataframe(
    dataframe=df, annotation_name="test"  # Missing annotator_kind="LLM"
)
```

## Basic Client Import Patterns

### Legacy Pattern

```python
# Complete context - typical legacy file header
import phoenix as px

# Legacy client instantiation
client = px.Client()
# or
px_client = px.Client()
```

### New Patterns

**Synchronous Client (for .py files):**

```python
# Complete context - new file header
from phoenix.client import Client

# New client instantiation
px_client = Client()
```

**Asynchronous Client (for .ipynb notebooks):**

```python
# Complete context - new notebook cell
from phoenix.client import AsyncClient

# New async client instantiation
px_client = AsyncClient()
```

### Key Changes

- Import path: `import phoenix as px` → `from phoenix.client import Client/AsyncClient`
- Client instantiation: `px.Client()` → `Client()` or `AsyncClient()`
- Recommended variable name: `px_client` (instead of generic `client`)

## Client Query Patterns

### Legacy Pattern

```python
import phoenix as px

# Querying spans
spans_df = px.Client().query_spans(query, project_name="my-project")

# Getting spans dataframe
spans_df = px.Client().get_spans_dataframe()
```

### New Patterns

**Synchronous:**

```python
from phoenix.client import Client

px_client = Client()
spans_df = px_client.spans.get_spans_dataframe(query=query, project_identifier="my-project")
```

**Asynchronous:**

```python
from phoenix.client import AsyncClient

px_client = AsyncClient()
spans_df = await px_client.spans.get_spans_dataframe(query=query, project_identifier="my-project")
```
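The synchronous/asynchronous split above is the main migration hazard: `AsyncClient` methods return coroutines, so a forgotten `await` silently yields a coroutine object instead of data. A toy stand-in class (not the real Phoenix client; `FakeAsyncSpans` is invented for this illustration) shows the failure mode:

```python
import asyncio

class FakeAsyncSpans:
    """Toy stand-in for AsyncClient().spans - illustration only."""
    async def get_spans_dataframe(self):
        return "spans-dataframe"  # the real client returns a pandas DataFrame

async def main():
    spans = FakeAsyncSpans()

    forgotten = spans.get_spans_dataframe()      # missing await: a coroutine, not data
    print(type(forgotten).__name__)              # coroutine

    awaited = await spans.get_spans_dataframe()  # correct: the actual result
    print(awaited)                               # spans-dataframe

    forgotten.close()  # silence the "coroutine was never awaited" warning

asyncio.run(main())
```

This is why the validation rules require `await` on every `AsyncClient` method call.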
### Key Changes

- `query_spans()` → `spans.get_spans_dataframe()`
- `get_spans_dataframe()` → `spans.get_spans_dataframe()`
- `project_name` → `project_identifier`
- Resource-based API: methods now accessed via `client.spans.*`

## Experiments Migration

### Legacy Pattern

```python
from phoenix.experiments import run_experiment, evaluate_experiment
```

### New Pattern

```python
from phoenix.client.experiments import run_experiment, evaluate_experiment
```

### Key Changes

- Import path: `phoenix.experiments` → `phoenix.client.experiments`

## SpanQuery Migration

### Legacy Pattern

```python
from phoenix.trace.dsl import SpanQuery

# Old way with string query filters
spans_df = px.Client().get_spans_dataframe(query="span_kind == 'LLM'")

# Or with SpanQuery object (older import)
query = SpanQuery().where("span_kind == 'LLM'").select(input="input.value")
spans_df = px.Client().query_spans(query)
```

### New Pattern

```python
from phoenix.client import AsyncClient
from phoenix.client.types.spans import SpanQuery

px_client = AsyncClient()

# New way: SpanQuery object only (no string queries)
query = SpanQuery().where("span_kind == 'LLM'")
spans_df = await px_client.spans.get_spans_dataframe(query=query)
```

### Key Changes

- Import path: `phoenix.trace.dsl` → `phoenix.client.types.spans`
- SpanQuery usage remains the same after import
- **No more string query filters**: use a `SpanQuery` object instead of a query string parameter

## Datasets Migration

### Legacy Pattern

```python
import phoenix as px

dataset = px.Client().upload_dataset(
    dataframe=df,
    dataset_name="my-dataset",
    input_keys=["question"],
    output_keys=["answer"]
)

dataset = px.Client().get_dataset(name="my-dataset")
```

### New Pattern

```python
from phoenix.client import Client

px_client = Client()
dataset = px_client.datasets.create_dataset(
    dataframe=df,
    name="my-dataset",
    input_keys=["question"],
    output_keys=["answer"]
)

dataset = px_client.datasets.get_dataset(dataset="my-dataset")
```
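The keyword renames running through the sections above (`project_name` → `project_identifier`, `dataset_name` → `name`, `eval_name` → `annotation_name`) can be expressed as a small shim. This is a hypothetical helper for migration scripts, not part of the Phoenix client; the name `rename_legacy_kwargs` is invented here.

```python
# Hypothetical helper: rename legacy keyword arguments to their new names.
# The mapping follows the parameter changes described in this guide.
LEGACY_KWARGS = {
    "project_name": "project_identifier",  # spans queries
    "dataset_name": "name",                # datasets.create_dataset
    "eval_name": "annotation_name",        # span/document annotations
}

def rename_legacy_kwargs(kwargs: dict) -> dict:
    """Return a copy of kwargs with legacy parameter names rewritten."""
    return {LEGACY_KWARGS.get(key, key): value for key, value in kwargs.items()}

print(rename_legacy_kwargs({"dataset_name": "my-dataset", "input_keys": ["question"]}))
# {'name': 'my-dataset', 'input_keys': ['question']}
```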
### Key Changes

- `upload_dataset()` → `datasets.create_dataset()`
- `get_dataset()` → `datasets.get_dataset()`
- `dataset_name` → `name`
- `name` parameter → `dataset` parameter (for `get_dataset`)

## Log Evaluation Migration

### Span Evaluations (MIGRATE)

**Legacy Pattern (Complete Example):**

```python
# Complete legacy file context
import phoenix as px
from phoenix.trace import SpanEvaluations
import pandas as pd

# Some evaluation dataframes
relevance_df = pd.DataFrame({"score": [0.8, 0.9], "label": ["good", "excellent"]})
hallucination_df = pd.DataFrame({"score": [0.1, 0.2], "label": ["low", "low"]})

# Legacy single evaluation
px.Client().log_evaluations(
    SpanEvaluations(
        dataframe=relevance_df,
        eval_name="Recommendation Relevance",
    ),
)

# Legacy multiple evaluations (single call)
px.Client().log_evaluations(
    SpanEvaluations(eval_name="Hallucination", dataframe=hallucination_df),
    SpanEvaluations(eval_name="QA Correctness", dataframe=qa_correctness_df),
)
```

**New Pattern (Synchronous - for .py files):**

```python
# Complete new file context
from phoenix.client import Client
import pandas as pd

# Same evaluation dataframes
relevance_df = pd.DataFrame({"score": [0.8, 0.9], "label": ["good", "excellent"]})
hallucination_df = pd.DataFrame({"score": [0.1, 0.2], "label": ["low", "low"]})

# New single evaluation
px_client = Client()
px_client.spans.log_span_annotations_dataframe(
    dataframe=relevance_df,
    annotation_name="Recommendation Relevance",
    annotator_kind="LLM",
)

# New multiple evaluations (separate calls)
px_client.spans.log_span_annotations_dataframe(
    dataframe=hallucination_df,
    annotation_name="Hallucination",
    annotator_kind="LLM",
)
px_client.spans.log_span_annotations_dataframe(
    dataframe=qa_correctness_df,
    annotation_name="QA Correctness",
    annotator_kind="LLM",
)
```

**New Pattern (Asynchronous - for .ipynb notebooks):**

```python
# Complete new notebook cell context
from phoenix.client import AsyncClient
import pandas as pd

# Same evaluation dataframes
relevance_df = pd.DataFrame({"score": [0.8, 0.9], "label": ["good", "excellent"]})
hallucination_df = pd.DataFrame({"score": [0.1, 0.2], "label": ["low", "low"]})

# New async single evaluation
px_client = AsyncClient()
await px_client.spans.log_span_annotations_dataframe(
    dataframe=relevance_df,
    annotation_name="Recommendation Relevance",
    annotator_kind="LLM",
)

# New async multiple evaluations (separate calls with await)
await px_client.spans.log_span_annotations_dataframe(
    dataframe=hallucination_df,
    annotation_name="Hallucination",
    annotator_kind="LLM",
)
await px_client.spans.log_span_annotations_dataframe(
    dataframe=qa_correctness_df,
    annotation_name="QA Correctness",
    annotator_kind="LLM",
)
```

### Key Changes

- `log_evaluations(SpanEvaluations(...))` → `spans.log_span_annotations_dataframe(...)`
- `eval_name` → `annotation_name`
- Added required `annotator_kind` parameter
- Multiple evaluations require separate function calls
- Import: Remove `SpanEvaluations` import

### Document Evaluations (MIGRATE)

**Legacy Pattern (Complete Example):**

```python
# Complete legacy file context
import phoenix as px
from phoenix.trace import DocumentEvaluations
import pandas as pd

# Document evaluation dataframe with required columns: span_id, document_position
document_relevance_df = pd.DataFrame({
    "span_id": ["span_1", "span_1", "span_2"],
    "document_position": [0, 1, 0],
    "score": [1, 1, 0],
    "label": ["relevant", "relevant", "irrelevant"],
    "explanation": ["it's apropos", "it's germane", "it's rubbish"]
})

# Legacy single document evaluation
px.Client().log_evaluations(
    DocumentEvaluations(
        dataframe=document_relevance_df,
        eval_name="Relevance",
    ),
)

# Legacy multiple evaluations (single call)
px.Client().log_evaluations(
    DocumentEvaluations(eval_name="Relevance", dataframe=document_relevance_df),
    DocumentEvaluations(eval_name="Accuracy", dataframe=document_accuracy_df),
)
```
**New Pattern (Synchronous - for .py files):**

```python
# Complete new file context
from phoenix.client import Client
import pandas as pd

# Same document evaluation dataframe
document_relevance_df = pd.DataFrame({
    "span_id": ["span_1", "span_1", "span_2"],
    "document_position": [0, 1, 0],
    "score": [1, 1, 0],
    "label": ["relevant", "relevant", "irrelevant"],
    "explanation": ["it's apropos", "it's germane", "it's rubbish"]
})

# New single document evaluation
px_client = Client()
px_client.spans.log_document_annotations_dataframe(
    dataframe=document_relevance_df,
    annotation_name="Relevance",
    annotator_kind="LLM",
)

# New multiple evaluations (separate calls)
px_client.spans.log_document_annotations_dataframe(
    dataframe=document_relevance_df,
    annotation_name="Relevance",
    annotator_kind="LLM",
)
px_client.spans.log_document_annotations_dataframe(
    dataframe=document_accuracy_df,
    annotation_name="Accuracy",
    annotator_kind="LLM",
)
```

**New Pattern (Asynchronous - for .ipynb notebooks):**

```python
# Complete new notebook cell context
from phoenix.client import AsyncClient
import pandas as pd

# Same document evaluation dataframe
document_relevance_df = pd.DataFrame({
    "span_id": ["span_1", "span_1", "span_2"],
    "document_position": [0, 1, 0],
    "score": [1, 1, 0],
    "label": ["relevant", "relevant", "irrelevant"],
    "explanation": ["it's apropos", "it's germane", "it's rubbish"]
})

# New async single document evaluation
px_client = AsyncClient()
await px_client.spans.log_document_annotations_dataframe(
    dataframe=document_relevance_df,
    annotation_name="Relevance",
    annotator_kind="LLM",
)

# New async multiple evaluations (separate calls with await)
await px_client.spans.log_document_annotations_dataframe(
    dataframe=document_relevance_df,
    annotation_name="Relevance",
    annotator_kind="LLM",
)
await px_client.spans.log_document_annotations_dataframe(
    dataframe=document_accuracy_df,
    annotation_name="Accuracy",
    annotator_kind="LLM",
)
```
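The document-annotation patterns above require `span_id` and `document_position` columns in the dataframe. A hypothetical pre-flight check (not part of the Phoenix client; `missing_document_columns` is invented here) could catch a malformed dataframe before logging:

```python
import pandas as pd

# Columns the document-annotation dataframes above must contain.
REQUIRED_COLUMNS = {"span_id", "document_position"}

def missing_document_columns(df: pd.DataFrame) -> list:
    """Hypothetical pre-flight check: list required columns absent from df."""
    return sorted(REQUIRED_COLUMNS - set(df.columns))

ok_df = pd.DataFrame({"span_id": ["span_1"], "document_position": [0], "score": [1]})
bad_df = pd.DataFrame({"score": [0.5]})

print(missing_document_columns(ok_df))   # []
print(missing_document_columns(bad_df))  # ['document_position', 'span_id']
```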
### Key Changes for Document Evaluations

- `log_evaluations(DocumentEvaluations(...))` → `spans.log_document_annotations_dataframe(...)`
- `eval_name` → `annotation_name`
- Added required `annotator_kind` parameter
- Multiple evaluations require separate function calls
- DataFrame must include `span_id` and `document_position` columns
- Import: Remove `DocumentEvaluations` import

## Complete Parameter Mapping Table for LLMs

**Evaluations Parameter Changes:**

| Legacy Parameter | New Parameter     | Notes                     |
| ---------------- | ----------------- | ------------------------- |
| `eval_name`      | `annotation_name` | Name of the evaluation    |
| N/A              | `annotator_kind`  | Required: typically "LLM" |
| `dataframe`      | `dataframe`       | Same parameter name       |

**DocumentEvaluations DataFrame Requirements:**

| Required Column     | Description                                  |
| ------------------- | -------------------------------------------- |
| `span_id`           | ID of the span containing the documents      |
| `document_position` | 0-based index of document within the span    |
| `score`             | Optional: numeric evaluation score           |
| `label`             | Optional: categorical evaluation label       |
| `explanation`       | Optional: text explanation of the evaluation |

**Complete API Transformation Table:**

| Legacy Pattern | New Pattern | Key Changes |
| -------------- | ----------- | ----------- |
| `px.Client().query_spans(project_name=...)` | `px_client.spans.get_spans_dataframe(project_identifier=...)` | Resource path + parameter name |
| `px.Client().get_spans_dataframe()` | `px_client.spans.get_spans_dataframe()` | Resource path only |
| `px.Client().upload_dataset(dataset_name=...)` | `px_client.datasets.create_dataset(name=...)` | Resource path + parameter name |
| `px.Client().get_dataset(name=...)` | `px_client.datasets.get_dataset(dataset=...)` | Resource path + parameter name |
| `px.Client().log_evaluations(SpanEvaluations(eval_name=...))` | `px_client.spans.log_span_annotations_dataframe(annotation_name=..., annotator_kind="LLM")` | Resource path + parameters + required field |
| `px.Client().log_evaluations(DocumentEvaluations(eval_name=...))` | `px_client.spans.log_document_annotations_dataframe(annotation_name=..., annotator_kind="LLM")` | Resource path + parameters + required field |
| `from phoenix.trace.dsl import SpanQuery` | `from phoenix.client.types.spans import SpanQuery` | Import path only |
| `get_spans_dataframe(query="filter_string")` | `get_spans_dataframe(query=SpanQuery().where("filter_string"))` | String queries → SpanQuery objects |

## Import Statement Cleanup

Remove unused imports after migration:

```python
# Remove these after migration:
from phoenix.trace import SpanEvaluations      # ❌ Remove
from phoenix.trace import DocumentEvaluations  # ❌ Remove
from phoenix.trace.dsl import SpanQuery        # ❌ Remove
import phoenix as px                           # ❌ Remove if only used for Client()

# Keep these:
import phoenix as px  # ✅ Keep if used for other functionality (launch_app, etc.)

# Replace with new imports:
from phoenix.client import Client                 # ✅ New client import
from phoenix.client import AsyncClient            # ✅ New async client import
from phoenix.client.types.spans import SpanQuery  # ✅ New SpanQuery import
```
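The validation rules from earlier in this guide can be sketched as a post-migration lint pass over the source text. This is an illustrative sketch only; the function and check names are invented here, and the string checks are deliberately naive.

```python
import re

# Sketch of the post-migration validation rules from this guide (illustrative only).
CHECKS = [
    ("has new client import", lambda src: "from phoenix.client import" in src),
    ("no legacy SpanEvaluations import", lambda src: "from phoenix.trace import SpanEvaluations" not in src),
    ("no legacy DocumentEvaluations import", lambda src: "from phoenix.trace import DocumentEvaluations" not in src),
    ("no legacy SpanQuery import", lambda src: "from phoenix.trace.dsl import SpanQuery" not in src),
    ("no px.Client() calls", lambda src: not re.search(r"px\.Client\(\)", src)),
]

def validate_migration(source: str) -> list:
    """Return the names of checks that failed for the given source text."""
    return [name for name, passed in CHECKS if not passed(source)]

migrated = "from phoenix.client import Client\npx_client = Client()\n"
print(validate_migration(migrated))
# []
```

Running the same function on an unmigrated file reports the offending rules, which is a convenient way to confirm the cleanup above actually removed every legacy pattern.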
