# @arizeai/phoenix-mcp

Official MCP server by Arize-ai
# How to: Evals

## [Phoenix Evaluators](running-pre-tested-evals/)

* [Hallucinations](running-pre-tested-evals/hallucinations.md)
* [Q&A on Retrieved Data](running-pre-tested-evals/q-and-a-on-retrieved-data.md)
* [Retrieval (RAG) Relevance](running-pre-tested-evals/retrieval-rag-relevance.md)
* [Summarization](running-pre-tested-evals/summarization-eval.md)
* [Code Generation](running-pre-tested-evals/code-generation-eval.md)
* [Toxicity](running-pre-tested-evals/toxicity.md)
* [AI vs Human](running-pre-tested-evals/ai-vs-human-groundtruth.md)
* [Reference (Citation) Eval](running-pre-tested-evals/reference-link-evals.md)
* [User Frustration](running-pre-tested-evals/user-frustration.md)
* [SQL Generation Eval](running-pre-tested-evals/sql-generation-eval.md)
* [Agent Function Calling Eval](running-pre-tested-evals/tool-calling-eval.md)
* [Audio Emotion Detection](running-pre-tested-evals/audio-emotion-detection.md)

## [Bring Your Own Evaluator](bring-your-own-evaluator.md)

* [Categorical evaluator](bring-your-own-evaluator.md#categorical-llm_classify) (`llm_classify`)
* [Numeric evaluator](bring-your-own-evaluator.md#score-numeric-eval-llm_generate) (`llm_generate`)

## [Online Evals](./#online-evals)

Run evaluations via a job to visualize results in the UI as traces stream in.

## [Evaluating Phoenix Traces](../../tracing/how-to-tracing/feedback-and-annotations/evaluating-phoenix-traces.md)

Evaluate traces captured in Phoenix and export the results to the Phoenix UI.

## [Multimodal Evals](multimodal-evals.md)

Evaluate tasks with multiple input and output modalities (e.g., text, audio, image).
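The "Bring Your Own Evaluator" pages are built around `phoenix.evals.llm_classify` (categorical labels) and `llm_generate` (free-form or numeric output). As a rough illustration of the categorical flavor, here is a minimal sketch of a hallucination check; the sample records, column names, and model choice are assumptions for this example, and the import guard lets the sketch run even where `arize-phoenix-evals` (and an OpenAI API key) are not available:

```python
import pandas as pd

# One record to grade: the model's answer next to the context it should
# be grounded in (column names follow Phoenix's eval template variables).
df = pd.DataFrame(
    {
        "input": ["What is the capital of France?"],
        "reference": ["Paris is the capital and largest city of France."],
        "output": ["The capital of France is Lyon."],
    }
)

try:
    from phoenix.evals import (
        HALLUCINATION_PROMPT_RAILS_MAP,
        HALLUCINATION_PROMPT_TEMPLATE,
        OpenAIModel,
        llm_classify,
    )
except ImportError:
    llm_classify = None  # phoenix not installed; skip the LLM call

if llm_classify is not None:
    # rails constrain the judge model's answer to a fixed label set
    rails = list(HALLUCINATION_PROMPT_RAILS_MAP.values())
    results = llm_classify(
        dataframe=df,
        model=OpenAIModel(model="gpt-4o-mini"),
        template=HALLUCINATION_PROMPT_TEMPLATE,
        rails=rails,
        provide_explanation=True,
    )
    print(results["label"])
```

A numeric evaluator follows the same shape but uses `llm_generate` with a prompt that asks the judge for a score instead of a label.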

## MCP directory API

We provide all the information about MCP servers via our MCP API:

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/Arize-ai/phoenix'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.