
@arizeai/phoenix-mcp

Official, by Arize-ai

SUMMARY.md (3.46 kB)
# Table of contents

* [Featured Tutorials](README.md)
* [Agent Cookbooks](agent-cookbooks.md)
* [Agent Demos](agent-demos.md)
* [Agent Workflow Patterns](agent-workflow-patterns/README.md)
  * [AutoGen](agent-workflow-patterns/autogen.md)
  * [CrewAI](agent-workflow-patterns/crewai.md)
  * [Google GenAI SDK (Manual Orchestration)](agent-workflow-patterns/google-genai-sdk-manual-orchestration.md)
  * [OpenAI Agents](agent-workflow-patterns/openai-agents.md)
  * [LangGraph](agent-workflow-patterns/langgraph.md)
  * [Smolagents](agent-workflow-patterns/smolagents.md)

## Tracing

* [Agentic RAG Tracing](tracing/agentic-rag-tracing.md)
* [Generating Synthetic Datasets for LLM Evaluators & Agents](tracing/generating-synthetic-datasets-for-llm-evaluators-and-agents.md)
* [Structured Data Extraction](tracing/structured-data-extraction.md)
* [Product Recommendation Agent: Google Agent Engine & LangGraph](tracing/product-recommendation-agent-google-agent-engine-and-langgraph.md)
* [More Cookbooks](tracing/cookbooks.md)

## Human-in-the-loop Workflows (Annotations)

* [Using Human Annotations for Eval-Driven Development](human-in-the-loop-workflows-annotations/using-human-annotations-for-eval-driven-development.md)
* [Aligning LLM Evals with Human Annotations (TypeScript)](human-in-the-loop-workflows-annotations/aligning-llm-evals-with-human-annotations-typescript.md)
* [Creating a Custom LLM Evaluator with a Benchmark Dataset](human-in-the-loop-workflows-annotations/creating-a-custom-llm-evaluator-with-a-benchmark-dataset.md)

## Prompt Engineering

* [Prompt Learning - Optimizing Prompts for Classification](prompt-engineering/prompt-learning-optimizing-prompts-for-classification.md)
* [Few Shot Prompting](prompt-engineering/few-shot-prompting.md)
* [ReAct Prompting](prompt-engineering/react-prompting.md)
* [Chain-of-Thought Prompting](prompt-engineering/chain-of-thought-prompting.md)
* [Prompt Optimization](prompt-engineering/prompt-optimization.md)
* [LLM as a Judge Prompt Optimization](prompt-engineering/llm-as-a-judge-prompt-optimization.md)

## Evaluation

* [OpenAI Agents SDK Cookbook](evaluation/openai-agents-sdk-cookbook.md)
* [Evaluate a Talk-to-your-Data Agent](evaluation/evaluate-an-agent.md)
* [Evaluate RAG](evaluation/evaluate-rag.md)
* [Code Readability Evaluation](evaluation/code-readability-evaluation.md)
* [Relevance Classification Evaluation](evaluation/relevance-classification-evaluation.md)
* [Using Ragas to Evaluate a Math Problem-Solving Agent](evaluation/using-ragas-to-evaluate-a-math-problem-solving-agent.md)
* [More Cookbooks](evaluation/cookbooks.md)

## Datasets & Experiments

* [Experiment with a Customer Support Agent](datasets-and-experiments/experiment-with-a-customer-support-agent.md)
* [Model Comparison for an Email Text Extraction Service](datasets-and-experiments/model-comparison-for-an-email-text-extraction-service.md)
* [Comparing LlamaIndex Query Engines with a Pairwise Evaluator](datasets-and-experiments/comparing-llamaindex-query-engines-with-a-pairwise-evaluator.md)
* [Prompt Template Iteration for a Summarization Service](datasets-and-experiments/summarization.md)
* [Text2SQL Experiments](datasets-and-experiments/text2sql.md)
* [More Cookbooks](datasets-and-experiments/cookbooks.md)

## Retrieval & Inferences

* [Embeddings Analysis](retrieval-and-inferences/embeddings-analysis.md)
* [More Cookbooks](retrieval-and-inferences/cookbooks.md)

## Prompt Learning

MCP directory API

All of the information about MCP servers in the directory is available via our MCP API. For example, the following request returns the entry for this server:

```bash
curl -X GET 'https://glama.ai/api/mcp/v1/servers/Arize-ai/phoenix'
```
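If you prefer to call the endpoint from code rather than curl, here is a minimal TypeScript sketch. It assumes a Node.js 18+ runtime (built-in `fetch`) and a JSON response body; the exact response schema is not documented here, so the result is treated as opaque JSON.

```typescript
// Minimal sketch: fetch this server's directory entry from the Glama MCP API.
// Assumes Node.js 18+ (global fetch) and a JSON response; the schema is not assumed.
async function getServerEntry(): Promise<unknown> {
  const res = await fetch("https://glama.ai/api/mcp/v1/servers/Arize-ai/phoenix");
  if (!res.ok) {
    throw new Error(`Request failed with status ${res.status}`);
  }
  return res.json();
}

getServerEntry()
  .then((entry) => console.log(JSON.stringify(entry, null, 2)))
  .catch((err) => console.error(err));
```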

If you have feedback or need assistance with the MCP directory API, please join our Discord server.