# Prompt Playground
<figure><img src="https://storage.googleapis.com/arize-phoenix-assets/assets/images/playground_overview.gif" alt=""><figcaption></figcaption></figure>
Phoenix's Prompt Playground makes the process of iterating on and testing prompts quick and easy. The playground supports [various AI providers](../how-to-prompts/configure-ai-providers.md) (OpenAI, Anthropic, Gemini, Azure) as well as custom model endpoints, making it the ideal prompt IDE for you to build, experiment with, and evaluate prompts and models for your task.
* **Speed**: Rapidly test variations in the [prompt](https://app.gitbook.com/s/fqGNxHHFrgwnCxgUBNsJ/prompt-engineering/prompts-concepts#prompt), model, invocation parameters, [tools](https://app.gitbook.com/s/fqGNxHHFrgwnCxgUBNsJ/prompt-engineering/prompts-concepts#tools), and output format.
* **Reproducibility**: All runs of the playground are [recorded as traces and experiments](../how-to-prompts/using-the-playground.md#playground-traces), unlocking annotations and evaluation.
* **Datasets**: Use [dataset examples](../how-to-prompts/test-a-prompt.md) as a fixture to run a prompt variant through its paces and to evaluate it systematically.
* **Prompt Management**: [Load, edit, and save prompts](prompt-management.md) directly within the playground.
To learn more about how to use the playground, see [using-the-playground.md](../how-to-prompts/using-the-playground.md "mention").