@arizeai/phoenix-mcp

Official
by Arize-ai

phoenix-support

Get expert guidance on using Phoenix for tracing AI applications, managing datasets and prompts, and conducting evaluations with OpenInference.

Instructions

Get help with Phoenix and OpenInference.

  • Tracing AI applications via OpenInference and OpenTelemetry

  • Phoenix datasets, experiments, and prompt management

  • Phoenix evals and annotations

Use this tool when you need assistance with Phoenix features, troubleshooting, or best practices.

Expected return: Expert guidance about how to use and integrate Phoenix

Input Schema

Name   Required  Description                                                          Default
query  Yes       Your question about Arize Phoenix, OpenInference, or related topics  (none)
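
Per the MCP specification, a client invokes this tool with a `tools/call` JSON-RPC request. The sketch below is illustrative only (the `id` and the query text are invented), written as a TypeScript object literal:

```typescript
// Illustrative JSON-RPC 2.0 request an MCP client would send to invoke
// the phoenix-support tool; the id and query values are made up.
const toolCallRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "phoenix-support",
    arguments: {
      query: "How do I trace a LangChain app with OpenInference?",
    },
  },
};
```

The server routes `params.arguments` to the handler shown under Implementation Reference below.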

Implementation Reference

  • Handler for the 'phoenix-support' tool. It calls the 'callRunLLMQuery' helper with the user query and returns the result formatted as MCP text content.
    async ({ query }) => {
      const result = await callRunLLMQuery({ query });
      return {
        content: [
          {
            type: "text",
            text: result,
          },
        ],
      };
    }
  • Input schema for the 'phoenix-support' tool using Zod: a string 'query' with description.
    {
      query: z
        .string()
        .describe(
          "Your question about Arize Phoenix, OpenInference, or related topics"
        ),
    },
  • Registration of the 'phoenix-support' tool on the MCP server within the initializeSupportTools function, including name, description reference, input schema, and inline handler. (The enclosing server.tool(...) call is shown for completeness.)
    server.tool(
      "phoenix-support",
      PHOENIX_SUPPORT_DESCRIPTION,
      {
        query: z
          .string()
          .describe(
            "Your question about Arize Phoenix, OpenInference, or related topics"
          ),
      },
      async ({ query }) => {
        const result = await callRunLLMQuery({ query });
        return {
          content: [
            {
              type: "text",
              text: result,
            },
          ],
        };
      }
    );
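  Once registered, a client listing tools would see an entry roughly like the following. This is an assumption about what the SDK generates from the Zod shape above, not output captured from the server; the description text is abbreviated:

    ```typescript
    // Approximate tools/list entry for the phoenix-support tool
    // (illustrative; the SDK derives inputSchema from the Zod shape).
    const phoenixSupportTool = {
      name: "phoenix-support",
      description: "Get help with Phoenix and OpenInference. ...",
      inputSchema: {
        type: "object",
        properties: {
          query: {
            type: "string",
            description:
              "Your question about Arize Phoenix, OpenInference, or related topics",
          },
        },
        required: ["query"],
      },
    };
    ```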
  • Core helper function used by the phoenix-support handler. Creates an MCP client to RunLLM server and calls their 'search' tool with the query, extracts and returns text response.
    export async function callRunLLMQuery({
      query,
    }: {
      query: string;
    }): Promise<string> {
      const client = await createRunLLMClient();
    
      // Call the RunLLM 'search' tool with the user's question
      const result = await client.callTool({
        name: "search",
        arguments: {
          query: query,
        },
      });
    
      // There's usually only one content item, but we'll handle multiple for safety
      if (result.content && Array.isArray(result.content)) {
        const textContent = result.content
          .filter((item) => item.type === "text")
          .map((item) => item.text)
          .join("\n");
    
        if (textContent) {
          return textContent;
        }
      }
    
      return "No response received from support";
    }
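  The content-extraction step at the end of callRunLLMQuery can be exercised in isolation. The following is a hypothetical standalone refactor (extractText and ContentItem are names introduced here for illustration, not part of the source):

    ```typescript
    // Hypothetical standalone version of the text-extraction step in
    // callRunLLMQuery: keep text items, join them, fall back to a notice.
    type ContentItem = { type: string; text?: string };

    function extractText(content: unknown): string {
      if (Array.isArray(content)) {
        const textContent = (content as ContentItem[])
          .filter((item) => item.type === "text")
          .map((item) => item.text)
          .join("\n");
        if (textContent) {
          return textContent;
        }
      }
      return "No response received from support";
    }
    ```

  Factoring the logic out this way also makes the fallback behavior (empty array, non-array input, no text items) easy to unit-test.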
  • Invocation of initializeSupportTools in the main server setup, which registers the phoenix-support tool among others.
    initializeSupportTools({ server });
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions that the tool provides 'Expert guidance' and 'assistance', but it does not disclose important behavioral traits, such as whether the operation is read-only, whether authentication is required, what rate limits apply, or what happens when the tool is invoked. For a tool with zero annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized. It uses bullet points for key areas, provides clear usage guidance, and states the expected return. Every sentence earns its place with no wasted words, and the information is front-loaded effectively.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that this is a help/guidance tool with a single parameter and no output schema, the description provides adequate context about what it does and when to use it. However, without annotations or an output schema, it should ideally say more about the nature of the guidance (e.g., whether it is interactive, whether it returns documentation links, and so on). The 'Expected return' section helps but is somewhat vague.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for the single 'query' parameter, so the schema already documents it well. The description doesn't add any meaningful parameter semantics beyond what the schema provides. The baseline of 3 is appropriate when the schema does the heavy lifting, though the description could have added context about query format or examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get help with Phoenix and OpenInference' followed by specific areas of assistance. It uses the verb 'Get help' with the resources 'Phoenix and OpenInference', making the purpose explicit. However, it doesn't specifically differentiate from sibling tools beyond the general help nature.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the tool: 'Use this tool when you need assistance with Phoenix features, troubleshooting, or best practices.' This gives explicit guidance about the appropriate situations. However, it doesn't mention when NOT to use it or name specific alternative tools from the sibling list.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
