
@arizeai/phoenix-mcp

Official, by Arize-ai

get-prompt-version

Retrieve a specific prompt version by its ID to access its template, model configuration, and invocation parameters for consistent AI interactions.

Instructions

Get a specific version of a prompt using its version ID. Returns the prompt version with its template, model configuration, and invocation parameters.

Example usage: Get a specific prompt version with ID 'promptversionid1234'

Expected return: Prompt version object with template and configuration. Example:

    {
      "description": "Initial version",
      "model_provider": "OPENAI",
      "model_name": "gpt-3.5-turbo",
      "template": {
        "type": "chat",
        "messages": [
          {
            "role": "system",
            "content": "You are an expert summarizer. Create clear, concise bullet points highlighting the key information."
          },
          {
            "role": "user",
            "content": "Please summarize the following {{topic}} article:\n\n{{article}}"
          }
        ]
      },
      "template_type": "CHAT",
      "template_format": "MUSTACHE",
      "invocation_parameters": {
        "type": "openai",
        "openai": {}
      },
      "id": "promptversionid1234"
    }

Input Schema

Name                Required    Description        Default
prompt_version_id   Yes         (none provided)    (none)
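
For reference, the JSON Schema view of the same input, as the Zod definition under Implementation Reference would serialize (a straightforward reconstruction, not copied from the page):

    {
      "type": "object",
      "properties": {
        "prompt_version_id": {
          "type": "string"
        }
      },
      "required": ["prompt_version_id"]
    }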

Implementation Reference

  • Registration of the 'get-prompt-version' tool on the MCP server, including the handler function that retrieves a specific prompt version by ID using the PhoenixClient API and returns the JSON response.
    server.tool(
      "get-prompt-version",
      GET_PROMPT_VERSION_DESCRIPTION,
      getPromptVersionSchema.shape,
      async ({ prompt_version_id }) => {
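        // Fetch the requested prompt version from the Phoenix REST API,
        // binding the ID as a path parameter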
        const response = await client.GET(
          "/v1/prompt_versions/{prompt_version_id}",
          {
            params: {
              path: {
                prompt_version_id,
              },
            },
          }
        );
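        // Return the payload to the MCP client as pretty-printed JSON text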
        return {
          content: [
            {
              type: "text",
              text: JSON.stringify(response.data, null, 2),
            },
          ],
        };
      }
    );
  • Zod schema defining the input for 'get-prompt-version': requires a 'prompt_version_id' string.
    export const getPromptVersionSchema = z.object({
      prompt_version_id: z.string(),
    });
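    A quick illustration of how this schema behaves with zod (the example ID is hypothetical):

      // Valid input parses cleanly
      getPromptVersionSchema.parse({ prompt_version_id: "promptversionid1234" });

      // Missing or non-string IDs are rejected
      const result = getPromptVersionSchema.safeParse({});
      console.log(result.success); // false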
  • Invocation of initializePromptTools which registers all prompt tools including 'get-prompt-version' on the MCP server.
    initializePromptTools({ client, server });
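
For context, a minimal sketch of this wiring end to end, assuming the MCP TypeScript SDK's McpServer and the createClient factory from @arizeai/phoenix-client; the base URL, server metadata, and the import path for initializePromptTools are illustrative assumptions, not taken from the source:

    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
    import { createClient } from "@arizeai/phoenix-client";
    import { initializePromptTools } from "./promptTools.js"; // hypothetical path

    // Phoenix REST client pointed at a running Phoenix instance (URL is illustrative)
    const client = createClient({
      options: { baseUrl: "http://localhost:6006" },
    });

    // MCP server that will expose the prompt tools
    const server = new McpServer({ name: "phoenix-mcp", version: "1.0.0" });

    // Registers get-prompt-version alongside the other prompt tools
    initializePromptTools({ client, server });

    // Serve over stdio so MCP clients can connect
    await server.connect(new StdioServerTransport());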
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the read-only nature ('Get'), specifies the return content ('Returns the prompt version with its template, model configuration, and invocation parameters'), and provides a detailed example of the expected return structure. This covers key behavioral aspects like operation type and output format, though it doesn't mention potential errors, rate limits, or authentication needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately front-loaded with the core functionality, but the extensive example return object makes it verbose. While the example is helpful, it occupies significant space; summarizing the return structure rather than embedding a full JSON example would be more concise. Every sentence earns its place, but the overall structure is slightly bloated.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (1 parameter, no output schema, no annotations), the description is largely complete: it explains the purpose, parameter usage, and return content with an example. However, it lacks details on error cases, authentication, or rate limits, which would enhance completeness for a read operation in a multi-tool environment.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage, so the description must compensate. It adds meaningful context by explaining that the parameter 'prompt_version_id' is used to 'Get a specific version of a prompt', and the example usage clarifies it's an ID string like 'promptversionid1234'. This provides essential semantics beyond the bare schema, though it doesn't detail format constraints or validation rules.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get a specific version of a prompt') and identifies the resource ('using its version ID'), distinguishing it from siblings like 'get-latest-prompt' or 'get-prompt-by-identifier' which retrieve prompts differently. The verb 'Get' combined with the version-specific retrieval mechanism provides precise purpose differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying 'using its version ID' and providing an example with a version ID, suggesting this tool is for retrieving a known specific version rather than latest or by other identifiers. However, it does not explicitly state when NOT to use it or name alternatives like 'get-latest-prompt' or 'get-prompt-version-by-tag', leaving some guidance implicit rather than explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
