
relay_models_list

List available AI models with capabilities and pricing to check valid model IDs before testing. Filter by provider to identify cost-effective options for workflow orchestration.

Instructions

List available AI models with capabilities and pricing. Use to check valid model IDs before testing. Cost shows provider pricing (OpenAI/Anthropic) - RelayPlane is BYOK, we don't charge for API usage.

Input Schema

Name        Required  Description                     Default
provider    No        Filter by provider (optional)   (none)

Implementation Reference

  • The core handler function for relay_models_list that constructs the list of available models by combining pricing data, metadata, and configuration status, filters by provider if specified, sorts the results, and returns them.
    export async function relayModelsList(
      input: RelayModelsListInput
    ): Promise<RelayModelsListResponse> {
      const models: ModelInfo[] = [];
    
      for (const [modelId, pricing] of Object.entries(PRICING)) {
        const [provider] = modelId.split(':');
        const metadata = MODEL_METADATA[modelId];
    
        // Filter by provider if specified
        if (input.provider && input.provider !== 'all' && provider !== input.provider) {
          continue;
        }
    
        if (metadata) {
          models.push({
            id: modelId,
            provider,
            name: metadata.name,
            capabilities: metadata.capabilities,
            contextWindow: metadata.contextWindow,
            inputCostPer1kTokens: pricing.input,
            outputCostPer1kTokens: pricing.output,
            configured: isProviderConfigured(provider),
          });
        }
      }
    
      // Sort by provider, then by name
      models.sort((a, b) => {
        if (a.provider !== b.provider) {
          return a.provider.localeCompare(b.provider);
        }
        return a.name.localeCompare(b.name);
      });
    
      return { models };
    }
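The handler's filter-and-sort logic can be exercised in isolation. The `PRICING` and `MODEL_METADATA` entries below are hypothetical stand-ins for the module's real tables, and `listModels` mirrors only the core loop:

```typescript
// Hypothetical stand-ins for the module's PRICING and MODEL_METADATA tables.
const PRICING: Record<string, { input: number; output: number }> = {
  'openai:gpt-4o': { input: 0.0025, output: 0.01 },
  'anthropic:claude-sonnet': { input: 0.003, output: 0.015 },
};

const MODEL_METADATA: Record<string, { name: string }> = {
  'openai:gpt-4o': { name: 'GPT-4o' },
  'anthropic:claude-sonnet': { name: 'Claude Sonnet' },
};

// Mirrors the handler's core loop: derive the provider from the model ID
// prefix, filter unless the filter is absent or 'all', then sort by
// provider and name.
function listModels(providerFilter?: string) {
  const models: { id: string; provider: string; name: string }[] = [];
  for (const [modelId] of Object.entries(PRICING)) {
    const [provider] = modelId.split(':');
    if (providerFilter && providerFilter !== 'all' && provider !== providerFilter) {
      continue;
    }
    const metadata = MODEL_METADATA[modelId];
    if (metadata) {
      models.push({ id: modelId, provider, name: metadata.name });
    }
  }
  models.sort((a, b) =>
    a.provider !== b.provider
      ? a.provider.localeCompare(b.provider)
      : a.name.localeCompare(b.name)
  );
  return models;
}

console.log(listModels('openai').map((m) => m.id)); // [ 'openai:gpt-4o' ]
```

Note that models with pricing but no metadata entry are silently skipped, so `PRICING` and `MODEL_METADATA` must stay in sync for a model to appear in the listing.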
  • Zod schema for input validation, used in the server to parse arguments before calling the handler.
    export const relayModelsListSchema = z.object({
      provider: z
        .enum(['openai', 'anthropic', 'google', 'xai', 'all'])
        .optional()
        .describe('Filter by provider'),
    });
  • MCP tool definition with name, description, and input schema structure, used for tool listing.
    export const relayModelsListDefinition = {
      name: 'relay_models_list',
      description:
        'List available AI models with capabilities and pricing. Use to check valid model IDs before testing. Cost shows provider pricing (OpenAI/Anthropic) - RelayPlane is BYOK, we don\'t charge for API usage.',
      inputSchema: {
        type: 'object' as const,
        properties: {
          provider: {
            type: 'string',
            enum: ['openai', 'anthropic', 'google', 'xai', 'all'],
            description: 'Filter by provider (optional)',
          },
        },
      },
    };
  • src/server.ts:59-67 (registration)
    Registration of the tool definition in the TOOLS array, returned by listTools MCP request.
    const TOOLS = [
      relayModelsListDefinition,
      relayRunDefinition,
      relayWorkflowRunDefinition,
      relayWorkflowValidateDefinition,
      relaySkillsListDefinition,
      relayRunsListDefinition,
      relayRunGetDefinition,
    ];
  • src/server.ts:109-112 (registration)
    Dispatch logic in the callTool MCP request handler that routes to the specific tool handler after schema validation.
case 'relay_models_list': {
  const parsed = relayModelsListSchema.parse(args || {});
  result = await relayModelsList(parsed);
  break;
}
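The dispatch step follows a common validate-then-route pattern, which can be sketched generically. The registry and types below are illustrative, not the server's actual code:

```typescript
// Illustrative validate-then-dispatch registry; names are hypothetical.
type Tool = {
  validate: (args: unknown) => unknown; // throws on invalid input
  handler: (input: any) => Promise<unknown>;
};

const registry: Record<string, Tool> = {
  relay_models_list: {
    validate: (args) => args ?? {},
    handler: async (input) => ({ models: [], filter: input.provider ?? 'all' }),
  },
};

async function callTool(name: string, args: unknown): Promise<unknown> {
  const tool = registry[name];
  if (!tool) {
    throw new Error(`Unknown tool: ${name}`);
  }
  // Validate first, as in relayModelsListSchema.parse(args || {}),
  // then route to the matching handler.
  const parsed = tool.validate(args);
  return tool.handler(parsed);
}
```

Validating before dispatch keeps each handler free of defensive input checks and gives callers a consistent error for malformed arguments.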

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses key behavioral traits: the tool returns pricing information and clarifies RelayPlane's BYOK (Bring Your Own Key) model, under which API usage is not charged. However, it doesn't mention response format, pagination, or error behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: the first states purpose and key attributes, the second clarifies pricing context. It's front-loaded with essential information and appropriately sized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple list tool with one optional parameter and no output schema, the description is reasonably complete. It covers purpose, usage context, and pricing model, though it could benefit from mentioning response structure or example output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the optional 'provider' parameter with its enum values. The description doesn't add any parameter-specific details beyond what the schema provides, meeting the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('available AI models') with specific attributes ('capabilities and pricing'). It distinguishes from siblings by focusing on model metadata rather than execution or workflow tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides explicit context for when to use ('to check valid model IDs before testing'), which helps guide selection. However, it doesn't mention when NOT to use this tool or name specific alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/RelayPlane/mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.