
list_models

Retrieve available AI models in a Langfuse project to analyze usage, costs, and performance across different models.

Instructions

List all available AI models in the Langfuse project.

Input Schema

| Name  | Required | Description                         | Default |
| ----- | -------- | ----------------------------------- | ------- |
| limit | No       | Maximum number of models to return  | 50      |
| page  | No       | Page number for pagination          |         |

Input Schema (JSON Schema)

```json
{
  "type": "object",
  "properties": {
    "limit": {
      "type": "number",
      "description": "Maximum number of models to return (default: 50)"
    },
    "page": {
      "type": "number",
      "description": "Page number for pagination"
    }
  }
}
```
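As an illustration, a `tools/call` request exercising both parameters might look like the following sketch. The argument values are made up, and the payload shape follows the general MCP `tools/call` convention rather than anything specific to this server:

```typescript
// Hypothetical tools/call payload for list_models; the argument values
// are illustrative, not taken from the server's documentation.
const request = {
  method: 'tools/call',
  params: {
    name: 'list_models',
    arguments: {
      limit: 25, // ask for at most 25 models instead of the default 50
      page: 2,   // fetch the second page of results
    },
  },
};

console.log(JSON.stringify(request.params.arguments));
```

Omitting `arguments` entirely is also valid, since both fields are optional.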

Implementation Reference

  • The main handler function that executes the list_models tool logic. It invokes the client's listModels method with provided arguments, formats the response as JSON text content, and handles errors appropriately.
```typescript
export async function listModels(
  client: LangfuseAnalyticsClient,
  args: ListModelsArgs = {}
) {
  try {
    const modelsData = await client.listModels(args);
    return {
      content: [
        {
          type: 'text' as const,
          text: JSON.stringify(modelsData, null, 2),
        },
      ],
    };
  } catch (error) {
    const errorMessage = error instanceof Error ? error.message : String(error);
    return {
      content: [
        {
          type: 'text' as const,
          text: `Error listing models: ${errorMessage}`,
        },
      ],
      isError: true,
    };
  }
}
```
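The error branch is easy to see in isolation. Below is a minimal sketch of just that formatting step, extracted into a hypothetical helper (`toErrorContent` is an invented name, not part of the server's code), showing how both `Error` instances and non-`Error` throws become text content:

```typescript
// Sketch of the handler's error branch: how a thrown value becomes the
// "Error listing models: ..." text content. toErrorContent is a
// hypothetical helper introduced here for illustration only.
function toErrorContent(error: unknown) {
  const errorMessage = error instanceof Error ? error.message : String(error);
  return {
    content: [
      { type: 'text' as const, text: `Error listing models: ${errorMessage}` },
    ],
    isError: true,
  };
}

console.log(toErrorContent(new Error('401 Unauthorized')).content[0].text);
// Error listing models: 401 Unauthorized
```

Returning an `isError: true` result instead of throwing keeps the failure inside the MCP response envelope, so the calling client can surface it as tool output.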
  • Zod schema defining the input parameters for the list_models tool: optional limit and page for pagination.
```typescript
export const listModelsSchema = z.object({
  limit: z.number().optional().describe('Maximum number of models to return (default: 50)'),
  page: z.number().optional().describe('Page number for pagination'),
});
```
  • src/index.ts:548-564 (registration)
    Tool registration in the allTools array used by listToolsRequestHandler, defining the name, description, and inputSchema for list_models.
```typescript
{
  name: 'list_models',
  description: 'List all available AI models in the Langfuse project.',
  inputSchema: {
    type: 'object',
    properties: {
      limit: {
        type: 'number',
        description: 'Maximum number of models to return (default: 50)',
      },
      page: {
        type: 'number',
        description: 'Page number for pagination',
      },
    },
  },
},
```
  • src/index.ts:1072-1075 (registration)
    Dispatch case in the CallToolRequestSchema handler that parses arguments using the schema and invokes the listModels handler function.
```typescript
case 'list_models': {
  const args = listModelsSchema.parse(request.params.arguments);
  return await listModels(this.client, args);
}
```
  • Underlying client method, used by the tool handler, that makes the HTTP request to the Langfuse API endpoint /api/public/models to fetch the list of models.
```typescript
async listModels(params: { limit?: number; page?: number }): Promise<any> {
  const queryParams = new URLSearchParams();
  if (params.limit) queryParams.append('limit', params.limit.toString());
  if (params.page) queryParams.append('page', params.page.toString());

  const authHeader =
    'Basic ' +
    Buffer.from(`${this.config.publicKey}:${this.config.secretKey}`).toString('base64');

  const response = await fetch(`${this.config.baseUrl}/api/public/models?${queryParams}`, {
    headers: {
      Authorization: authHeader,
    },
  });

  if (!response.ok) {
    await this.handleApiError(response, 'List Models');
  }

  return await response.json();
}
```
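The query-string and Basic-auth construction above can be sketched standalone with only Node built-ins. The base URL and keys below are placeholders, not real credentials, and the defaults assumed here (Langfuse Cloud base URL) are illustrative:

```typescript
// Sketch of the URL and Authorization header the client assembles.
// baseUrl, publicKey, and secretKey are placeholder values.
const baseUrl = 'https://cloud.langfuse.com';
const publicKey = 'pk-lf-example';
const secretKey = 'sk-lf-example';

const queryParams = new URLSearchParams();
queryParams.append('limit', '25');
queryParams.append('page', '2');

// Basic auth: base64 of "publicKey:secretKey", as the Langfuse public API expects.
const authHeader =
  'Basic ' + Buffer.from(`${publicKey}:${secretKey}`).toString('base64');

console.log(`${baseUrl}/api/public/models?${queryParams}`);
// https://cloud.langfuse.com/api/public/models?limit=25&page=2
```

Note that the truthiness checks in the client (`if (params.limit)`) mean omitted parameters are simply left off the query string, letting the API apply its own defaults.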
