get_cost_analysis

Analyze AI usage costs by model, user, and daily trends to identify spending patterns and optimize resource allocation.

Instructions

Specialized cost breakdowns by model, user, and daily trends.

Input Schema

Name                    Required  Description                    Default
from                    Yes       Start timestamp (ISO 8601)     -
to                      Yes       End timestamp (ISO 8601)       -
environment             No        Optional environment filter    -
includeModelBreakdown   No        Include breakdown by model     true
includeUserBreakdown    No        Include breakdown by user      true
includeDailyBreakdown   No        Include daily breakdown        true
limit                   No        Maximum items per breakdown    20

Input Schema (JSON Schema)

{ "properties": { "environment": { "description": "Optional environment filter", "type": "string" }, "from": { "description": "Start timestamp (ISO 8601)", "format": "date-time", "type": "string" }, "includeDailyBreakdown": { "description": "Include daily breakdown (default: true)", "type": "boolean" }, "includeModelBreakdown": { "description": "Include breakdown by model (default: true)", "type": "boolean" }, "includeUserBreakdown": { "description": "Include breakdown by user (default: true)", "type": "boolean" }, "limit": { "description": "Maximum items per breakdown (default: 20)", "maximum": 100, "minimum": 5, "type": "number" }, "to": { "description": "End timestamp (ISO 8601)", "format": "date-time", "type": "string" } }, "required": [ "from", "to" ], "type": "object" }

Implementation Reference

  • The main handler function that executes the get_cost_analysis tool. It fetches daily metrics from Langfuse, computes the total cost plus the model, user, and daily breakdowns, then returns a formatted JSON response (the shape of that result is sketched after this list).
    export async function getCostAnalysis(
      client: LangfuseAnalyticsClient,
      args: z.infer<typeof getCostAnalysisSchema>
    ) {
      const filters: any[] = [];
      if (args.environment) {
        filters.push({
          column: 'environment',
          operator: 'equals',
          value: args.environment,
          type: 'string',
        });
      }

      // Calculate total cost from daily data (which works correctly)
      let totalCost = 0;
      let dailyData: any[] = [];

      // Get daily data first since it's working correctly
      try {
        const dailyResponse = await client.getDailyMetrics({
          tags: args.environment ? [`environment:${args.environment}`] : undefined,
        });

        if (dailyResponse.data && Array.isArray(dailyResponse.data)) {
          // Filter by date range and calculate total
          const fromDate = new Date(args.from);
          const toDate = new Date(args.to);

          dailyData = dailyResponse.data.filter((day: any) => {
            const dayDate = new Date(day.date);
            return dayDate >= fromDate && dayDate <= toDate;
          });

          // Calculate total cost from working daily data
          totalCost = dailyData.reduce((sum: number, day: any) => {
            return sum + (day.totalCost || 0);
          }, 0);
        }
      } catch (error) {
        console.error('Error getting daily data for total calculation:', error);
      }

      const result: CostAnalysis = {
        projectId: client.getProjectId(),
        from: args.from,
        to: args.to,
        totalCost,
        breakdown: {},
      };

      // Model breakdown - extract from working daily data
      if (args.includeModelBreakdown) {
        try {
          const modelMap = new Map<string, {
            cost: number;
            tokens: number;
            observations: number;
          }>();

          // Aggregate model data from daily breakdown (which works correctly)
          dailyData.forEach((day: any) => {
            if (day.usage && Array.isArray(day.usage)) {
              day.usage.forEach((usage: any) => {
                const modelName = usage.model || 'unknown';
                const existing = modelMap.get(modelName) || { cost: 0, tokens: 0, observations: 0 };
                modelMap.set(modelName, {
                  cost: existing.cost + (usage.totalCost || 0),
                  tokens: existing.tokens + (usage.totalUsage || usage.inputUsage + usage.outputUsage || 0),
                  observations: existing.observations + (usage.countObservations || 0),
                });
              });
            }
          });

          const modelBreakdown = Array.from(modelMap.entries()).map(([model, data]) => ({
            model,
            cost: data.cost,
            tokens: data.tokens,
            observations: data.observations,
            percentage: totalCost > 0 ? Math.round((data.cost / totalCost) * 100 * 100) / 100 : 0,
          }));

          result.breakdown.byModel = modelBreakdown
            .sort((a, b) => b.cost - a.cost)
            .slice(0, args.limit);
        } catch (error) {
          console.error('Error building model breakdown from daily data:', error);
          result.breakdown.byModel = [];
        }
      }

      // User breakdown
      if (args.includeUserBreakdown) {
        try {
          const userResponse = await client.getMetrics({
            view: 'traces',
            from: args.from,
            to: args.to,
            metrics: [
              { measure: 'totalCost', aggregation: 'sum' },
              { measure: 'totalTokens', aggregation: 'sum' },
              { measure: 'count', aggregation: 'count' },
            ],
            dimensions: [{ field: 'userId' }],
            filters,
          });

          const userBreakdown: Array<{
            userId: string;
            cost: number;
            tokens: number;
            traces: number;
            percentage: number;
          }> = [];

          if (userResponse.data && Array.isArray(userResponse.data)) {
            userResponse.data.forEach((row: any, index: number) => {
              if (row.userId) {
                // Use correct field names from metrics API response
                const cost = row.totalCost_sum || 0;
                userBreakdown.push({
                  userId: row.userId,
                  cost,
                  tokens: row.totalTokens_sum || 0,
                  traces: row.count_count || 0,
                  percentage: totalCost > 0 ? Math.round((cost / totalCost) * 100 * 100) / 100 : 0,
                });
              }
            });
          }

          result.breakdown.byUser = userBreakdown
            .sort((a, b) => b.cost - a.cost)
            .slice(0, args.limit);
        } catch (error) {
          console.error('Error fetching user breakdown:', error);
          result.breakdown.byUser = [];
        }
      }

      // Daily breakdown - reuse the daily data we already fetched
      if (args.includeDailyBreakdown) {
        try {
          const dailyBreakdown = dailyData.map((day: any) => {
            // Calculate total tokens from usage breakdown
            let totalTokens = 0;
            if (day.usage && Array.isArray(day.usage)) {
              totalTokens = day.usage.reduce((sum: number, usage: any) => {
                return sum + (usage.totalUsage || usage.inputUsage + usage.outputUsage || 0);
              }, 0);
            }

            return {
              date: day.date,
              cost: day.totalCost || 0,
              tokens: totalTokens,
              traces: day.countTraces || 0,
            };
          });

          result.breakdown.byDay = dailyBreakdown.sort((a, b) =>
            new Date(a.date).getTime() - new Date(b.date).getTime()
          );
        } catch (error) {
          console.error('Error building daily breakdown:', error);
          result.breakdown.byDay = [];
        }
      }

      return {
        content: [
          {
            type: 'text' as const,
            text: JSON.stringify(result, null, 2),
          },
        ],
      };
    }
  • Zod schema defining the input parameters for the get_cost_analysis tool, used for validation in the dispatcher.
    export const getCostAnalysisSchema = z.object({
      from: z.string().datetime(),
      to: z.string().datetime(),
      environment: z.string().optional(),
      includeModelBreakdown: z.boolean().default(true),
      includeUserBreakdown: z.boolean().default(true),
      includeDailyBreakdown: z.boolean().default(true),
      limit: z.number().min(5).max(100).default(20),
    });
  • src/index.ts:455-496 (registration)
    Tool registration object in the listTools handler's allTools array, providing name, description, and input schema for MCP tool discovery.
    {
      name: 'get_cost_analysis',
      description: 'Specialized cost breakdowns by model, user, and daily trends.',
      inputSchema: {
        type: 'object',
        properties: {
          from: {
            type: 'string',
            format: 'date-time',
            description: 'Start timestamp (ISO 8601)',
          },
          to: {
            type: 'string',
            format: 'date-time',
            description: 'End timestamp (ISO 8601)',
          },
          environment: {
            type: 'string',
            description: 'Optional environment filter',
          },
          includeModelBreakdown: {
            type: 'boolean',
            description: 'Include breakdown by model (default: true)',
          },
          includeUserBreakdown: {
            type: 'boolean',
            description: 'Include breakdown by user (default: true)',
          },
          includeDailyBreakdown: {
            type: 'boolean',
            description: 'Include daily breakdown (default: true)',
          },
          limit: {
            type: 'number',
            minimum: 5,
            maximum: 100,
            description: 'Maximum items per breakdown (default: 20)',
          },
        },
        required: ['from', 'to'],
      },
    },
  • src/index.ts:1051-1054 (registration)
    Dispatch case in the CallToolRequestSchema handler that validates input with the schema and invokes the handler function.
    case 'get_cost_analysis': {
      const args = getCostAnalysisSchema.parse(request.params.arguments);
      return await getCostAnalysis(this.client, args);
    }
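
The CostAnalysis type referenced by the handler is not reproduced on this page. The sketch below reconstructs its shape from the fields the handler actually populates, so treat it as an approximation rather than the server's exact type definition.

// Approximate shape of the handler's result, inferred from the code above.
// Field names mirror what getCostAnalysis assigns; the real CostAnalysis
// interface in the server source may differ in detail.
interface CostAnalysis {
  projectId: string;
  from: string;                 // ISO 8601, echoed from the input
  to: string;                   // ISO 8601, echoed from the input
  totalCost: number;            // summed from Langfuse daily metrics
  breakdown: {
    byModel?: Array<{
      model: string;
      cost: number;
      tokens: number;
      observations: number;
      percentage: number;       // share of totalCost, rounded to 2 decimals
    }>;
    byUser?: Array<{
      userId: string;
      cost: number;
      tokens: number;
      traces: number;
      percentage: number;
    }>;
    byDay?: Array<{
      date: string;
      cost: number;
      tokens: number;
      traces: number;
    }>;
  };
}

The handler serializes this object with JSON.stringify(result, null, 2) and returns it as a single text content item, which is what an MCP client receives when it calls the tool.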
