
llm-token-tracker

track_usage

Monitor AI API token consumption by recording provider, model, and input/output token counts to track usage patterns and costs.

Instructions

Track token usage for an AI API call

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| provider | Yes | AI provider (`openai`, `anthropic`, or `gemini`) | |
| model | Yes | Model name | |
| input_tokens | Yes | Input tokens used | |
| output_tokens | Yes | Output tokens used | |
| user_id | No | Optional user ID | `current-session` |
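
For illustration, the arguments of a call to this tool might look like the following (the token counts and user ID here are invented example values; `user_id` may be omitted, in which case the handler falls back to the session-scoped default shown in the table):

```json
{
  "provider": "anthropic",
  "model": "claude-3-5-sonnet",
  "input_tokens": 1200,
  "output_tokens": 345,
  "user_id": "demo-user"
}
```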

Implementation Reference

  • The main handler function that executes the 'track_usage' tool logic. It tracks input/output tokens for a given provider/model/user using the TokenTracker, calculates session cost, and returns a formatted text response.
```typescript
private trackUsage(args: any) {
  const {
    provider,
    model,
    input_tokens,
    output_tokens,
    user_id = 'current-session'
  } = args;

  const trackingId = this.tracker.startTracking(user_id);
  this.tracker.endTracking(trackingId, {
    provider: provider as 'openai' | 'anthropic' | 'gemini',
    model,
    inputTokens: input_tokens,
    outputTokens: output_tokens,
    totalTokens: input_tokens + output_tokens
  });

  const usage = this.tracker.getUserUsage(user_id);
  const totalTokens = input_tokens + output_tokens;
  const cost = usage?.totalCost || 0;

  return {
    content: [
      {
        type: 'text',
        text:
          `✅ Tracked ${totalTokens.toLocaleString()} tokens for ${model}\n` +
          `💰 Session Cost: ${formatCost(cost)}\n` +
          `📊 Total: ${usage?.totalTokens.toLocaleString() || 0} tokens`
      }
    ]
  };
}
```
  • Input schema definition for the 'track_usage' tool, specifying required properties like provider, model, input_tokens, output_tokens.
```typescript
inputSchema: {
  type: 'object',
  properties: {
    provider: {
      type: 'string',
      enum: ['openai', 'anthropic', 'gemini'],
      description: 'AI provider'
    },
    model: { type: 'string', description: 'Model name' },
    input_tokens: { type: 'number', description: 'Input tokens used' },
    output_tokens: { type: 'number', description: 'Output tokens used' },
    user_id: { type: 'string', description: 'Optional user ID' }
  },
  required: ['provider', 'model', 'input_tokens', 'output_tokens']
}
```
  • Registers the 'track_usage' tool in the MCP server's listTools response, including name, description, and input schema.
```typescript
{
  name: 'track_usage',
  description: 'Track token usage for an AI API call',
  inputSchema: {
    type: 'object',
    properties: {
      provider: {
        type: 'string',
        enum: ['openai', 'anthropic', 'gemini'],
        description: 'AI provider'
      },
      model: { type: 'string', description: 'Model name' },
      input_tokens: { type: 'number', description: 'Input tokens used' },
      output_tokens: { type: 'number', description: 'Output tokens used' },
      user_id: { type: 'string', description: 'Optional user ID' }
    },
    required: ['provider', 'model', 'input_tokens', 'output_tokens']
  }
},
```
  • Dispatches 'track_usage' tool calls to the trackUsage handler method in the CallToolRequestHandler switch statement.
```typescript
case 'track_usage':
  return this.trackUsage(request.params.arguments);
```
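
As a standalone illustration of the arithmetic the handler performs, the sketch below reimplements its summary line outside the server class: the token total is simply input plus output, and the session cost falls back to 0 when no usage has been recorded. `formatCost` is not shown in the source, so the six-decimal formatter here is an assumption, not the real helper.

```typescript
// Hypothetical stand-in for the formatCost helper referenced by the
// handler (the actual implementation is not shown on this page).
function formatCost(cost: number): string {
  return `$${cost.toFixed(6)}`;
}

// Mirrors the handler's summary: total tokens = input + output, with the
// cost defaulting to 0 when the tracker has no usage for the session.
function summarizeUsage(
  model: string,
  inputTokens: number,
  outputTokens: number,
  sessionCost?: number
): string {
  const totalTokens = inputTokens + outputTokens;
  const cost = sessionCost ?? 0;
  return `Tracked ${totalTokens} tokens for ${model} (session cost ${formatCost(cost)})`;
}

console.log(summarizeUsage('gpt-4', 1200, 345, 0.00231));
// → Tracked 1545 tokens for gpt-4 (session cost $0.002310)
```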

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/wn01011/llm-token-tracker'
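
The same endpoint can also be queried from TypeScript with the built-in `fetch` API (Node 18+). The response schema is not documented on this page, so this sketch logs the raw body rather than assuming any fields; `serverUrl` is a hypothetical helper built from the curl example above.

```typescript
// Base path taken from the curl example above.
const BASE = 'https://glama.ai/api/mcp/v1/servers';

// Hypothetical URL builder, not part of any official client library.
function serverUrl(author: string, name: string): string {
  return `${BASE}/${author}/${name}`;
}

async function fetchServerEntry(): Promise<void> {
  const res = await fetch(serverUrl('wn01011', 'llm-token-tracker'));
  console.log(res.status);
  console.log(await res.text()); // raw body; response shape not assumed here
}
```

Call `fetchServerEntry()` to retrieve this server's directory entry.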

If you have feedback or need assistance with the MCP directory API, please join our Discord server.