Glama

llm-token-tracker

track_usage

Monitor and record token consumption for AI API calls from OpenAI and Anthropic models to track usage patterns and manage costs.

Instructions

Track token usage for an AI API call

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| input_tokens | Yes | Input tokens used | — |
| model | Yes | Model name | — |
| output_tokens | Yes | Output tokens used | — |
| provider | Yes | AI provider (`openai` or `anthropic`) | — |
| user_id | No | Optional user ID | — |

Input Schema (JSON Schema)

{
  "type": "object",
  "properties": {
    "input_tokens": { "description": "Input tokens used", "type": "number" },
    "model": { "description": "Model name", "type": "string" },
    "output_tokens": { "description": "Output tokens used", "type": "number" },
    "provider": { "description": "AI provider", "enum": ["openai", "anthropic"], "type": "string" },
    "user_id": { "description": "Optional user ID", "type": "string" }
  },
  "required": ["provider", "model", "input_tokens", "output_tokens"]
}
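A caller can check arguments against this schema before issuing the tool call. The sketch below is illustrative only: the `validate_track_usage_args` helper is hypothetical and not part of llm-token-tracker; it simply mirrors the required fields, types, and `provider` enum from the schema above.

```python
# Hypothetical client-side validation of track_usage arguments,
# mirroring the input schema above. Not part of llm-token-tracker.

REQUIRED = {"provider", "model", "input_tokens", "output_tokens"}
PROVIDERS = {"openai", "anthropic"}  # the schema's enum for "provider"

def validate_track_usage_args(args: dict) -> list[str]:
    """Return a list of schema violations; an empty list means valid."""
    errors = []
    missing = REQUIRED - args.keys()
    if missing:
        errors.append(f"missing required fields: {sorted(missing)}")
    if "provider" in args and args["provider"] not in PROVIDERS:
        errors.append("provider must be 'openai' or 'anthropic'")
    for field in ("input_tokens", "output_tokens"):
        # JSON Schema "number" accepts ints and floats (bool is excluded here)
        if field in args and (isinstance(args[field], bool)
                              or not isinstance(args[field], (int, float))):
            errors.append(f"{field} must be a number")
    if "user_id" in args and not isinstance(args["user_id"], str):
        errors.append("user_id must be a string")
    return errors

# Example arguments (model name chosen for illustration):
sample = {
    "provider": "openai",
    "model": "gpt-4o",
    "input_tokens": 1200,
    "output_tokens": 350,
    "user_id": "user-123",  # optional
}
print(validate_track_usage_args(sample))  # []
```

For stricter checking, the same schema could be fed to a full JSON Schema validator instead of a hand-rolled helper.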

Related Tools

    MCP directory API

    All information about listed MCP servers is available via our MCP directory API. For example:

    curl -X GET 'https://glama.ai/api/mcp/v1/servers/wn01011/llm-token-tracker'

    If you have feedback or need assistance with the MCP directory API, please join our Discord server.