count_tokens

Calculate token count for text using specific Gemini AI models to manage input length and API usage.

Instructions

Count tokens for a given text with a specific model

Input Schema

| Name  | Required | Description                     | Default          |
| ----- | -------- | ------------------------------- | ---------------- |
| text  | Yes      | Text to count tokens for        |                  |
| model | No       | Model to use for token counting | gemini-2.5-flash |
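A `tools/call` request built from this schema might look like the following sketch. This is a plain JSON-RPC 2.0 illustration; the example text and `id` are arbitrary, not values mandated by the server.

```typescript
// Illustrative MCP "tools/call" request for the count_tokens tool.
// The envelope follows JSON-RPC 2.0; the arguments match the input schema above.
const request = {
  jsonrpc: '2.0' as const,
  id: 1,
  method: 'tools/call',
  params: {
    name: 'count_tokens',
    arguments: {
      text: 'Hello, world!',
      model: 'gemini-2.5-flash', // optional; this is also the default
    },
  },
};

console.log(JSON.stringify(request, null, 2));
```

Omitting `model` is valid, since only `text` is required; the handler then falls back to `gemini-2.5-flash`.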

Implementation Reference

  • The primary handler function for the 'count_tokens' tool. It uses the Google Generative AI SDK to count the tokens in the provided text with the specified model (default: gemini-2.5-flash) and returns the token count in the MCP response format.

```typescript
private async countTokens(id: any, args: any): Promise<MCPResponse> {
  try {
    const model = args.model || 'gemini-2.5-flash';
    const result = await this.genAI.models.countTokens({
      model,
      contents: [{ parts: [{ text: args.text }], role: 'user' }]
    });
    return {
      jsonrpc: '2.0',
      id,
      result: {
        content: [{ type: 'text', text: `Token count: ${result.totalTokens}` }],
        metadata: { tokenCount: result.totalTokens, model }
      }
    };
  } catch (error) {
    return {
      jsonrpc: '2.0',
      id,
      error: {
        code: -32603,
        message: error instanceof Error ? error.message : 'Internal error'
      }
    };
  }
}
```
  • Input schema specification for the count_tokens tool, defining the required 'text' parameter and an optional 'model' whose enum is drawn from the available GEMINI_MODELS.

```typescript
inputSchema: {
  type: 'object',
  properties: {
    text: { type: 'string', description: 'Text to count tokens for' },
    model: {
      type: 'string',
      description: 'Model to use for token counting',
      enum: Object.keys(GEMINI_MODELS),
      default: 'gemini-2.5-flash'
    }
  },
  required: ['text']
}
```
  • Registration of the 'count_tokens' tool in the getAvailableTools() method's return array, including name, description, and input schema.

```typescript
{
  name: 'count_tokens',
  description: 'Count tokens for a given text with a specific model',
  inputSchema: {
    type: 'object',
    properties: {
      text: { type: 'string', description: 'Text to count tokens for' },
      model: {
        type: 'string',
        description: 'Model to use for token counting',
        enum: Object.keys(GEMINI_MODELS),
        default: 'gemini-2.5-flash'
      }
    },
    required: ['text']
  }
}
```
  • Dispatch case in the handleToolCall() method that routes 'count_tokens' calls to the countTokens handler.

```typescript
case 'count_tokens':
  return await this.countTokens(request.id, args);
```
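To see how these pieces fit together end to end, here is a self-contained sketch of the handler flow with the Google SDK replaced by a stub. The stub (`genAIStub`), its fixed `totalTokens` of 4, and the simplified `MCPResponse` type are illustrative assumptions, not part of the actual server.

```typescript
// Simplified MCP response type, modeled on the handler's return shape.
type MCPResponse = {
  jsonrpc: '2.0';
  id: unknown;
  result?: {
    content: { type: 'text'; text: string }[];
    metadata?: Record<string, unknown>;
  };
  error?: { code: number; message: string };
};

// Stub standing in for the Google Generative AI SDK; always reports 4 tokens.
const genAIStub = {
  models: {
    countTokens: async (_req: { model: string; contents: unknown[] }) => ({
      totalTokens: 4,
    }),
  },
};

async function countTokens(
  id: unknown,
  args: { text: string; model?: string },
): Promise<MCPResponse> {
  try {
    const model = args.model ?? 'gemini-2.5-flash';
    const result = await genAIStub.models.countTokens({
      model,
      contents: [{ parts: [{ text: args.text }], role: 'user' }],
    });
    return {
      jsonrpc: '2.0',
      id,
      result: {
        content: [{ type: 'text', text: `Token count: ${result.totalTokens}` }],
        metadata: { tokenCount: result.totalTokens, model },
      },
    };
  } catch (error) {
    return {
      jsonrpc: '2.0',
      id,
      error: {
        code: -32603,
        message: error instanceof Error ? error.message : 'Internal error',
      },
    };
  }
}

countTokens(1, { text: 'Hello, world!' }).then((res) =>
  console.log(res.result?.content[0].text), // → "Token count: 4" with this stub
);
```

Swapping `genAIStub` for a real `GoogleGenAI` client instance yields the behavior described above, with `totalTokens` computed by the chosen Gemini model.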

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/aliargun/mcp-server-gemini'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.