# estimate_cost
Calculate AI request costs before execution to manage expenses. Estimates USD based on prompt length and model selection.
## Instructions
Estimate the cost of an AI request before making it. Returns estimated USD cost based on prompt length and model.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | The prompt to estimate cost for | |
| model | No | Model to use for estimation | claude-sonnet-4-20250514 |
| max_tokens | No | Expected max tokens in response | 1024 |
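As a quick illustration of how the schema defaults apply, the sketch below fills in `model` and `max_tokens` when a caller supplies only the required `prompt`. The `withDefaults` helper is hypothetical (not part of the server); it only mirrors the defaults listed above.

```typescript
// Hypothetical helper illustrating the schema defaults above.
type EstimateCostArgs = {
  prompt: string;      // required
  model?: string;      // defaults to claude-sonnet-4-20250514
  max_tokens?: number; // defaults to 1024
};

function withDefaults(args: EstimateCostArgs) {
  return {
    prompt: args.prompt,
    model: args.model ?? "claude-sonnet-4-20250514",
    max_tokens: args.max_tokens ?? 1024,
  };
}

const resolved = withDefaults({ prompt: "Summarize this paragraph." });
// resolved.model === "claude-sonnet-4-20250514", resolved.max_tokens === 1024
```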
## Implementation Reference
- src/index.ts:298-322 (handler) — the handler for the `estimate_cost` tool: extracts the arguments (`prompt`, `model`, `max_tokens`), calls `estimateCostFromPrompt` to calculate the estimated cost, and returns a formatted response with the estimate.

```typescript
case "estimate_cost": {
  const { prompt, model, max_tokens } = args as any;
  const cost = estimateCostFromPrompt(
    prompt,
    model || "claude-sonnet-4-20250514",
    max_tokens || 1024
  );
  return {
    content: [
      {
        type: "text",
        text: [
          `💵 Cost Estimate`,
          `Model: ${model || "claude-sonnet-4-20250514"}`,
          `Prompt length: ~${Math.ceil(prompt.length / 4)} tokens`,
          `Max output: ${max_tokens || 1024} tokens`,
          `Estimated cost: ~$${cost.toFixed(6)} USDC`,
          ``,
          `Note: Cached responses get 50% discount`,
        ].join("\n"),
      },
    ],
  };
}
```

- src/index.ts:77-101 (schema) — the schema definition for the `estimate_cost` tool: input parameters (`prompt`, `model`, `max_tokens`) with their types, descriptions, and defaults. Only `prompt` is required.
```typescript
{
  name: "estimate_cost",
  description:
    "Estimate the cost of an AI request before making it. Returns estimated USD cost based on prompt length and model.",
  inputSchema: {
    type: "object",
    properties: {
      prompt: {
        type: "string",
        description: "The prompt to estimate cost for",
      },
      model: {
        type: "string",
        description: "Model to use for estimation",
        default: "claude-sonnet-4-20250514",
      },
      max_tokens: {
        type: "number",
        description: "Expected max tokens in response",
        default: 1024,
      },
    },
    required: ["prompt"],
  },
},
```

- src/index.ts:214-227 (helper) — the `estimateCostFromPrompt` helper: estimates input tokens from prompt length (~4 characters per token), assumes output tokens are half of `max_tokens`, and calls `estimateCostFromTokens` to calculate the final cost.
```typescript
function estimateCostFromPrompt(
  prompt: string,
  model: string,
  maxTokens: number
): number {
  // Rough estimate: 4 chars per token
  const estimatedInputTokens = Math.ceil(prompt.length / 4);
  const estimatedOutputTokens = maxTokens / 2; // assume half of max
  return estimateCostFromTokens(
    model,
    estimatedInputTokens,
    estimatedOutputTokens
  );
}
```

- src/index.ts:198-212 (helper) — the `estimateCostFromTokens` helper: calculates the USD cost from per-million-token pricing for each supported model (Claude Sonnet variants, GPT-4 Turbo); the rates include a 20% markup.
```typescript
function estimateCostFromTokens(
  model: string,
  inputTokens: number,
  outputTokens: number
): number {
  // Pricing per 1M tokens (with 20% markup)
  const pricing: Record<string, { input: number; output: number }> = {
    "claude-sonnet-4-20250514": { input: 3.6, output: 18.0 },
    "claude-3-5-sonnet-20241022": { input: 3.6, output: 18.0 },
    "gpt-4-turbo": { input: 12.0, output: 36.0 },
  };
  const p = pricing[model] || { input: 3.6, output: 18.0 };
  return (inputTokens * p.input + outputTokens * p.output) / 1_000_000;
}
```

- src/index.ts:30-112 (registration) — the `tools` array containing all tool definitions, registering `estimate_cost` alongside `ask_ai`, `check_balance`, and `list_models`.
```typescript
const tools: Tool[] = [
  {
    name: "ask_ai",
    description:
      "Send a prompt to an AI model via SolanaProx. Costs are automatically deducted from your Solana wallet balance in USDC. Supports Claude and GPT-4 models. Use this for any AI inference task.",
    inputSchema: {
      type: "object",
      properties: {
        prompt: {
          type: "string",
          description: "The prompt or question to send to the AI model",
        },
        model: {
          type: "string",
          description:
            "AI model to use. Options: claude-sonnet-4-20250514 (default), gpt-4-turbo",
          default: "claude-sonnet-4-20250514",
        },
        max_tokens: {
          type: "number",
          description: "Maximum tokens in response (default: 1024, max: 4096)",
          default: 1024,
        },
        system: {
          type: "string",
          description: "Optional system prompt to set context for the AI",
        },
      },
      required: ["prompt"],
    },
  },
  {
    name: "check_balance",
    description:
      "Check your current SolanaProx balance. Returns available USDC and SOL balance that can be used for AI requests.",
    inputSchema: {
      type: "object",
      properties: {
        wallet: {
          type: "string",
          description:
            "Solana wallet address to check. Defaults to configured wallet.",
        },
      },
      required: [],
    },
  },
  {
    name: "estimate_cost",
    description:
      "Estimate the cost of an AI request before making it. Returns estimated USD cost based on prompt length and model.",
    inputSchema: {
      type: "object",
      properties: {
        prompt: {
          type: "string",
          description: "The prompt to estimate cost for",
        },
        model: {
          type: "string",
          description: "Model to use for estimation",
          default: "claude-sonnet-4-20250514",
        },
        max_tokens: {
          type: "number",
          description: "Expected max tokens in response",
          default: 1024,
        },
      },
      required: ["prompt"],
    },
  },
  {
    name: "list_models",
    description: "List all available AI models on SolanaProx with their pricing.",
    inputSchema: {
      type: "object",
      properties: {},
      required: [],
    },
  },
];
```
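Putting the two helpers together, the arithmetic can be traced end to end. The sketch below mirrors the estimation logic for a hypothetical 100-character prompt with the default model and `max_tokens` (the sample prompt length is an assumption for illustration, not from the source):

```typescript
// Mirror of estimateCostFromPrompt / estimateCostFromTokens, traced for a
// hypothetical 100-character prompt with the schema defaults.
const PRICE_PER_MTOK = { input: 3.6, output: 18.0 }; // claude-sonnet-4-20250514

const promptLength = 100; // sample prompt: 100 characters (assumption)
const maxTokens = 1024;   // schema default

const inputTokens = Math.ceil(promptLength / 4); // 100 / 4 = 25 tokens
const outputTokens = maxTokens / 2;              // assume half of max = 512

const cost =
  (inputTokens * PRICE_PER_MTOK.input + outputTokens * PRICE_PER_MTOK.output) /
  1_000_000; // (90 + 9216) / 1e6

console.log(cost.toFixed(6)); // ≈ 0.009306 USD
```

Note that the output side dominates the estimate here: at $18.0 per million output tokens versus $3.6 per million input tokens, the assumed 512 output tokens account for over 99% of the cost, so `max_tokens` has far more influence on the estimate than prompt length.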