ask_ai
Send prompts to Claude or GPT-4 models and pay per request in USDC from your Solana wallet. Costs for each AI inference task are deducted automatically.
Instructions
Send a prompt to an AI model via SolanaProx. Costs are automatically deducted from your Solana wallet balance in USDC. Supports Claude and GPT-4 models. Use this for any AI inference task.
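For illustration, here is a minimal sketch of a `tools/call` payload for this tool. The envelope follows the standard MCP `tools/call` shape, and the prompt text and token limit are invented example values; only `prompt` is required per the input schema.

```typescript
// Hypothetical MCP tools/call payload for ask_ai (example values are invented;
// only "prompt" is required — the other arguments fall back to their defaults)
const request = {
  method: "tools/call",
  params: {
    name: "ask_ai",
    arguments: {
      prompt: "Summarize the Solana account model in two sentences.",
      model: "claude-sonnet-4-20250514",
      max_tokens: 512,
    },
  },
};

console.log(request.params.arguments.max_tokens); // 512
```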
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | The prompt or question to send to the AI model | |
| model | No | AI model to use. Options: claude-sonnet-4-20250514 (default), gpt-4-turbo | claude-sonnet-4-20250514 |
| max_tokens | No | Maximum tokens in the response (max: 4096) | 1024 |
| system | No | Optional system prompt to set context for the AI | |
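The defaults in the table can be made concrete with a small sketch. This helper is not part of the source; the clamp to 4096 is an assumption based on the documented maximum, and the fallbacks mirror the table's Default column.

```typescript
// Hypothetical helper (not in src/index.ts) illustrating the schema defaults:
// model falls back to claude-sonnet-4-20250514, max_tokens to 1024,
// and max_tokens is capped at the documented maximum of 4096 (assumption).
function withDefaults(args: {
  prompt: string;
  model?: string;
  max_tokens?: number;
  system?: string;
}) {
  return {
    prompt: args.prompt,
    model: args.model ?? "claude-sonnet-4-20250514",
    max_tokens: Math.min(args.max_tokens ?? 1024, 4096),
    system: args.system,
  };
}

console.log(withDefaults({ prompt: "hi" }).model); // claude-sonnet-4-20250514
```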
Implementation Reference
- src/index.ts:254-272 (handler): The handler for the 'ask_ai' tool. It extracts the arguments (prompt, model, max_tokens, system), calls the callAI helper, and returns the AI response with cost information.
```ts
case "ask_ai": {
  const { prompt, model, max_tokens, system } = args as any;
  const result = await callAI(
    prompt,
    model || "claude-sonnet-4-20250514",
    max_tokens || 1024,
    system
  );
  return {
    content: [
      {
        type: "text",
        text: `${result.response}\n\n---\n⚡ Powered by SolanaProx | Model: ${result.model} | Cost: ~$${result.cost_usd.toFixed(6)} USDC`,
      },
    ],
  };
}
```

- src/index.ts:30-60 (schema): The tool definition for 'ask_ai', including its name, description, and inputSchema with properties for prompt (required), model, max_tokens, and system.
```ts
const tools: Tool[] = [
  {
    name: "ask_ai",
    description:
      "Send a prompt to an AI model via SolanaProx. Costs are automatically deducted from your Solana wallet balance in USDC. Supports Claude and GPT-4 models. Use this for any AI inference task.",
    inputSchema: {
      type: "object",
      properties: {
        prompt: {
          type: "string",
          description: "The prompt or question to send to the AI model",
        },
        model: {
          type: "string",
          description:
            "AI model to use. Options: claude-sonnet-4-20250514 (default), gpt-4-turbo",
          default: "claude-sonnet-4-20250514",
        },
        max_tokens: {
          type: "number",
          description: "Maximum tokens in response (default: 1024, max: 4096)",
          default: 1024,
        },
        system: {
          type: "string",
          description: "Optional system prompt to set context for the AI",
        },
      },
      required: ["prompt"],
    },
  },
```

- src/index.ts:118-172 (helper): The callAI helper makes the HTTP POST request to the SolanaProx /v1/messages endpoint, authenticates via the X-Wallet-Address header, handles errors (including insufficient balance, HTTP 402), and returns the response text with an estimated cost.
```ts
async function callAI(
  prompt: string,
  model: string = "claude-sonnet-4-20250514",
  maxTokens: number = 1024,
  system?: string
): Promise<{ response: string; cost_usd: number; model: string }> {
  const messages: any[] = [{ role: "user", content: prompt }];
  const body: any = {
    model,
    max_tokens: maxTokens,
    messages,
  };
  if (system) {
    body.system = system;
  }

  const res = await fetch(`${SOLANAPROX_URL}/v1/messages`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Wallet-Address": WALLET_ADDRESS,
    },
    body: JSON.stringify(body),
  });

  if (!res.ok) {
    const error = await res.text();
    if (res.status === 402) {
      throw new Error(
        `Insufficient balance. Deposit USDC at ${SOLANAPROX_URL} to continue. Wallet: ${WALLET_ADDRESS}`
      );
    }
    throw new Error(`SolanaProx API error (${res.status}): ${error}`);
  }

  const data = (await res.json()) as any;
  const responseText =
    data.content?.[0]?.text ||
    data.choices?.[0]?.message?.content ||
    JSON.stringify(data);

  // Estimate cost from usage if available
  const inputTokens = data.usage?.input_tokens || 0;
  const outputTokens = data.usage?.output_tokens || 0;
  const costUSD = estimateCostFromTokens(model, inputTokens, outputTokens);

  return {
    response: responseText,
    cost_usd: costUSD,
    model: data.model || model,
  };
}
```

- src/index.ts:198-212 (helper): The estimateCostFromTokens helper calculates the cost in USD from the model name and input/output token counts, using predefined per-model pricing rates.
```ts
function estimateCostFromTokens(
  model: string,
  inputTokens: number,
  outputTokens: number
): number {
  // Pricing per 1M tokens (with 20% markup)
  const pricing: Record<string, { input: number; output: number }> = {
    "claude-sonnet-4-20250514": { input: 3.6, output: 18.0 },
    "claude-3-5-sonnet-20241022": { input: 3.6, output: 18.0 },
    "gpt-4-turbo": { input: 12.0, output: 36.0 },
  };
  const p = pricing[model] || { input: 3.6, output: 18.0 };
  return (inputTokens * p.input + outputTokens * p.output) / 1_000_000;
}
```

- src/index.ts:249-258 (registration): The CallToolRequestSchema handler that routes tool calls; 'ask_ai' is registered as a case in its switch statement, dispatching to the handler above.
```ts
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;
  try {
    switch (name) {
      case "ask_ai": {
        const { prompt, model, max_tokens, system } = args as any;
        const result = await callAI(
          prompt,
          // …
```
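A quick worked example of the pricing above: at $3.60 per 1M input tokens and $18.00 per 1M output tokens for claude-sonnet-4-20250514, the token counts below are invented for illustration.

```typescript
// Worked example using the claude-sonnet-4-20250514 rates
// from estimateCostFromTokens (token counts are made up)
const inputTokens = 1000;
const outputTokens = 500;
const cost = (inputTokens * 3.6 + outputTokens * 18.0) / 1_000_000;
console.log(cost.toFixed(6)); // 0.012600
```

So a typical short exchange costs about one cent, matching the "Cost: ~$…" line appended to each response.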