# ai_llm_response
Send a prompt to ChatGPT, Claude, Gemini, or Perplexity and receive a structured AI response. Compare outputs across platforms for local SEO research.
## Instructions
Query a specific LLM (ChatGPT, Claude, Gemini, Perplexity) and get its structured response. See what each AI says about a topic. Costs 8 credits.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | Prompt to send to the LLM (e.g. "What is the best plumber in Portland?"). Max 500 characters. | |
| platform | Yes | Which LLM to query: one of `chat_gpt`, `claude`, `gemini`, `perplexity` | |
| model | No | Optional model name (e.g. gpt-4o, claude-sonnet-4-20250514, gemini-2.5-flash, sonar). Defaults to latest for each platform. | |
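For illustration, here is an argument object that satisfies this schema (the values are examples, not defaults):

```typescript
// Example arguments for ai_llm_response; values are illustrative.
const args = {
  prompt: "What is the best plumber in Portland?", // required, 1-500 characters
  platform: "perplexity",                          // required: chat_gpt | claude | gemini | perplexity
  model: "sonar",                                  // optional; omit to use the platform's latest model
};
```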
## Implementation Reference
- `src/tools/ai-visibility.ts:129-146` (registration): The tool `ai_llm_response` is registered via `server.tool()` inside `registerAIVisibilityTools()`. It defines the tool name, description, input schema, and handler.
```typescript
server.tool(
  "ai_llm_response",
  "Query a specific LLM (ChatGPT, Claude, Gemini, Perplexity) and get its structured response. See what each AI says about a topic. Costs 8 credits.",
  {
    prompt: z.string().min(1).max(500).describe('Prompt to send to the LLM (e.g. "What is the best plumber in Portland?"). Max 500 characters.'),
    platform: z.enum(["chat_gpt", "claude", "gemini", "perplexity"]).describe("Which LLM to query"),
    model: z.string().max(100).optional().describe("Optional model name (e.g. gpt-4o, claude-sonnet-4-20250514, gemini-2.5-flash, sonar). Defaults to latest for each platform."),
  },
  READ_ONLY,
  withErrorHandling(async ({ prompt, platform, model }) => {
    const result = await callApi(
      "/v1/ai/llm-response",
      { prompt, platform, model },
      getAuth()
    );
    return { content: [{ type: "text" as const, text: formatResult(result.data, result) }] };
  })
);
```
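The `READ_ONLY` argument is not defined in this section. A minimal sketch, assuming it is a shared MCP tool-annotations constant:

```typescript
// Assumption only: the actual definition is not shown in this section.
const READ_ONLY = { readOnlyHint: true } as const;
```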
- `src/tools/ai-visibility.ts:132-136` (schema): Input schema defined with Zod: `prompt` (string, 1-500 chars), `platform` (enum: `chat_gpt`, `claude`, `gemini`, `perplexity`), and optional `model` (string, max 100 chars).

```typescript
{
  prompt: z.string().min(1).max(500).describe('Prompt to send to the LLM (e.g. "What is the best plumber in Portland?"). Max 500 characters.'),
  platform: z.enum(["chat_gpt", "claude", "gemini", "perplexity"]).describe("Which LLM to query"),
  model: z.string().max(100).optional().describe("Optional model name (e.g. gpt-4o, claude-sonnet-4-20250514, gemini-2.5-flash, sonar). Defaults to latest for each platform."),
},
```
- `src/tools/ai-visibility.ts:138-145` (handler): The handler (wrapped with `withErrorHandling`) calls the API endpoint `/v1/ai/llm-response` with `prompt`, `platform`, and `model`, then formats and returns the result.

```typescript
withErrorHandling(async ({ prompt, platform, model }) => {
  const result = await callApi(
    "/v1/ai/llm-response",
    { prompt, platform, model },
    getAuth()
  );
  return { content: [{ type: "text" as const, text: formatResult(result.data, result) }] };
})
```
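Neither `callApi` nor `getAuth` is excerpted here. Judging from this call site and from `formatResult` below, the resolved result presumably carries a shape along these lines (an inference, not the actual definition in src/api-client.ts):

```typescript
// Inferred from usage only; the real types may differ.
interface ApiResult {
  data: unknown;             // payload passed to formatResult as `data`
  credits_used: number;      // shown in the metadata line
  credits_remaining: number; // remaining balance shown to the user
  cached: boolean;           // appends " | cached" to the metadata line when true
}
```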
- `src/api-client.ts:143-158` (helper): `withErrorHandling` wraps the handler to catch errors and return them as MCP error content.

```typescript
export function withErrorHandling<T>(
  fn: (args: T) => Promise<ToolResult>
): (args: T) => Promise<ToolResult> {
  return async (args) => {
    try {
      return await fn(args);
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err);
      console.error(`[mcp] Tool error: ${message}`);
      return {
        content: [{ type: "text" as const, text: `Error: ${message}` }],
        isError: true,
      };
    }
  };
}
```
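As a usage sketch, a wrapped handler that throws resolves to an error result instead of rejecting:

```typescript
// Hypothetical failing handler, to show the error shape produced above.
const failing = withErrorHandling(async () => {
  throw new Error("credit limit reached");
});

// await failing({}) yields:
// { content: [{ type: "text", text: "Error: credit limit reached" }], isError: true }
```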
- `src/api-client.ts:132-138` (helper): `formatResult` formats the API response data, along with credit-usage metadata, into a single text string.

```typescript
export function formatResult(
  data: unknown,
  meta: { credits_used: number; credits_remaining: number; cached: boolean }
): string {
  const metaLine = `[${meta.credits_used} credit${meta.credits_used !== 1 ? "s" : ""} used | ${meta.credits_remaining} remaining${meta.cached ? " | cached" : ""}]`;
  return `${metaLine}\n\n${JSON.stringify(data, null, 2)}`;
}
```
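For example (the remaining balance and payload are hypothetical; the real response shape depends on the API), the formatted text looks like:

```typescript
const text = formatResult(
  { answer: "..." }, // placeholder payload
  { credits_used: 8, credits_remaining: 92, cached: false }
);
// text:
// [8 credits used | 92 remaining]
//
// {
//   "answer": "..."
// }
```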