# consult_ai
Consult AI models via OpenRouter for tasks such as coding, analysis, or general questions. The tool auto-selects the best model for the task, or you can specify multiple models for sequential consultation, with support for conversation history across calls.
## Instructions
Consult with an AI model via OpenRouter. You can either specify a model or let the system auto-select based on your task. For sequential multi-model consultation, use the 'models' parameter to specify multiple models.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| clear_history | No | Optional: Set to true to clear the conversation history for the given conversation_id before processing this request. | |
| conversation_id | No | Optional: Conversation ID to maintain context across multiple consultations. Use the same ID for follow-up questions. | |
| model | No | Optional: Specific model to use (e.g., 'gemini-2.5-pro', 'gpt-5-codex', 'grok-code-fast-1'). If not specified, the best model will be automatically selected based on the task. | |
| models | No | Optional: Array of models to consult sequentially (e.g., ["gemini-2.5-pro", "gpt-5-codex"]). When specified, the prompt will be sent to each model in order and responses will be aggregated. This parameter takes precedence over 'model'. | |
| prompt | Yes | The question or task to send to the AI model | |
| task_description | No | Optional: Brief description of the task type to help auto-select the best model (e.g., 'coding task', 'complex analysis', 'quick question') | |
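As a concrete illustration, a multi-model follow-up request might pass arguments like these. This is a sketch: the model names, conversation ID, and prompt are example values, not prescriptions.

```typescript
// Illustrative consult_ai arguments for a multi-model follow-up call.
// Only `prompt` is required; every other field here is optional.
const args = {
  prompt: "Does this fix the race condition from my last question?",
  models: ["gemini-2.5-pro", "gpt-5-codex"], // consulted sequentially; takes precedence over `model`
  task_description: "coding task",           // hint for auto-selection
  conversation_id: "review-session-1",       // reuse the same ID to keep context
  clear_history: false,                      // set true to reset the conversation first
};

console.log(Object.keys(args).length);
```

Omitting both `model` and `models` lets the server auto-select based on `task_description` and the prompt itself.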
## Implementation Reference
- **src/mcp/handlers/ToolHandler.ts:90-145** (handler) — Implements the consult_ai tool handler: validates input, executes the consultation via the service, logs verbose details, and returns a formatted JSON response.

  ```typescript
  private async handleConsultAI(args: ConsultArgs): Promise<ToolResponse> {
    // Validate required arguments
    if (!args.prompt) {
      if (this.config.verboseLogging) {
        console.error("[MCP] Error: prompt is required but not provided");
      }
      throw new Error("prompt is required");
    }

    if (this.config.verboseLogging) {
      console.error("[MCP] Starting AI consultation");
      console.error(`[MCP] Prompt length: ${args.prompt.length} characters`);
      console.error(`[MCP] Requested model: ${args.model || "auto-select"}`);
      console.error(`[MCP] Requested models: ${args.models ? args.models.join(", ") : "none"}`);
      console.error(`[MCP] Task description: ${args.task_description || "none"}`);
      console.error(`[MCP] Conversation ID: ${args.conversation_id || "none"}`);
      console.error(`[MCP] Clear history: ${args.clear_history || false}`);
    }

    // Execute consultation
    const startTime = Date.now();
    const result = await this.consultationService.consult(args);
    const duration = Date.now() - startTime;

    // Log AI response if verbose logging is enabled
    if (this.config.verboseLogging) {
      console.error("=== AI Consultant Response ===");
      console.error(`Model: ${result.model}`);
      console.error(`Prompt: ${args.prompt.substring(0, 200)}${args.prompt.length > 200 ? "..." : ""}`);
      console.error(`Response length: ${result.response.length} characters`);
      console.error(`Response preview: ${result.response.substring(0, 200)}${result.response.length > 200 ? "..." : ""}`);
      console.error(`Tokens Used: ${JSON.stringify(result.usage, null, 2)}`);
      console.error(`Conversation ID: ${args.conversation_id || "N/A"}`);
      console.error(`Cached: ${result.model.includes("(cached)")}`);
      console.error(`Duration: ${duration}ms`);
      console.error("==============================");
    }

    // Format response
    const response: ConsultResponse = {
      model_used: result.model,
      response: result.response,
      tokens_used: result.usage,
      conversation_id: args.conversation_id || null,
      cached: result.model.includes("(cached)"),
    };

    return {
      content: [
        {
          type: "text" as const,
          text: JSON.stringify(response, null, 2),
        },
      ],
    };
  }
  ```
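The handler serializes a `ConsultResponse` into the returned text content. The sketch below shows the shape a client would receive; the field names follow the handler code, but the values are hypothetical.

```typescript
// Sketch of the JSON payload the handler returns. Field names match the
// ConsultResponse shape used in the handler; the values are illustrative.
const response = {
  model_used: "gemini-2.5-pro",
  response: "Here is a review of your function...",
  tokens_used: { prompt_tokens: 120, completion_tokens: 340 },
  conversation_id: "review-session-1",
  cached: false, // true when `result.model` includes "(cached)"
};

const text = JSON.stringify(response, null, 2);
console.log(text.includes('"model_used"'));
```

Note that `cached` is inferred from the model name string rather than carried as a separate flag on the service result.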
- **src/mcp/ToolDefinitions.ts:13-55** (schema) — Defines the input schema, description, and parameters for the consult_ai tool, returned by `getToolDefinitions`.

  ```typescript
  {
    name: "consult_ai",
    description:
      "Consult with an AI model via OpenRouter. You can either specify a model or let the system auto-select based on your task. For sequential multi-model consultation, use the 'models' parameter to specify multiple models.",
    inputSchema: {
      type: "object",
      properties: {
        prompt: {
          type: "string",
          description: "The question or task to send to the AI model",
        },
        model: {
          type: "string",
          description: `Optional: Specific model to use (e.g., ${modelNames.map((m) => `'${m}'`).join(", ")}). If not specified, the best model will be automatically selected based on the task.`,
          enum: modelNames,
        },
        models: {
          type: "array",
          description: `Optional: Array of models to consult sequentially (e.g., ["gemini-2.5-pro", "gpt-5-codex"]). When specified, the prompt will be sent to each model in order and responses will be aggregated. This parameter takes precedence over 'model'.`,
          items: {
            type: "string",
            enum: modelNames,
          },
        },
        task_description: {
          type: "string",
          description: "Optional: Brief description of the task type to help auto-select the best model (e.g., 'coding task', 'complex analysis', 'quick question')",
        },
        conversation_id: {
          type: "string",
          description: "Optional: Conversation ID to maintain context across multiple consultations. Use the same ID for follow-up questions.",
        },
        clear_history: {
          type: "boolean",
          description: "Optional: Set to true to clear the conversation history for the given conversation_id before processing this request.",
        },
      },
      required: ["prompt"],
    },
  },
  ```
- **src/mcp/MCPServer.ts:61-74** (registration) — Registers the consult_ai tool by providing its definition via the ListTools MCP handler using `getToolDefinitions`.

  ```typescript
  this.server.setRequestHandler(ListToolsRequestSchema, async () => {
    if (this.config.verboseLogging) {
      console.error("[MCP Server] Received ListTools request");
    }
    const modelNames = this.consultationService.listModels().map((m) => m.name);
    const tools = getToolDefinitions(modelNames);
    if (this.config.verboseLogging) {
      console.error(`[MCP Server] Returning ${tools.length} tool definitions`);
    }
    return { tools };
  });
  ```
- **src/mcp/MCPServer.ts:77-87** (registration) — Registers the tool execution handler by delegating CallTool requests to `ToolHandler.handleToolCall`.

  ```typescript
  this.server.setRequestHandler(
    CallToolRequestSchema,
    async (request: CallToolRequest) => {
      if (this.config.verboseLogging) {
        console.error("[MCP Server] Received CallTool request");
      }
      const result = await this.toolHandler.handleToolCall(request);
      return result as any; // MCP SDK type compatibility
    },
  );
  ```
- **src/types/index.ts:63-70** (schema) — Type definition for the input arguments of the consult_ai tool.

  ```typescript
  export interface ConsultArgs {
    prompt: string;
    model?: string;
    models?: string[]; // For sequential multi-model consultation
    task_description?: string;
    conversation_id?: string;
    clear_history?: boolean;
  }
  ```