# generate_response
Generate AI responses by combining DeepSeek's structured reasoning with Claude's response generation to produce well-considered outputs for user prompts.
## Instructions
Generate a response using DeepSeek's reasoning followed by a second response-generation stage through OpenRouter. Note that although the registered tool description names Claude for the second stage, the current implementation uses DeepSeek for both stages (see `getFinalResponse` below).
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | The user's input prompt | |
| showReasoning | No | Whether to include reasoning in the response | false |
| clearContext | No | Clear conversation history before this request | false |
| includeHistory | No | Include Cline conversation history for context | true |
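For illustration, a call to this tool might pass arguments shaped like the following (a minimal sketch; the surrounding MCP client call is omitted, and the prompt text is invented):

```typescript
// Hypothetical example arguments for generate_response.
// Field names match the input schema above; values are illustrative.
const args = {
  prompt: "Explain the tradeoffs between REST and gRPC.", // required
  showReasoning: true,   // include the DeepSeek reasoning in the output
  clearContext: false,   // keep prior conversation context
  includeHistory: true,  // pull in Cline conversation history (the default)
};
```

Only `prompt` is required; the three booleans fall back to their schema defaults when omitted.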
## Implementation Reference
- src/index.ts:355-401 (handler): Entry point handler for the `generate_response` tool: validates arguments, creates an asynchronous task, starts background processing via `processTask`, and immediately returns a task ID for polling.

  ```typescript
  if (request.params.name === "generate_response") {
    if (!isValidGenerateResponseArgs(request.params.arguments)) {
      throw new McpError(
        ErrorCode.InvalidParams,
        "Invalid generate_response arguments"
      );
    }

    const taskId = uuidv4();
    const { prompt, showReasoning, clearContext, includeHistory } =
      request.params.arguments;

    // Initialize task status with the tracking properties used for polling
    this.activeTasks.set(taskId, {
      status: "pending",
      prompt,
      showReasoning,
      timestamp: Date.now(),
      lastChecked: Date.now(),
      nextCheckDelay: INITIAL_STATUS_CHECK_DELAY_MS,
      checkAttempts: 0
    });

    // Start processing in background
    this.processTask(taskId, clearContext, includeHistory).catch((error) => {
      log("Error processing task:", error);
      this.activeTasks.set(taskId, {
        ...this.activeTasks.get(taskId)!,
        status: "error",
        error: error.message,
      });
    });

    // Return task ID immediately
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify({
            taskId,
            // Suggested wait time in seconds
            suggestedWaitTime: Math.round(INITIAL_STATUS_CHECK_DELAY_MS / 1000)
          }),
        },
      ],
    };
  }
  ```
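Because the handler returns the task ID immediately, a caller is expected to poll until the task reaches a terminal state. A minimal sketch of that pattern, assuming a hypothetical `checkStatus` function standing in for whatever status-check tool the server exposes (not shown in this excerpt), and an assumed backoff cap of 30 seconds:

```typescript
// Hypothetical client-side polling sketch. `checkStatus` is a stand-in
// for the server's actual status-check mechanism.
type TaskStatus =
  | { status: "pending" | "reasoning" | "responding" }
  | { status: "complete"; response: string; reasoning: string }
  | { status: "error"; error: string };

async function pollUntilDone(
  checkStatus: (taskId: string) => Promise<TaskStatus>,
  taskId: string,
  initialDelayMs: number
): Promise<string> {
  let delay = initialDelayMs;
  for (;;) {
    const task = await checkStatus(taskId);
    if (task.status === "complete") return task.response;
    if (task.status === "error") throw new Error(task.error);
    // Still in progress: wait, then back off (assumed doubling, capped)
    await new Promise((resolve) => setTimeout(resolve, delay));
    delay = Math.min(delay * 2, 30_000);
  }
}
```

The initial delay would come from the `suggestedWaitTime` the handler returns.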
- src/index.ts:511-590 (handler): Core asynchronous handler that executes the generate_response logic: manages context, fetches conversation history, generates reasoning with DeepSeek, produces the final response, and updates task status throughout.

  ```typescript
  private async processTask(
    taskId: string,
    clearContext?: boolean,
    includeHistory?: boolean
  ): Promise<void> {
    const task = this.activeTasks.get(taskId);
    if (!task) {
      throw new Error(`No task found with ID: ${taskId}`);
    }

    try {
      if (clearContext) {
        this.context.entries = [];
      }

      // Update status to reasoning
      this.activeTasks.set(taskId, {
        ...task,
        status: "reasoning",
      });

      // Get Cline conversation history if requested
      let history: ClaudeMessage[] | null = null;
      if (includeHistory !== false) {
        history = await findActiveConversation();
      }

      // Get DeepSeek reasoning with limited history
      const reasoningHistory = history
        ? formatHistoryForModel(history, true)
        : "";
      const reasoningPrompt = reasoningHistory
        ? `${reasoningHistory}\n\nNew question: ${task.prompt}`
        : task.prompt;
      const reasoning = await this.getDeepseekReasoning(reasoningPrompt);

      // Update status with reasoning
      this.activeTasks.set(taskId, {
        ...task,
        status: "responding",
        reasoning,
      });

      // Get final response with full history
      const responseHistory = history
        ? formatHistoryForModel(history, false)
        : "";
      const fullPrompt = responseHistory
        ? `${responseHistory}\n\nCurrent task: ${task.prompt}`
        : task.prompt;
      const response = await this.getFinalResponse(fullPrompt, reasoning);

      // Add to context after successful response
      this.addToContext({
        timestamp: Date.now(),
        prompt: task.prompt,
        reasoning,
        response,
        model: DEEPSEEK_MODEL, // Use DEEPSEEK_MODEL instead of CLAUDE_MODEL
      });

      // Update status to complete
      this.activeTasks.set(taskId, {
        ...task,
        status: "complete",
        reasoning,
        response,
        timestamp: Date.now(),
      });
    } catch (error) {
      // Update status to error
      this.activeTasks.set(taskId, {
        ...task,
        status: "error",
        error: error instanceof Error ? error.message : "Unknown error",
        timestamp: Date.now(),
      });
      throw error;
    }
  }
  ```
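The status values written by `processTask` form a simple lifecycle: `pending` → `reasoning` → `responding` → `complete`, with `error` reachable from any non-terminal state. A small illustrative guard for that ordering (the server itself just overwrites the status field; this helper is not in the source):

```typescript
// Task statuses used by processTask, in the order they occur.
const LIFECYCLE = ["pending", "reasoning", "responding", "complete"] as const;
type Status = (typeof LIFECYCLE)[number] | "error";

// Returns true if moving from `from` to `to` follows the lifecycle.
// "error" is a valid terminal transition from any non-terminal state.
function isValidTransition(from: Status, to: Status): boolean {
  if (from === "complete" || from === "error") return false; // terminal states
  if (to === "error") return true;
  const i = LIFECYCLE.indexOf(from as (typeof LIFECYCLE)[number]);
  const j = LIFECYCLE.indexOf(to as (typeof LIFECYCLE)[number]);
  return j === i + 1; // only advance one step at a time
}
```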
- src/index.ts:592-630 (handler): Generates step-by-step reasoning with the DeepSeek model via the OpenRouter API, incorporating conversation context.

  ```typescript
  private async getDeepseekReasoning(prompt: string): Promise<string> {
    const contextPrompt =
      this.context.entries.length > 0
        ? `Previous conversation:\n${this.formatContextForPrompt()}\n\nNew question: ${prompt}`
        : prompt;

    try {
      // Add an explicit instruction so the model produces its reasoning
      const requestPrompt = `Analyze the following question in detail before answering. Think step by step and lay out your full reasoning.\n\n${contextPrompt}`;

      // Get reasoning from DeepSeek (without the include_reasoning parameter)
      const response = await this.openrouterClient.chat.completions.create({
        model: DEEPSEEK_MODEL,
        messages: [
          {
            role: "user",
            content: requestPrompt,
          },
        ],
        temperature: 0.7,
        top_p: 1,
      });

      // Use the response content directly as the reasoning
      if (
        !response.choices ||
        !response.choices[0] ||
        !response.choices[0].message ||
        !response.choices[0].message.content
      ) {
        throw new Error("Empty response from DeepSeek");
      }

      return response.choices[0].message.content;
    } catch (error) {
      log("Error in getDeepseekReasoning:", error);
      throw error;
    }
  }
  ```
- src/index.ts:632-657 (handler): Generates the final polished response from the initial prompt and the prior reasoning. Despite the registered description, this stage also uses the DeepSeek model rather than Claude.

  ```typescript
  private async getFinalResponse(
    prompt: string,
    reasoning: string
  ): Promise<string> {
    try {
      // Instead of sending to Claude, use DeepSeek for the final response too
      const response = await this.openrouterClient.chat.completions.create({
        model: DEEPSEEK_MODEL, // Use DeepSeek here
        messages: [
          {
            role: "user",
            content: `${prompt}\n\nHere is my prior analysis of this question: ${reasoning}\nNow generate a complete, detailed response based on this analysis.`,
          },
        ],
        temperature: 0.7,
        top_p: 1,
      } as any);

      return (
        response.choices[0].message.content || "Error: No response content"
      );
    } catch (error) {
      log("Error in getFinalResponse:", error);
      throw error;
    }
  }
  ```
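The two stages are chained purely through prompt text: the reasoning returned by the first call is embedded in the prompt of the second. That composition can be sketched as a pure function (a hypothetical helper mirroring the template above, not present in the source):

```typescript
// Hypothetical helper mirroring how getFinalResponse composes its prompt.
function composeFinalPrompt(prompt: string, reasoning: string): string {
  return (
    `${prompt}\n\n` +
    `Here is my prior analysis of this question: ${reasoning}\n` +
    `Now generate a complete, detailed response based on this analysis.`
  );
}
```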
- src/index.ts:54-59 (schema): TypeScript interface defining the input arguments for the generate_response tool.

  ```typescript
  interface GenerateResponseArgs {
    prompt: string;
    showReasoning?: boolean;
    clearContext?: boolean;
    includeHistory?: boolean;
  }
  ```
- src/index.ts:307-336 (registration): Tool registration in the ListToolsRequestSchema handler, defining the name, description, and JSON input schema for generate_response.

  ```typescript
  {
    name: "generate_response",
    description:
      "Generate a response using DeepSeek's reasoning and Claude's response generation through OpenRouter.",
    inputSchema: {
      type: "object",
      properties: {
        prompt: {
          type: "string",
          description: "The user's input prompt",
        },
        showReasoning: {
          type: "boolean",
          description: "Whether to include reasoning in response",
          default: false,
        },
        clearContext: {
          type: "boolean",
          description: "Clear conversation history before this request",
          default: false,
        },
        includeHistory: {
          type: "boolean",
          description: "Include Cline conversation history for context",
          default: true,
        },
      },
      required: ["prompt"],
    },
  },
  ```
- src/index.ts:98-106 (schema): Runtime type guard validating GenerateResponseArgs before the tool handler uses them.

  ```typescript
  const isValidGenerateResponseArgs = (
    args: any
  ): args is GenerateResponseArgs =>
    typeof args === "object" &&
    args !== null &&
    typeof args.prompt === "string" &&
    (args.showReasoning === undefined ||
      typeof args.showReasoning === "boolean") &&
    (args.clearContext === undefined ||
      typeof args.clearContext === "boolean") &&
    (args.includeHistory === undefined ||
      typeof args.includeHistory === "boolean");
  ```
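A quick check of how the guard behaves on a few sample inputs (a standalone copy of the same predicate, shown here for demonstration only):

```typescript
// Standalone copy of the validator, for demonstration.
const isValid = (args: any): boolean =>
  typeof args === "object" &&
  args !== null &&
  typeof args.prompt === "string" &&
  (args.showReasoning === undefined ||
    typeof args.showReasoning === "boolean") &&
  (args.clearContext === undefined ||
    typeof args.clearContext === "boolean") &&
  (args.includeHistory === undefined ||
    typeof args.includeHistory === "boolean");

console.log(isValid({ prompt: "hi" }));                       // true: only the required field
console.log(isValid({ prompt: "hi", showReasoning: "yes" })); // false: wrong type
console.log(isValid({}));                                     // false: prompt missing
```

Unknown extra keys are accepted; the guard only checks the four declared fields.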