
generate_response

Generate AI responses by combining DeepSeek's reasoning with Claude's generation through OpenRouter, maintaining conversation context for enhanced interactions.

Instructions

Generate a response using DeepSeek's reasoning and Claude's response generation through OpenRouter.

Input Schema

Name           | Required | Description                                    | Default
---------------|----------|------------------------------------------------|--------
prompt         | Yes      | The user's input prompt                        | (none)
showReasoning  | No       | Whether to include reasoning in response       | false
clearContext   | No       | Clear conversation history before this request | false
includeHistory | No       | Include Cline conversation history for context | true
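
As an illustration of the schema above, a call's arguments object might look like the following sketch. The prompt text is invented for the example; the flag values mirror the defaults listed in the table:

```typescript
// Hypothetical arguments for a generate_response call.
// The prompt text is illustrative only.
const exampleArgs = {
  prompt: 'Compare breadth-first and depth-first search for this codebase.',
  showReasoning: true,   // also return DeepSeek's reasoning with the answer
  clearContext: false,   // keep accumulated conversation context
  includeHistory: true,  // pull in Cline conversation history (the default)
};

console.log(JSON.stringify(exampleArgs));
```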

Implementation Reference

  • Core handler that implements the generate_response logic: it clears the context if requested, fetches the Cline conversation history, generates reasoning with DeepSeek, generates the final response with Claude, and updates the task status throughout.
    private async processTask(taskId: string, clearContext?: boolean, includeHistory?: boolean): Promise<void> {
      const task = this.activeTasks.get(taskId);
      if (!task) {
        throw new Error(`No task found with ID: ${taskId}`);
      }
      try {
        if (clearContext) {
          this.context.entries = [];
        }

        // Update status to reasoning
        this.activeTasks.set(taskId, { ...task, status: 'reasoning' });

        // Get Cline conversation history if requested
        let history: ClaudeMessage[] | null = null;
        if (includeHistory !== false) {
          history = await findActiveConversation();
        }

        // Get DeepSeek reasoning with limited history
        const reasoningHistory = history ? formatHistoryForModel(history, true) : '';
        const reasoningPrompt = reasoningHistory
          ? `${reasoningHistory}\n\nNew question: ${task.prompt}`
          : task.prompt;
        const reasoning = await this.getDeepseekReasoning(reasoningPrompt);

        // Update status with reasoning
        this.activeTasks.set(taskId, { ...task, status: 'responding', reasoning });

        // Get final response with full history
        const responseHistory = history ? formatHistoryForModel(history, false) : '';
        const fullPrompt = responseHistory
          ? `${responseHistory}\n\nCurrent task: ${task.prompt}`
          : task.prompt;
        const response = await this.getFinalResponse(fullPrompt, reasoning);

        // Add to context after successful response
        this.addToContext({
          timestamp: Date.now(),
          prompt: task.prompt,
          reasoning,
          response,
          model: CLAUDE_MODEL
        });

        // Update status to complete
        this.activeTasks.set(taskId, {
          ...task,
          status: 'complete',
          reasoning,
          response,
          timestamp: Date.now()
        });
      } catch (error) {
        // Update status to error
        this.activeTasks.set(taskId, {
          ...task,
          status: 'error',
          error: error instanceof Error ? error.message : 'Unknown error',
          timestamp: Date.now()
        });
        throw error;
      }
    }
  • Entry point for the generate_response tool inside the CallToolRequestSchema handler: it validates the arguments, registers a new task, starts processing in the background, and returns the task ID immediately.
    if (request.params.name === 'generate_response') {
      if (!isValidGenerateResponseArgs(request.params.arguments)) {
        throw new McpError(
          ErrorCode.InvalidParams,
          'Invalid generate_response arguments'
        );
      }

      const taskId = uuidv4();
      const { prompt, showReasoning, clearContext, includeHistory } = request.params.arguments;

      // Initialize task status
      this.activeTasks.set(taskId, {
        status: 'pending',
        prompt,
        showReasoning,
        timestamp: Date.now()
      });

      // Start processing in background
      this.processTask(taskId, clearContext, includeHistory).catch(error => {
        log('Error processing task:', error);
        this.activeTasks.set(taskId, {
          ...this.activeTasks.get(taskId)!,
          status: 'error',
          error: error.message
        });
      });

      // Return task ID immediately
      return {
        content: [
          {
            type: 'text',
            text: JSON.stringify({ taskId })
          }
        ]
      };
    }
  • src/index.ts:242-270 (registration): Registration of the generate_response tool in the ListToolsRequestSchema response, defining its name, description, and input schema.
    {
      name: 'generate_response',
      description: 'Generate a response using DeepSeek\'s reasoning and Claude\'s response generation through OpenRouter.',
      inputSchema: {
        type: 'object',
        properties: {
          prompt: {
            type: 'string',
            description: 'The user\'s input prompt'
          },
          showReasoning: {
            type: 'boolean',
            description: 'Whether to include reasoning in response',
            default: false
          },
          clearContext: {
            type: 'boolean',
            description: 'Clear conversation history before this request',
            default: false
          },
          includeHistory: {
            type: 'boolean',
            description: 'Include Cline conversation history for context',
            default: true
          }
        },
        required: ['prompt']
      }
    },
  • TypeScript interface defining the input arguments for the generate_response tool.
    interface GenerateResponseArgs {
      prompt: string;
      showReasoning?: boolean;
      clearContext?: boolean;
      includeHistory?: boolean;
    }
  • Validator function for GenerateResponseArgs used in the tool handler.
    const isValidGenerateResponseArgs = (args: any): args is GenerateResponseArgs =>
      typeof args === 'object' &&
      args !== null &&
      typeof args.prompt === 'string' &&
      (args.showReasoning === undefined || typeof args.showReasoning === 'boolean') &&
      (args.clearContext === undefined || typeof args.clearContext === 'boolean') &&
      (args.includeHistory === undefined || typeof args.includeHistory === 'boolean');
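
The processTask handler above moves each task through the statuses pending, reasoning, responding, and finally complete or error. As a minimal standalone sketch of that lifecycle, the following models the allowed transitions; the TaskState type and canTransition helper are illustrative names, not part of the server's API:

```typescript
// Task lifecycle implied by processTask; type and helper names are assumptions.
type TaskState = 'pending' | 'reasoning' | 'responding' | 'complete' | 'error';

// Allowed forward transitions: any in-flight state may also fail to 'error'.
const transitions: Record<TaskState, TaskState[]> = {
  pending: ['reasoning', 'error'],
  reasoning: ['responding', 'error'],
  responding: ['complete', 'error'],
  complete: [],
  error: [],
};

function canTransition(from: TaskState, to: TaskState): boolean {
  return transitions[from].includes(to);
}

console.log(canTransition('pending', 'reasoning'));  // true
console.log(canTransition('complete', 'pending'));   // false
```

Since the entry-point handler returns the task ID immediately, a client would observe this progression by checking the task's status until it reaches a terminal state.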

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/newideas99/Deepseek-Thinking-Claude-3.5-Sonnet-CLINE-MCP'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.