
Deepseek-Thinking-Claude-3.5-Sonnet-CLINE-MCP

by niko91i

generate_response

Generate AI responses by combining DeepSeek's structured reasoning with Claude's response generation to produce well-considered outputs for user prompts.

Instructions

Generate a response using DeepSeek's reasoning and Claude's response generation through OpenRouter.

Input Schema

Name            Required  Description                                     Default
prompt          Yes       The user's input prompt                         —
showReasoning   No        Whether to include reasoning in response        false
clearContext    No        Clear conversation history before this request  false
includeHistory  No        Include Cline conversation history for context  true

Implementation Reference

  • Entry point handler for 'generate_response' tool: validates arguments, creates asynchronous task, initiates background processing via processTask, and immediately returns task ID for polling.
    if (request.params.name === "generate_response") {
      if (!isValidGenerateResponseArgs(request.params.arguments)) {
        throw new McpError(
          ErrorCode.InvalidParams,
          "Invalid generate_response arguments"
        );
      }
    
      const taskId = uuidv4();
      const { prompt, showReasoning, clearContext, includeHistory } =
        request.params.arguments;
    
      // Initialize task status with the tracking properties used for polling
      this.activeTasks.set(taskId, {
        status: "pending",
        prompt,
        showReasoning,
        timestamp: Date.now(),
        lastChecked: Date.now(),
        nextCheckDelay: INITIAL_STATUS_CHECK_DELAY_MS,
        checkAttempts: 0
      });
    
      // Start processing in background
      this.processTask(taskId, clearContext, includeHistory).catch(
        (error) => {
          log("Error processing task:", error);
          this.activeTasks.set(taskId, {
            ...this.activeTasks.get(taskId)!,
            status: "error",
            error: error.message,
          });
        }
      );
    
      // Return task ID immediately
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify({ 
              taskId,
              suggestedWaitTime: Math.round(INITIAL_STATUS_CHECK_DELAY_MS / 1000)  // Suggested wait time in seconds
            }),
          },
        ],
      };
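Since the handler returns only a task ID, the caller is expected to poll for the result (the review below mentions a sibling check_response_status tool for this). A minimal client-side polling loop with exponential backoff might look like the following sketch; `checkTaskStatus`, `pollTask`, and the delay values are hypothetical illustrations, not part of the server's code:

```typescript
// Sketch: how a client might poll for the result of a generate_response task.
// `checkTaskStatus` is a hypothetical stand-in for the server's status tool.
type TaskStatus = "pending" | "reasoning" | "responding" | "complete" | "error";

interface StatusResult {
  status: TaskStatus;
  response?: string;
  error?: string;
}

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function pollTask(
  checkTaskStatus: (taskId: string) => Promise<StatusResult>,
  taskId: string,
  initialDelayMs = 1000,
  maxAttempts = 10
): Promise<string> {
  let delay = initialDelayMs;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    await sleep(delay);
    const result = await checkTaskStatus(taskId);
    if (result.status === "complete") return result.response ?? "";
    if (result.status === "error") throw new Error(result.error ?? "Unknown error");
    delay *= 2; // back off between checks while the task is still in flight
  }
  throw new Error(`Task ${taskId} did not finish after ${maxAttempts} checks`);
}
```

The returned `suggestedWaitTime` maps naturally onto `initialDelayMs`, and doubling the delay mirrors the server's own `nextCheckDelay`/`checkAttempts` bookkeeping.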
  • Core asynchronous handler that executes the generate_response logic: manages context, fetches conversation history, generates reasoning with DeepSeek, produces final response, updates task status throughout the process.
    private async processTask(
      taskId: string,
      clearContext?: boolean,
      includeHistory?: boolean
    ): Promise<void> {
      const task = this.activeTasks.get(taskId);
      if (!task) {
        throw new Error(`No task found with ID: ${taskId}`);
      }
    
      try {
        if (clearContext) {
          this.context.entries = [];
        }
    
        // Update status to reasoning
        this.activeTasks.set(taskId, {
          ...task,
          status: "reasoning",
        });
    
        // Get Cline conversation history if requested
        let history: ClaudeMessage[] | null = null;
        if (includeHistory !== false) {
          history = await findActiveConversation();
        }
    
        // Get DeepSeek reasoning with limited history
        const reasoningHistory = history
          ? formatHistoryForModel(history, true)
          : "";
        const reasoningPrompt = reasoningHistory
          ? `${reasoningHistory}\n\nNew question: ${task.prompt}`
          : task.prompt;
        const reasoning = await this.getDeepseekReasoning(reasoningPrompt);
    
        // Update status with reasoning
        this.activeTasks.set(taskId, {
          ...task,
          status: "responding",
          reasoning,
        });
    
        // Get final response with full history
        const responseHistory = history
          ? formatHistoryForModel(history, false)
          : "";
        const fullPrompt = responseHistory
          ? `${responseHistory}\n\nCurrent task: ${task.prompt}`
          : task.prompt;
        const response = await this.getFinalResponse(fullPrompt, reasoning);
    
        // Add to context after successful response
        this.addToContext({
          timestamp: Date.now(),
          prompt: task.prompt,
          reasoning,
          response,
          model: DEEPSEEK_MODEL, // Use DEEPSEEK_MODEL instead of CLAUDE_MODEL
        });
    
        // Update status to complete
        this.activeTasks.set(taskId, {
          ...task,
          status: "complete",
          reasoning,
          response,
          timestamp: Date.now(),
        });
      } catch (error) {
        // Update status to error
        this.activeTasks.set(taskId, {
          ...task,
          status: "error",
          error: error instanceof Error ? error.message : "Unknown error",
          timestamp: Date.now(),
        });
        throw error;
      }
    }
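processTask walks each task through a fixed lifecycle: pending → reasoning → responding → complete, with error reachable from any in-flight state. That lifecycle can be summarized as a transition map (the state names are taken from the code above; `canTransition` itself is illustrative, since the server simply overwrites `status`):

```typescript
// The task lifecycle implemented by processTask, expressed as a transition map.
type TaskStatus = "pending" | "reasoning" | "responding" | "complete" | "error";

const transitions: Record<TaskStatus, TaskStatus[]> = {
  pending: ["reasoning", "error"],
  reasoning: ["responding", "error"],
  responding: ["complete", "error"],
  complete: [], // terminal
  error: [],    // terminal
};

function canTransition(from: TaskStatus, to: TaskStatus): boolean {
  return transitions[from].includes(to);
}
```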
  • Handler for generating step-by-step reasoning using DeepSeek model via OpenRouter API, incorporating conversation context.
    private async getDeepseekReasoning(prompt: string): Promise<string> {
      const contextPrompt =
        this.context.entries.length > 0
          ? `Previous conversation:\n${this.formatContextForPrompt()}\n\nNew question: ${prompt}`
          : prompt;
    
      try {
        // Add an explicit instruction so the model generates step-by-step reasoning
        const requestPrompt = `Analyze the following question in detail before answering. Think step by step and lay out your complete reasoning.\n\n${contextPrompt}`;
    
        // Get reasoning from DeepSeek (without the include_reasoning parameter)
        const response = await this.openrouterClient.chat.completions.create({
          model: DEEPSEEK_MODEL,
          messages: [
            {
              role: "user",
              content: requestPrompt,
            },
          ],
          temperature: 0.7,
          top_p: 1,
        });
    
        // Use the response content directly as the reasoning
        if (
          !response.choices ||
          !response.choices[0] ||
          !response.choices[0].message ||
          !response.choices[0].message.content
        ) {
          throw new Error("Empty response from DeepSeek");
        }
    
        return response.choices[0].message.content;
      } catch (error) {
        log("Error in getDeepseekReasoning:", error);
        throw error;
      }
    }
  • Handler for generating the final polished response using DeepSeek model, based on the initial prompt and prior reasoning.
    private async getFinalResponse(
      prompt: string,
      reasoning: string
    ): Promise<string> {
      try {
        // Instead of sending to Claude, use DeepSeek for the final response as well
        const response = await this.openrouterClient.chat.completions.create({
          model: DEEPSEEK_MODEL, // Use DeepSeek here
          messages: [
            {
              role: "user",
              content: `${prompt}\n\nHere is my prior analysis of this question: ${reasoning}\nNow generate a complete, detailed response based on this analysis.`,
            },
          ],
          temperature: 0.7,
          top_p: 1,
        });
    
        return (
          response.choices[0].message.content || "Error: No response content"
        );
      } catch (error) {
        log("Error in getFinalResponse:", error);
        throw error;
      }
    }
  • TypeScript interface defining the input schema for generate_response tool arguments.
    interface GenerateResponseArgs {
      prompt: string;
      showReasoning?: boolean;
      clearContext?: boolean;
      includeHistory?: boolean;
    }
  • src/index.ts:307-336 (registration)
    Tool registration in ListToolsRequestSchema handler, defining name, description, and JSON input schema for generate_response.
    {
      name: "generate_response",
      description:
        "Generate a response using DeepSeek's reasoning and Claude's response generation through OpenRouter.",
      inputSchema: {
        type: "object",
        properties: {
          prompt: {
            type: "string",
            description: "The user's input prompt",
          },
          showReasoning: {
            type: "boolean",
            description: "Whether to include reasoning in response",
            default: false,
          },
          clearContext: {
            type: "boolean",
            description: "Clear conversation history before this request",
            default: false,
          },
          includeHistory: {
            type: "boolean",
            description: "Include Cline conversation history for context",
            default: true,
          },
        },
        required: ["prompt"],
      },
    },
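A concrete call against this schema might carry arguments like the following (the values are illustrative; only prompt is required, and includeHistory defaults to true):

```typescript
// Illustrative arguments for a generate_response call; only `prompt` is required.
const args = {
  prompt: "Summarize the trade-offs between REST and GraphQL",
  showReasoning: true, // include the DeepSeek reasoning in the result
  clearContext: false, // keep prior conversation context
  // includeHistory omitted: the schema defaults it to true
};

console.log(JSON.stringify(args));
```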
  • Runtime validator function for GenerateResponseArgs input schema used in the tool handler.
    const isValidGenerateResponseArgs = (args: any): args is GenerateResponseArgs =>
      typeof args === "object" &&
      args !== null &&
      typeof args.prompt === "string" &&
      (args.showReasoning === undefined ||
        typeof args.showReasoning === "boolean") &&
      (args.clearContext === undefined || typeof args.clearContext === "boolean") &&
      (args.includeHistory === undefined ||
        typeof args.includeHistory === "boolean");
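The validator accepts any object with a string prompt and well-typed optional booleans; notably, it does not reject unknown extra keys. A few spot checks illustrate this (the validator is restated verbatim so the snippet is self-contained):

```typescript
interface GenerateResponseArgs {
  prompt: string;
  showReasoning?: boolean;
  clearContext?: boolean;
  includeHistory?: boolean;
}

const isValidGenerateResponseArgs = (args: any): args is GenerateResponseArgs =>
  typeof args === "object" &&
  args !== null &&
  typeof args.prompt === "string" &&
  (args.showReasoning === undefined ||
    typeof args.showReasoning === "boolean") &&
  (args.clearContext === undefined || typeof args.clearContext === "boolean") &&
  (args.includeHistory === undefined ||
    typeof args.includeHistory === "boolean");

console.log(isValidGenerateResponseArgs({ prompt: "hello" }));                // true
console.log(isValidGenerateResponseArgs({ prompt: 42 }));                     // false
console.log(isValidGenerateResponseArgs({ prompt: "hi", showReasoning: 1 })); // false
console.log(isValidGenerateResponseArgs({ prompt: "hi", extra: "ignored" })); // true
```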
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions using DeepSeek's reasoning and Claude's response generation, hinting at AI model integration, but fails to disclose critical traits like rate limits, authentication needs, response format, error handling, or cost implications. The description adds minimal behavioral context beyond the basic action, leaving significant gaps for a tool that likely involves external API calls.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's action and the technologies involved. It is front-loaded with the core purpose and avoids unnecessary details. However, it could be slightly more structured by explicitly mentioning the input or output, but overall it earns its place without waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of AI model integration and the lack of annotations and output schema, the description is incomplete. It does not explain the return values, error cases, or how the response is formatted (e.g., text, JSON). For a tool with 4 parameters and no structured output information, the description should provide more context to guide effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with all parameters well-documented in the input schema. The description does not add any meaning beyond what the schema provides, such as explaining how 'showReasoning' interacts with DeepSeek's reasoning or clarifying the context management. With high schema coverage, the baseline score of 3 is appropriate, as the description offers no extra parameter insights.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool 'Generate[s] a response using DeepSeek's reasoning and Claude's response generation through OpenRouter,' which provides a clear verb ('Generate') and resource ('response') but lacks specificity about what kind of response or for what purpose. It distinguishes from the sibling tool 'check_response_status' by focusing on generation rather than status checking, but the purpose remains somewhat vague without context on the response type or domain.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no guidance on when to use this tool versus alternatives, such as other AI models or direct API calls. It mentions using DeepSeek and Claude via OpenRouter, but does not specify scenarios, prerequisites, or exclusions. Without explicit usage context, the agent must infer based on the tool name and parameters alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

