# get-deepseek-thinker

Generate focused reasoning content by processing a user prompt through DeepSeek's API or a local Ollama server, exposing the model's structured thought process to MCP-enabled AI clients.
## Instructions
think with deepseek
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| originPrompt | Yes | user's original prompt | - |
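For illustration, a minimal MCP client call might look like the sketch below. The transport command, client name, and prompt text are assumptions, not taken from this repository; the SDK calls (`Client`, `StdioClientTransport`, `callTool`) are the standard `@modelcontextprotocol/sdk` client API.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the server over stdio; the command and args are hypothetical.
const transport = new StdioClientTransport({
  command: "node",
  args: ["dist/index.js"],
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// `originPrompt` is the tool's only required argument.
const result = await client.callTool({
  name: "get-deepseek-thinker",
  arguments: { originPrompt: "Why is the sky blue?" },
});
console.log(result.content);
```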
## Implementation Reference
- **src/index.ts:123-151 (handler)** — Handler logic for the `get-deepseek-thinker` tool within the `CallToolRequest` handler. Validates the input, selects a completion backend based on the `USE_OLLAMA` environment variable, and returns the tool result.

```typescript
if (name === "get-deepseek-thinker") {
  const { originPrompt } = GetDeepseekThinkerSchema.parse(args);
  if (!originPrompt) {
    return {
      content: [
        {
          type: "text",
          text: "Please enter a prompt",
        },
      ],
    };
  }

  let result = "";
  if (process?.env?.USE_OLLAMA) {
    result = await getOllamaCompletion(originPrompt);
  } else {
    result = await getCompletion(originPrompt);
  }

  return {
    content: [
      {
        type: "text",
        text: result,
      },
    ],
  };
}
```
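The only switch between the two backends is the `USE_OLLAMA` environment variable; any non-empty value routes requests to the local Ollama server. A hedged launcher sketch, where the entry-point path is an assumption:

```typescript
import { spawn } from "node:child_process";

// Start the server with the Ollama backend selected; any truthy value works.
// Omit USE_OLLAMA entirely to fall back to the DeepSeek HTTP API.
spawn("node", ["dist/index.js"], {
  env: { ...process.env, USE_OLLAMA: "1" },
  stdio: "inherit",
});
```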
- **src/index.ts:78-80 (schema)** — Zod schema validating the input parameters of the `get-deepseek-thinker` tool, requiring an `originPrompt` string.

```typescript
const GetDeepseekThinkerSchema = z.object({
  originPrompt: z.string(),
});
```
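As a quick illustration of the validation behavior (a standalone sketch, not repository code):

```typescript
import { z } from "zod";

const GetDeepseekThinkerSchema = z.object({ originPrompt: z.string() });

GetDeepseekThinkerSchema.parse({ originPrompt: "hello" }); // passes
// GetDeepseekThinkerSchema.parse({}); // would throw ZodError: originPrompt required
```

Note that an empty string passes the schema; it is the handler's separate `!originPrompt` check that rejects it with "Please enter a prompt".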
- **src/index.ts:100-113 (registration)** — Registration of the `get-deepseek-thinker` tool in the ListTools response, including name, description, and input schema.

```typescript
{
  name: "get-deepseek-thinker",
  description: "think with deepseek",
  inputSchema: {
    type: "object",
    properties: {
      originPrompt: {
        type: "string",
        description: "user's original prompt",
      },
    },
    required: ["originPrompt"],
  },
},
```
- **src/index.ts:15-49 (helper)** — Helper function using the OpenAI SDK against a DeepSeek-compatible endpoint, streaming the completion and returning only the collected reasoning content.

```typescript
async function getCompletion(_prompt: string) {
  const openai = new OpenAI({
    apiKey: process?.env?.API_KEY,
    baseURL: process?.env?.BASE_URL,
  });

  try {
    const completion = await openai.chat.completions.create({
      model: "deepseek-r1",
      messages: [{ role: "user", content: _prompt }],
      stream: true, // Stream so reasoning_content can be collected incrementally
    });

    let reasoningContent = ""; // Collects all reasoning_content deltas

    // Handle the streaming response
    for await (const chunk of completion) {
      if (chunk.choices) {
        // reasoning_content is a DeepSeek extension, absent from the OpenAI types
        // @ts-ignore
        if (chunk.choices[0]?.delta?.reasoning_content) {
          // @ts-ignore
          reasoningContent += chunk.choices[0].delta.reasoning_content;
        }
        // Once regular content arrives, reasoning is complete; return what was collected
        if (chunk.choices[0]?.delta?.content) {
          return "Answer with given reasoning process: " + reasoningContent;
        }
      }
    }

    // Fallback for a non-streaming response shape (not reached when stream: true)
    // @ts-ignore
    return "Answer with given reasoning process: " + completion.choices[0]?.message?.reasoning_content;
  } catch (error) {
    return `Error: ${error}`;
  }
}
```
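The final fallback line only makes sense for a non-streaming response; with `stream: true`, the early return inside the loop is the real exit path. For comparison, a hedged non-streaming sketch, where the `reasoning_content` field follows DeepSeek's documented response shape and the model name mirrors this repository (the cast is needed because the field is not in the OpenAI SDK types):

```typescript
import OpenAI from "openai";

// Non-streaming variant: the full reasoning trace arrives at once on
// message.reasoning_content rather than as streamed deltas.
async function getReasoningOnce(prompt: string) {
  const openai = new OpenAI({
    apiKey: process.env.API_KEY,
    baseURL: process.env.BASE_URL,
  });
  const completion = await openai.chat.completions.create({
    model: "deepseek-r1",
    messages: [{ role: "user", content: prompt }],
  });
  const message = completion.choices[0]?.message as
    | { reasoning_content?: string }
    | undefined;
  return message?.reasoning_content ?? "";
}
```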
- **src/index.ts:52-75 (helper)** — Helper function using Ollama to run the DeepSeek model locally, extracting the reasoning content between `<think>` tags from the streamed output.

```typescript
async function getOllamaCompletion(_prompt: string) {
  let reasoningContent = ""; // Collects the streamed response text

  try {
    const response = await ollama.generate({
      model: "deepseek-r1",
      prompt: _prompt,
      stream: true,
    });

    for await (const part of response) {
      reasoningContent = reasoningContent + part.response;
      // DeepSeek R1 wraps its reasoning in <think>...</think> tags
      const regex = /<think>([\s\S]*?)<\/think>/i;
      const thinkContent = reasoningContent.match(regex)?.[1];
      // Once the closing tag appears, stop streaming and return the reasoning
      if (thinkContent) {
        ollama.abort();
        return "Answer with given reasoning process: " + thinkContent;
      }
    }

    return "Answer with given reasoning process: " + reasoningContent;
  } catch (error) {
    return `Error: ${error}`;
  }
}
```
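The `<think>` extraction can be sanity-checked in isolation; the sample text below is made up:

```typescript
// Same regex as the helper above, applied to a fabricated streamed buffer.
const sample =
  "<think>Rayleigh scattering favors short wavelengths...</think>The sky is blue.";
const thought = sample.match(/<think>([\s\S]*?)<\/think>/i)?.[1];
console.log(thought); // "Rayleigh scattering favors short wavelengths..."
```

Because the match runs on the accumulated buffer each iteration, the helper aborts the Ollama stream as soon as the closing tag arrives, discarding the answer text that would follow.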