# get-deepseek-thinker
Generates reasoning content and thought-process output for focused analysis by calling DeepSeek's API or a local Ollama server.
## Instructions
think with deepseek
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| originPrompt | Yes | user's original prompt | (none) |
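For illustration, an MCP `tools/call` request for this tool could carry a payload shaped like the following. This is a minimal sketch of the JSON-RPC envelope; the exact wrapping depends on the MCP client in use:

```typescript
// Hypothetical tools/call payload for get-deepseek-thinker (MCP JSON-RPC shape)
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get-deepseek-thinker",
    arguments: {
      originPrompt: "Why is the sky blue?",
    },
  },
};

console.log(JSON.stringify(request.params.arguments));
```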
## Implementation Reference
- **src/index.ts:123-151 (handler)**: Executes the `get-deepseek-thinker` tool. Validates input against the schema, selects the completion method based on the `USE_OLLAMA` environment variable, calls the appropriate helper, and returns the reasoning content.

  ```typescript
  if (name === "get-deepseek-thinker") {
    const { originPrompt } = GetDeepseekThinkerSchema.parse(args);
    if (!originPrompt) {
      return {
        content: [
          {
            type: "text",
            text: "Please enter a prompt",
          },
        ],
      };
    }

    let result = "";
    if (process?.env?.USE_OLLAMA) {
      result = await getOllamaCompletion(originPrompt);
    } else {
      result = await getCompletion(originPrompt);
    }

    return {
      content: [
        {
          type: "text",
          text: result,
        },
      ],
    };
  }
  ```
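  The backend choice is just a truthiness check on `USE_OLLAMA`. As a standalone sketch (a hypothetical `chooseBackend` helper, not present in the source), the same selection logic can be written and tested without any API calls:

  ```typescript
  // Mirrors the handler's branch: any non-empty USE_OLLAMA routes to Ollama,
  // otherwise the OpenAI-compatible DeepSeek API is used.
  function chooseBackend(env: Record<string, string | undefined>): "ollama" | "openai" {
    return env.USE_OLLAMA ? "ollama" : "openai";
  }

  console.log(chooseBackend(process.env));
  ```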
- **src/index.ts:78-80 (schema)**: Zod schema validating the tool's input; `originPrompt` is a required string.

  ```typescript
  const GetDeepseekThinkerSchema = z.object({
    originPrompt: z.string(),
  });
  ```
- **src/index.ts:97-116 (registration)**: Registers the `get-deepseek-thinker` tool in the `ListToolsRequest` handler, providing its name, description, and input schema.

  ```typescript
  server.setRequestHandler(ListToolsRequestSchema, async () => {
    return {
      tools: [
        {
          name: "get-deepseek-thinker",
          description: "think with deepseek",
          inputSchema: {
            type: "object",
            properties: {
              originPrompt: {
                type: "string",
                description: "user's original prompt",
              },
            },
            required: ["originPrompt"],
          },
        },
      ],
    };
  });
  ```
- **src/index.ts:15-49 (helper)**: Calls an OpenAI-compatible API with the `deepseek-r1` model and streams the response, accumulating `reasoning_content` deltas until the first `content` delta arrives, which signals that the reasoning phase is complete.

  ```typescript
  async function getCompletion(_prompt: string) {
    const openai = new OpenAI({
      apiKey: process?.env?.API_KEY,
      baseURL: process?.env?.BASE_URL,
    });
    try {
      const completion = await openai.chat.completions.create({
        model: "deepseek-r1",
        messages: [{ role: "user", content: _prompt }],
        stream: true, // Stream so reasoning deltas arrive incrementally
      });

      let reasoningContent = ""; // Collects all reasoning_content deltas

      // Handle streaming response
      for await (const chunk of completion) {
        if (chunk.choices) {
          // If there's reasoning_content, add it to the collector
          // @ts-ignore
          if (chunk.choices[0]?.delta?.reasoning_content) {
            // @ts-ignore
            reasoningContent += chunk.choices[0].delta.reasoning_content;
          }
          // The first content delta means reasoning is complete; return what was collected
          if (chunk.choices[0]?.delta?.content) {
            return "Answer with given reasoning process: " + reasoningContent;
          }
        }
      }
      // Stream ended without a content delta; return whatever reasoning was collected
      return "Answer with given reasoning process: " + reasoningContent;
    } catch (error) {
      return `Error: ${error}`;
    }
  }
  ```

  Note: the original fallback read `completion.choices[0]?.message?.reasoning_content`, but a streamed completion has no `choices` array on the top-level object; returning the accumulated `reasoningContent` is the working equivalent.
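  The accumulation pattern above can be exercised in isolation with simulated chunks. This sketch (a hypothetical `collectReasoning` helper, not in the source) shows the same loop: gather `reasoning_content` deltas and stop at the first `content` delta:

  ```typescript
  // Simulated stream chunks in the shape the helper consumes
  type Chunk = {
    choices: { delta: { reasoning_content?: string; content?: string } }[];
  };

  function collectReasoning(chunks: Chunk[]): string {
    let reasoningContent = "";
    for (const chunk of chunks) {
      const delta = chunk.choices[0]?.delta;
      if (delta?.reasoning_content) reasoningContent += delta.reasoning_content;
      if (delta?.content) break; // first content delta ends the reasoning phase
    }
    return reasoningContent;
  }
  ```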
- **src/index.ts:52-75 (helper)**: Ollama-based alternative. Streams a generation from `deepseek-r1`, accumulates the response until a complete `<think>...</think>` block appears, then aborts the stream and returns the captured reasoning.

  ```typescript
  async function getOllamaCompletion(_prompt: string) {
    let reasoningContent = ""; // Collects the streamed response
    try {
      const response = await ollama.generate({
        model: "deepseek-r1",
        prompt: _prompt,
        stream: true,
      });
      for await (const part of response) {
        reasoningContent += part.response;
        // If the buffer contains a complete <think>...</think> block, extract it
        const regex = /<think>([\s\S]*?)<\/think>/i;
        const thinkContent = reasoningContent.match(regex)?.[1];
        if (thinkContent) {
          ollama.abort(); // Stop streaming; only the reasoning is needed
          return "Answer with given reasoning process: " + thinkContent;
        }
      }
      return "Answer with given reasoning process: " + reasoningContent;
    } catch (error) {
      return `Error: ${error}`;
    }
  }
  ```
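  The `<think>` extraction can be demonstrated on its own. This sketch (a hypothetical `extractThink` helper, not in the source) applies the same regex: it returns `undefined` while the block is still incomplete and the inner text once the closing tag arrives:

  ```typescript
  // Extract the contents of the first complete <think>...</think> block,
  // as the Ollama helper does while accumulating the stream.
  function extractThink(buffer: string): string | undefined {
    const regex = /<think>([\s\S]*?)<\/think>/i;
    return buffer.match(regex)?.[1];
  }
  ```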