
DeepSeek-Claude MCP Server

by HarshJ23

reason

Processes queries with DeepSeek's R1 reasoning model and returns the resulting analysis wrapped in `<ant_thinking>` tags for Claude, targeting complex multi-step reasoning tasks.

Instructions

Process a query using DeepSeek's R1 reasoning engine and prepare it for integration with Claude.

DeepSeek R1 leverages advanced reasoning capabilities that naturally evolved from large-scale 
reinforcement learning, enabling sophisticated reasoning behaviors. The output is enclosed 
within `<ant_thinking>` tags to align with Claude's thought processing framework.

Args:
    query (dict): Contains the following keys:
        - context (str): Optional background information for the query.
        - question (str): The specific question to be analyzed.

Returns:
    str: The reasoning output from DeepSeek, formatted with `<ant_thinking>` tags for seamless use with Claude.
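A minimal sketch of the documented contract (example values are hypothetical): the server joins an optional `context` to the `question` with a newline, or sends the `question` alone when no context is given.

```python
# Hypothetical query dicts following the documented input contract.
query = {
    "context": "We are sizing a cache.",
    "question": "How many entries fit in 64 MiB at 512 bytes each?",
}

# Mirror of the server's concatenation: optional context is joined to
# the question with a newline; without context, the question is sent alone.
context = query.get("context", "")
question = query.get("question", "")
full_query = f"{context}\n{question}" if context else question

bare = {"question": "Define idempotency."}
ctx2 = bare.get("context", "")
q2 = bare.get("question", "")
full2 = f"{ctx2}\n{q2}" if ctx2 else q2

print(full_query)
print(full2)
```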

Input Schema

| Name  | Required | Description                                             | Default |
| ----- | -------- | ------------------------------------------------------- | ------- |
| query | Yes      | Dict with a required `question` and optional `context`. |         |
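The listing does not reproduce the schema body itself. A plausible JSON Schema for the `query` argument, inferred from the two documented keys (this exact schema is an assumption, not taken from the server source), together with a minimal hand-rolled check:

```python
# A plausible JSON Schema for the 'query' input, inferred from the
# documented keys; the schema body is an assumption, not server source.
QUERY_SCHEMA = {
    "type": "object",
    "properties": {
        "context": {"type": "string", "description": "Optional background information."},
        "question": {"type": "string", "description": "The specific question to analyze."},
    },
    "required": ["question"],
}

def validate_query(query: dict) -> bool:
    """Minimal hand-rolled validation against QUERY_SCHEMA."""
    if not isinstance(query, dict):
        return False
    for key in QUERY_SCHEMA["required"]:
        if key not in query:
            return False
    for key in QUERY_SCHEMA["properties"]:
        if key in query and not isinstance(query[key], str):
            return False
    return True

print(validate_query({"question": "What is 2 + 2?"}))  # True
print(validate_query({"context": "math only"}))        # False: missing 'question'
```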

Implementation Reference

  • server.py:61-88 (handler)
    The handler function for the 'reason' tool. It processes the input query dictionary, fetches reasoning from DeepSeek API using a helper function, formats the output with <ant_thinking> tags, and handles errors.
    @mcp.tool()
    async def reason(query: dict) -> str:
        """
        Process a query using DeepSeek's R1 reasoning engine and prepare it for integration with Claude.
    
        DeepSeek R1 leverages advanced reasoning capabilities that naturally evolved from large-scale 
        reinforcement learning, enabling sophisticated reasoning behaviors. The output is enclosed 
        within `<ant_thinking>` tags to align with Claude's thought processing framework.
    
        Args:
            query (dict): Contains the following keys:
                - context (str): Optional background information for the query.
                - question (str): The specific question to be analyzed.
    
        Returns:
            str: The reasoning output from DeepSeek, formatted with `<ant_thinking>` tags for seamless use with Claude.
        """
        try:
            # Format the query from the input
            context = query.get("context", "")
            question = query.get("question", "")
            full_query = f"{context}\n{question}" if context else question
    
            reasoning = await get_deepseek_reasoning(full_query)
    
            return f"<ant_thinking>\n{reasoning}\n</ant_thinking>\n\nNow we should provide our final answer based on the above thinking."
        except Exception as e:
            return f"<reasoning_error>\nError: {str(e)}\n</reasoning_error>\n\nExplain the error."
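The handler's error path can be exercised offline by stubbing the DeepSeek call with a function that raises (stub names are hypothetical); the same try/except then produces the `<reasoning_error>` wrapper without any network access:

```python
import asyncio

# Stand-in for get_deepseek_reasoning that simulates an API failure
# (name and error message are hypothetical, for illustration only).
async def failing_reasoning(query: str) -> str:
    raise RuntimeError("connection refused")

async def reason_stub(query: dict) -> str:
    # Mirrors the handler's try/except wrapping behavior.
    try:
        reasoning = await failing_reasoning(query.get("question", ""))
        return f"<ant_thinking>\n{reasoning}\n</ant_thinking>"
    except Exception as e:
        return f"<reasoning_error>\nError: {e}\n</reasoning_error>\n\nExplain the error."

result = asyncio.run(reason_stub({"question": "anything"}))
print(result.splitlines()[0])  # <reasoning_error>
```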
  • Helper function that performs the actual API call to DeepSeek's reasoning model, streaming and collecting the 'reasoning_content' from the response chunks.
    import json

    import httpx

    # DEEPSEEK_API_KEY and DEEPSEEK_API_BASE are module-level values
    # defined elsewhere in server.py.
    async def get_deepseek_reasoning(query: str) -> str:
        """
        Get reasoning from the DeepSeek API.
    
        Args:
            query (str): The input query to process.
    
        Returns:
            str: The reasoning output from the API.
        """
        async with httpx.AsyncClient() as client:
            headers = {
                "Content-Type": "application/json",
                "Authorization": f"Bearer {DEEPSEEK_API_KEY}",
            }
    
            payload_body = {
                "model": "deepseek-reasoner",
                "messages": [{"role": "user", "content": query}],
                "stream": True,
                "max_tokens": 2048,
            }
    
            async with client.stream(
                "POST",
                f"{DEEPSEEK_API_BASE}/chat/completions",
                headers=headers,
                json=payload_body,
            ) as response:
                reasoning_data = []
                async for line in response.aiter_lines():
                    if line.startswith("data: "):
                        data = line[6:]
                        if data == "[DONE]":
                            continue
                        try:
                            chunk_data = json.loads(data)
                            if content := chunk_data.get("choices", [{}])[0].get("delta", {}).get("reasoning_content", ""):
                                reasoning_data.append(content)
                        except json.JSONDecodeError:
                            continue
    
                return "".join(reasoning_data)
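The streaming parse above can be checked offline with canned Server-Sent-Events lines of the shape an OpenAI-style endpoint emits (the payload contents here are made up for illustration):

```python
import json

# Canned SSE lines mimicking the DeepSeek streaming response
# (payloads are hypothetical examples, not captured API output).
sse_lines = [
    'data: {"choices": [{"delta": {"reasoning_content": "First,"}}]}',
    'data: {"choices": [{"delta": {"reasoning_content": " consider the base case."}}]}',
    'data: {"choices": [{"delta": {"content": "Final answer."}}]}',  # no reasoning_content
    "data: [DONE]",
]

reasoning_data = []
for line in sse_lines:
    if line.startswith("data: "):
        data = line[6:]
        if data == "[DONE]":          # SSE terminator sentinel
            continue
        try:
            chunk = json.loads(data)
            # Collect only the reasoning deltas, skipping answer content.
            if content := chunk.get("choices", [{}])[0].get("delta", {}).get("reasoning_content", ""):
                reasoning_data.append(content)
        except json.JSONDecodeError:
            continue

# Token fragments are concatenated without separators, since each
# delta already carries its own leading whitespace.
print("".join(reasoning_data))  # First, consider the base case.
```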
  • server.py:61-61 (registration)
    The @mcp.tool() decorator registers the 'reason' function as an MCP tool.
    @mcp.tool()



MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/HarshJ23/deepseek-claude-MCP-server'
