Glama

reason

Analyze and process complex queries using DeepSeek's advanced reasoning engine, preparing outputs with <ant_thinking> tags for integration with Claude or DeepSeek V3 systems.

Instructions

Process a query using DeepSeek's R1 reasoning engine and prepare it for integration with DeepSeek V3 or Claude.

DeepSeek R1 leverages advanced reasoning capabilities that naturally evolved from large-scale 
reinforcement learning, enabling sophisticated reasoning behaviors. The output is enclosed 
within `<ant_thinking>` tags to align with V3 or Claude's thought processing framework.

Args:
    query (dict): Contains the following keys:
        - context (str): Optional background information for the query.
        - question (str): The specific question to be analyzed.

Returns:
    str: The reasoning output from DeepSeek, formatted with `<ant_thinking>` tags for seamless use with V3 or Claude.

Input Schema

Name     Required    Description    Default
query    Yes

Output Schema

Name     Required    Description    Default
result   Yes
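
Since the published schema only declares a generic `query` object, the expected payload shape comes from the docstring. A minimal sketch (the field values are illustrative, and the prompt assembly mirrors the handler's own logic):

```python
# Hypothetical payload following the documented structure: an optional
# "context" string plus the required "question" string.
query = {
    "context": "A FastAPI service intermittently returns 502s under load.",
    "question": "What are the most likely causes to investigate first?",
}

# The handler joins context and question into a single prompt when
# context is present, and falls back to the bare question otherwise.
context = query.get("context", "")
question = query.get("question", "")
full_query = f"{context}\n{question}" if context else question

print(full_query)
```

Omitting `"context"` is valid; the question alone is then sent to the reasoning engine.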

Implementation Reference

  • The main handler for the 'reason' tool. It takes a query dict with 'context' and 'question', fetches reasoning from DeepSeek R1 via get_infini_reasoning, and formats it into a structured <ant_thinking> block. Includes error handling.
    @mcp.tool()
    async def reason(query: dict) -> str:
        """
        Process a query using DeepSeek's R1 reasoning engine and prepare it for integration with DeepSeek V3 or Claude.
    
        DeepSeek R1 leverages advanced reasoning capabilities that naturally evolved from large-scale 
        reinforcement learning, enabling sophisticated reasoning behaviors. The output is enclosed 
        within `<ant_thinking>` tags to align with V3 or Claude's thought processing framework.
    
        Args:
            query (dict): Contains the following keys:
                - context (str): Optional background information for the query.
                - question (str): The specific question to be analyzed.
    
        Returns:
            str: The reasoning output from DeepSeek, formatted with `<ant_thinking>` tags for seamless use with V3 or Claude.
        """
        try:
            # Format the query from the input
            context = query.get("context", "")
            question = query.get("question", "")
            full_query = f"{context}\n{question}" if context else question
    
            # Get the reasoning from DeepSeek
            r1_reasoning = await get_infini_reasoning(full_query)
    
            # Structure the output for V3
            structured_reasoning = f"""<ant_thinking>
    [DEEPSEEK R1 INITIAL ANALYSIS]
    • First Principles: {r1_reasoning[:150]}
    • Component Breakdown: Decomposing the problem space...
    • Key Variables: Identifying critical factors...
    
    [DEEPSEEK R1 REASONING CHAIN]
    • Logical Framework: {r1_reasoning[150:300]}
    • Causal Relationships: Mapping dependencies...
    • Inference Patterns: Extracting reasoning structures...
    
    [DEEPSEEK R1 CRITICAL ANALYSIS]
    • Core Assumptions: {r1_reasoning[300:450]}
    • Edge Cases: Stress-testing the logic...
    • Uncertainty Assessment: Quantifying confidence levels...
    
    [DEEPSEEK R1 SYNTHESIS]
    • Primary Conclusions: {r1_reasoning[450:600]}
    • Confidence Metrics: Evaluating reasoning robustness...
    • Action Implications: Practical consequences...
    
    [DEEPSEEK R1 METACOGNITION]
    • Reasoning Quality: {r1_reasoning[600:]}
    • Bias Detection: Checking for systematic errors...
    • Knowledge Boundaries: Acknowledging limitations...
    </ant_thinking>
    
    Based on DeepSeek R1's comprehensive analysis, proceeding to formulate response...
            """
    
            return structured_reasoning
        except Exception as e:
            return f"""<reasoning_error>
    [DEEPSEEK R1 ERROR ANALYSIS]
    • Error Nature: {str(e)}
    • Processing Impact: Effects on reasoning pipeline
    • Recovery Options: Alternative reasoning paths
    • System Status: Current reasoning capabilities
    
    [MITIGATION STRATEGY]
    • Immediate Actions: Required interventions
    • Fallback Logic: Alternative reasoning approaches
    • Quality Assurance: Validation requirements
    </reasoning_error>
    
    Analyzing DeepSeek R1's error state and implications..."""
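
One detail worth noting in the template above: Python string slicing never raises for out-of-range indices, so the fixed 150-character sections degrade to empty strings rather than errors when the model returns a short reasoning trace. A quick illustration:

```python
# Hypothetical short reasoning output; real traces are usually much longer.
r1_reasoning = "The answer follows directly from the premise."

sections = [
    r1_reasoning[:150],     # the full text (shorter than 150 chars)
    r1_reasoning[150:300],  # empty string: start index is past the end
    r1_reasoning[600:],     # empty string: no IndexError is raised
]

print([len(s) for s in sections])
```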
  • Helper function called by the 'reason' handler to fetch reasoning output from Infini AI's DeepSeek R1 model via streaming API, extracting 'reasoning_content' from deltas.
    async def get_infini_reasoning(query: str) -> str:
        """
        Get DeepSeek reasoning from the Infini API.
    
        DeepSeek R1 serves as our primary reasoning engine, leveraging its:
        - Advanced cognitive modeling
        - Multi-step reasoning capabilities
        - Emergent reasoning patterns
        - Robust logical analysis framework
        Args:
            query (str): The input query to process.
        Returns:
            str: The reasoning output from the API.
        """
        async with httpx.AsyncClient() as client:
            # print("starting to get infini reasoning")
            headers = {
                "Content-Type": "application/json",
                "Authorization": f"Bearer {INFINI_API_KEY}",
                "Accept": "application/json, text/event-stream, */*",
            }
    
            payload_body = {
                "model": "deepseek-r1",
                "messages": [{
                    "role": "user", 
                    "content": query
    #                 "content": f"""[REASONING TASK]
    # Please analyze this query using your advanced reasoning capabilities:
    
    # CONTEXT & QUERY:
    # {query}
    
    # REQUIRED ANALYSIS STRUCTURE:
    # 1. Initial impressions and key components
    # 2. Logical relationships and dependencies
    # 3. Critical assumptions and implications
    # 4. Synthesis and confidence assessment
    
    # Please structure your response to cover all these aspects systematically.
    #                 """
                    }],
                "stream": True,
                "temperature": 0.6
            }
    
            async with client.stream(
                    "POST",
                    f"{INFINI_API_BASE}/chat/completions",
                    headers=headers,
                    json=payload_body,
                    timeout=INFINI_THINKING_TIMEOUT
            ) as response:
                reasoning_data = []
                async for line in response.aiter_lines():
                    # print(f"line: {line}")
                    if line.startswith("data: "):
                        data = line[6:]
                        if data == "[DONE]":
                            break
                        try:
                            chunk_data = json.loads(data)
                            choices = chunk_data.get("choices") or [{}]
                            delta = choices[0].get("delta") or {}
                            if content := delta.get("reasoning_content"):
                                reasoning_data.append(content)
                                # else:
                                #     reasoning_data.append(delta.get("content").strip() if delta.get("content") else "")
                        except json.JSONDecodeError:
                            continue
                reasoning_content = "".join(reasoning_data)
                # print(f"reasoning_content: {reasoning_content}")
                return reasoning_content
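
The streaming loop above can be exercised offline by feeding it canned SSE lines. The sketch below (with made-up event payloads) applies the same parsing logic: only `data:` lines are considered, the `[DONE]` sentinel, the usual OpenAI-style stream terminator, ends processing, and `reasoning_content` deltas are accumulated:

```python
import json

# Canned server-sent-event lines standing in for a live response stream.
lines = [
    'data: {"choices": [{"delta": {"reasoning_content": "Step 1. "}}]}',
    'data: {"choices": [{"delta": {"reasoning_content": "Step 2."}}]}',
    "data: [DONE]",
]

reasoning_data = []
for line in lines:
    if line.startswith("data: "):
        data = line[6:]
        if data == "[DONE]":
            break
        try:
            chunk = json.loads(data)
        except json.JSONDecodeError:
            continue
        delta = chunk.get("choices", [{}])[0].get("delta", {})
        if content := delta.get("reasoning_content"):
            reasoning_data.append(content)

result = "".join(reasoning_data)
print(result)
```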
  • server.py:96-96 (registration)
    Registration of the 'reason' tool using FastMCP decorator, which registers the following function as a tool named 'reason'.
    @mcp.tool()
  • Input schema described in the tool docstring: query dict with optional 'context' str and required 'question' str.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions the tool 'leverages advanced reasoning capabilities' and outputs formatted text, but fails to disclose critical behavioral traits such as rate limits, error handling, authentication requirements, or performance characteristics. The description adds some context about the reasoning engine but leaves significant gaps for a tool with potential computational costs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the core purpose stated in the first sentence. Additional sentences provide useful context about the reasoning engine and output formatting. There is minor redundancy in mentioning 'V3 or Claude' twice, but overall, it's efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (1 parameter with nested structure), no annotations, and an output schema that exists (though not detailed here), the description is reasonably complete. It explains the purpose, parameter semantics, and output format, though it could improve by addressing behavioral aspects like error cases or integration specifics.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds substantial meaning beyond the input schema, which has 0% description coverage and only specifies a generic object. It details that the 'query' parameter is a dict with 'context' (optional background) and 'question' (specific question) keys, clarifying the expected structure and semantics. This compensates well for the schema's lack of documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Process a query using DeepSeek's R1 reasoning engine and prepare it for integration with DeepSeek V3 or claude.' It specifies the verb ('process'), resource ('query'), and technology ('DeepSeek's R1 reasoning engine'), but since there are no sibling tools, it cannot demonstrate differentiation from alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by mentioning integration with 'V3 or Claude's thought processing framework,' suggesting it's for preparing reasoning outputs for those systems. However, it lacks explicit guidance on when to use this tool versus alternatives (e.g., direct API calls or other reasoning engines) and does not specify prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/moyu6027/deepseek-MCP-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server