
Chain of Draft Thinking

by bsmi021

chain-of-draft

Improve problem-solving with structured, iterative reasoning. Chain of Draft enables systematic critique and revision of complex analysis, ensuring clarity, accuracy, and robustness in conclusions. Ideal for tasks requiring multi-step analysis, logical consistency, and error correction.

Instructions

# Chain of Draft (CoD): Systematic Reasoning Tool

⚠️ REQUIRED PARAMETERS - ALL MUST BE PROVIDED:

  1. reasoning_chain: string[] - At least one reasoning step

  2. next_step_needed: boolean - Whether another iteration is needed

  3. draft_number: number - Current draft number (≥ 1)

  4. total_drafts: number - Total planned drafts (≥ draft_number)

  5. new_reasoning_steps: string[] - At least one new reasoning step to add to the chain

Optional parameters only required based on context:

  • is_critique?: boolean - If true, critique_focus is required

  • critique_focus?: string - Required when is_critique=true

  • revision_instructions?: string - Required when is_critique=false (revision steps)

  • step_to_review?: number - Specific step index to review

  • is_final_draft?: boolean - Marks final iteration
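For illustration, a minimal first-draft request under the parameter list above might look like the following sketch (the reasoning content itself is invented for the example):

```typescript
// Hypothetical first-draft request: the required parameters only.
const initialDraft = {
  reasoning_chain: [
    "The problem asks for the sum of the integers 1 through 100.",
    "The formula n*(n+1)/2 gives 100*101/2 = 5050."
  ],
  new_reasoning_steps: [
    "Check that the formula applies: the series is arithmetic with step 1."
  ],
  next_step_needed: true, // more critique/revision cycles to come
  draft_number: 1,
  total_drafts: 3
};
```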

    Purpose:

    Chain of Draft is an advanced reasoning tool that enhances problem-solving through structured, iterative critique and revision. Unlike traditional reasoning approaches, CoD mimics the human drafting process to improve the clarity, accuracy, and robustness of conclusions.

    When to Use This Tool:

    • Complex Problem-Solving: Tasks requiring detailed, multi-step analysis with high accuracy demands

    • Critical Reasoning: Problems where logical flow and consistency are essential

    • Error-Prone Scenarios: Questions where initial reasoning might contain mistakes or oversights

    • Multi-Perspective Analysis: Cases benefiting from examining a problem from different angles

    • Self-Correction Needs: When validation and refinement of initial thoughts are crucial

    • Detailed Solutions: Tasks requiring comprehensive explanations with supporting evidence

    • Mathematical or Logical Puzzles: Problems with potential for calculation errors or logical gaps

    • Nuanced Analysis: Situations with subtle distinctions that might be missed in a single pass

    Key Capabilities:

    • Iterative Improvement: Systematically refines reasoning through multiple drafts

    • Self-Critique: Critically examines previous reasoning to identify flaws and opportunities

    • Focused Revision: Targets specific aspects of reasoning in each iteration

    • Perspective Flexibility: Can adopt different analytical viewpoints during critique

    • Progressive Refinement: Builds toward optimal solutions through controlled iterations

    • Context Preservation: Maintains understanding across multiple drafts and revisions

    • Adaptable Depth: Adjusts the number of iterations based on problem complexity

    • Targeted Improvements: Addresses specific weaknesses in each revision cycle

    Parameters Explained:

    • reasoning_chain: Array of strings representing your current reasoning steps. Each element should contain a clear, complete thought that contributes to the overall analysis.

    • next_step_needed: Boolean flag indicating whether additional critique or revision is required. Set to true until the final, refined reasoning chain is complete.

    • draft_number: Integer tracking the current iteration (starting from 1). Increments with each critique or revision.

    • total_drafts: Estimated number of drafts needed for completion. This can be adjusted as the solution evolves.

    • is_critique: Boolean indicating the current mode:

      • true = Evaluating previous reasoning

      • false = Implementing revisions

    • critique_focus: (Required when is_critique=true) Specific aspect being evaluated, such as:

      • "logical_consistency": Checking for contradictions or flaws in reasoning

      • "factual_accuracy": Verifying correctness of facts and calculations

      • "completeness": Ensuring all relevant aspects are considered

      • "clarity": Evaluating how understandable the reasoning is

      • "relevance": Assessing if reasoning directly addresses the problem

    • revision_instructions: (Required when is_critique=false) Detailed guidance for improving the reasoning based on the preceding critique.

    • step_to_review: (Optional) Zero-based index of the specific reasoning step being critiqued or revised. When omitted, applies to the entire chain.

    • is_final_draft: (Optional) Boolean indicating whether this is the final iteration of reasoning.
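To make the conditional requirements concrete, here is a sketch of a critique-phase call followed by its revision-phase counterpart; the step content and instruction strings are invented for illustration:

```typescript
// Hypothetical critique-phase call: is_critique=true, so critique_focus is required.
const critiqueCall = {
  reasoning_chain: [
    "Assume the input list is sorted.",
    "Binary search therefore runs in O(log n)."
  ],
  new_reasoning_steps: [
    "Step 1 is asserted but never justified; the sortedness assumption needs support."
  ],
  next_step_needed: true,
  draft_number: 2,
  total_drafts: 3,
  is_critique: true,
  critique_focus: "logical_consistency",
  step_to_review: 0 // zero-based: targets the first step only
};

// Hypothetical revision-phase call: is_critique=false, so revision_instructions is expected.
const revisionCall = {
  reasoning_chain: [
    "The input list is sorted because the caller sorts it before the search.",
    "Binary search therefore runs in O(log n)."
  ],
  new_reasoning_steps: ["Justify the sortedness assumption explicitly."],
  next_step_needed: false,
  draft_number: 3,
  total_drafts: 3,
  is_critique: false,
  revision_instructions: "State why the list can be assumed sorted.",
  is_final_draft: true
};
```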

    Best Practice Workflow:

    1. Start with Initial Draft: Begin with your first-pass reasoning and set a reasonable total_drafts (typically 3-5).

    2. Alternate Critique and Revision: Use is_critique=true to evaluate reasoning, then is_critique=false to implement improvements.

    3. Focus Each Critique: Choose a specific critique_focus for each evaluation cycle rather than attempting to address everything at once.

    4. Provide Detailed Revision Guidance: Include specific, actionable revision_instructions based on each critique.

    5. Target Specific Steps When Needed: Use step_to_review to focus on particular reasoning steps that need improvement.

    6. Adjust Total Drafts As Needed: Modify total_drafts based on problem complexity and progress.

    7. Mark Completion Appropriately: Set next_step_needed=false only when the reasoning chain is complete and satisfactory.

    8. Aim for Progressive Improvement: Each iteration should measurably improve the reasoning quality.

    Example Application:

    • Initial Draft: First-pass reasoning about a complex problem

    • Critique #1: Focus on logical consistency and identify contradictions

    • Revision #1: Address logical flaws found in the critique

    • Critique #2: Focus on completeness and identify missing considerations

    • Revision #2: Incorporate overlooked aspects and strengthen reasoning

    • Final Critique: Holistic review of clarity and relevance

    • Final Revision: Refine presentation and ensure direct addressing of the problem
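The sequence above can be sketched as a series of calls; everything except the control parameters is abridged, and the instruction strings are invented for illustration:

```typescript
// Hypothetical 7-call control-parameter sequence for the example run above.
const run = [
  { draft_number: 1, is_critique: false },                                        // initial draft
  { draft_number: 2, is_critique: true,  critique_focus: "logical_consistency" }, // critique #1
  { draft_number: 3, is_critique: false, revision_instructions: "Resolve the contradictions found in draft 2." },
  { draft_number: 4, is_critique: true,  critique_focus: "completeness" },        // critique #2
  { draft_number: 5, is_critique: false, revision_instructions: "Add the considerations missed in draft 4." },
  { draft_number: 6, is_critique: true,  critique_focus: "clarity" },             // final critique
  { draft_number: 7, is_critique: false, is_final_draft: true }                   // final revision
];
```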

    Chain of Draft is particularly effective when complex reasoning must be broken down into clear steps, analyzed from multiple perspectives, and refined through systematic critique. By mimicking the human drafting process, it produces more robust and accurate reasoning than single-pass approaches.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| critique_focus | No | The specific aspect or dimension being critiqued in the current evaluation (e.g., 'logical_consistency', 'factual_accuracy', 'completeness', 'clarity', 'relevance'). Required when is_critique is true. | |
| draft_number | Yes | Current draft number in the iteration sequence (must be >= 1). Increments with each new critique or revision. | |
| is_critique | No | Boolean flag indicating whether the current step is a critique phase (true) evaluating previous reasoning, or a revision phase (false) implementing improvements. | |
| is_final_draft | No | Boolean flag indicating whether this is the final draft in the reasoning process. Helps signal the completion of the iterative refinement. | |
| new_reasoning_steps | Yes | New reasoning steps to add to the chain. | |
| next_step_needed | Yes | Boolean flag indicating whether another critique or revision cycle is needed in the reasoning chain. Set to false only when the final, satisfactory conclusion has been reached. | |
| reasoning_chain | Yes | Array of strings representing the current chain of reasoning steps. Each step should be a clear, complete thought that contributes to the overall analysis or solution. | |
| revision_instructions | No | Detailed, actionable guidance for how to revise the reasoning based on the preceding critique. Should directly address issues identified in the critique. Required when is_critique is false. | |
| step_to_review | No | Zero-based index of the specific reasoning step being targeted for critique or revision. When omitted, the critique or revision applies to the entire reasoning chain. | |
| total_drafts | Yes | Estimated total number of drafts needed to reach a complete solution (must be >= draft_number). Can be adjusted as the solution evolves. | |

Implementation Reference

  • The core handler for the 'chain-of-draft' tool, registered via server.tool(). It processes input arguments, manages session state with reasoning chains, validates data, and returns a JSON-formatted response with draft details.
    server.tool(
        TOOL_NAME,
        TOOL_DESCRIPTION,
        TOOL_SCHEMA,
        async (args, extra) => {
            const sessionId = extra.sessionId || 'default';
            // processThoughtRequest already returns a { content: [...] } response,
            // so it is passed through rather than re-wrapped and re-stringified.
            return await processThoughtRequest(args, sessionId);
        }
    );
  • Zod-based input schema defining all parameters for the chain-of-draft tool, including required fields like reasoning_chain, draft_number, and optional critique/revision fields.
    export const TOOL_SCHEMA = {
        reasoning_chain: z.array(z.string().min(1, "Reasoning steps cannot be empty"))
            .min(1, "At least one reasoning step is required")
            .describe(TOOL_PARAM_DESCRIPTIONS.reasoning_chain),
    
        next_step_needed: z.boolean()
            .describe(TOOL_PARAM_DESCRIPTIONS.next_step_needed),
    
        draft_number: z.number()
            .min(1, "Draft number must be at least 1")
            .describe(TOOL_PARAM_DESCRIPTIONS.draft_number),
    
        total_drafts: z.number()
            .min(1, "Total drafts must be at least 1")
            .describe(TOOL_PARAM_DESCRIPTIONS.total_drafts),
    
        is_critique: z.boolean()
            .optional()
            .describe(TOOL_PARAM_DESCRIPTIONS.is_critique),
    
        critique_focus: z.string()
            .min(1, "Critique focus cannot be empty")
            .optional()
            .describe(TOOL_PARAM_DESCRIPTIONS.critique_focus),
    
        revision_instructions: z.string()
            .min(1, "Revision instructions cannot be empty")
            .optional()
            .describe(TOOL_PARAM_DESCRIPTIONS.revision_instructions),
    
        step_to_review: z.number()
            .min(0, "Step index must be non-negative")
            .optional()
            .describe(TOOL_PARAM_DESCRIPTIONS.step_to_review),
    
        is_final_draft: z.boolean()
            .optional()
            .describe(TOOL_PARAM_DESCRIPTIONS.is_final_draft),
    
        new_reasoning_steps: z.array(z.string().min(1, "Reasoning steps cannot be empty"))
            .min(1, "At least one new reasoning step is required")
            .describe("New reasoning steps to add to the chain"),
    };
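Zod validates each field independently, so the cross-field rules described above (critique_focus required when is_critique is true, and so on) have to be enforced separately. A minimal sketch of such a check, using a hypothetical `crossFieldErrors` helper that does not appear in the source:

```typescript
// Hypothetical helper: cross-field rules a flat field-by-field schema cannot express.
interface DraftInput {
  draft_number: number;
  total_drafts: number;
  is_critique?: boolean;
  critique_focus?: string;
  revision_instructions?: string;
}

function crossFieldErrors(input: DraftInput): string[] {
  const errors: string[] = [];
  if (input.is_critique === true && !input.critique_focus) {
    errors.push("critique_focus is required when is_critique is true");
  }
  if (input.is_critique === false && !input.revision_instructions) {
    errors.push("revision_instructions is required when is_critique is false");
  }
  if (input.total_drafts < input.draft_number) {
    errors.push("total_drafts must be >= draft_number");
  }
  return errors;
}
```

In Zod itself the same rules could live in a `superRefine` on a `z.object(TOOL_SCHEMA)`, but a plain function keeps the sketch self-contained.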
  • src/tools/index.ts:5-7 (registration)
    Tool registration entrypoint called from initialize.ts, which invokes chainOfDraftTool to register the 'chain-of-draft' tool on the MCP server.
    export const registerTools = (server: McpServer) => {
        chainOfDraftTool(server);
    }
  • Constant defining the exact tool name 'chain-of-draft'.
    export const TOOL_NAME = "chain-of-draft";
  • Core processing function for tool logic: validates input, manages session state (history and branches), handles draft progression, and formats the response.
    const processThoughtRequest = async (input: unknown, sessionId: string) => {
        try {
            const session = getOrCreateSession(sessionId);
            const validatedInput = validateThoughtData(input);
            if (!validatedInput) {
                throw new McpError(ErrorCode.InvalidParams, "Invalid thought data");
            }
    
            // Validate draft progression
            if (validatedInput.draft_number > validatedInput.total_drafts) {
                validatedInput.total_drafts = validatedInput.draft_number;
            }
    
            // Store the thought in session history
            session.thoughtHistory.push(validatedInput);
    
            // Handle branching if specified
            if (validatedInput.step_to_review !== undefined) {
                const branchId = `branch_${validatedInput.draft_number}_${validatedInput.step_to_review}`;
                if (!session.branches[branchId]) {
                    session.branches[branchId] = [];
                }
                session.branches[branchId].push(validatedInput);
            }
    
            // Format response
            return {
                content: [{
                    type: "text",
                    text: JSON.stringify({
                        draftNumber: validatedInput.draft_number,
                        totalDrafts: validatedInput.total_drafts,
                        nextStepNeeded: validatedInput.next_step_needed,
                        isCritique: validatedInput.is_critique,
                        critiqueFocus: validatedInput.critique_focus,
                        revisionInstructions: validatedInput.revision_instructions,
                        stepToReview: validatedInput.step_to_review,
                        isFinalDraft: validatedInput.is_final_draft,
                        branches: Object.keys(session.branches),
                        thoughtHistoryLength: session.thoughtHistory.length
                    }, null, 2)
                }]
            };
        } catch (error) {
            if (error instanceof McpError) {
                throw error;
            }
            throw new McpError(
                ErrorCode.InternalError,
                error instanceof Error ? error.message : String(error)
            );
        }
    };
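The excerpt calls getOrCreateSession but does not show it. Assuming an in-memory store keyed by session ID, it could look roughly like this; everything beyond the names used in the excerpt is hypothetical:

```typescript
// Hypothetical session store sketch (the actual implementation is not shown in the excerpt).
interface Session {
  thoughtHistory: unknown[];
  branches: Record<string, unknown[]>;
}

const sessions = new Map<string, Session>();

function getOrCreateSession(sessionId: string): Session {
  let session = sessions.get(sessionId);
  if (!session) {
    // First request for this session: start with empty history and no branches.
    session = { thoughtHistory: [], branches: {} };
    sessions.set(sessionId, session);
  }
  return session;
}
```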
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It does an excellent job explaining the tool's iterative nature, critique/revision modes, parameter dependencies, and workflow patterns. However, it doesn't explicitly mention potential limitations like computational cost or time requirements for multiple iterations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is excessively long (over 800 words) with redundant sections. While well-structured with headings, it repeats information (e.g., parameter explanations appear in both the initial list and a dedicated section) and includes unnecessary elaboration that doesn't add proportional value for tool selection and invocation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex 10-parameter tool with no annotations or output schema, the description provides substantial context about workflow, usage scenarios, and behavioral patterns. It adequately compensates for the lack of structured metadata, though the absence of output information (what the tool returns) is a minor gap given the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description's 'Parameters Explained' section adds some contextual meaning (e.g., explaining what different critique_focus values represent), but mostly restates what's already in the schema descriptions. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool's purpose as 'enhances problem-solving through structured, iterative critique and revision' and provides a detailed explanation of how it works. It clearly distinguishes this as an 'advanced reasoning tool' that 'mimics the human drafting process' for improving reasoning quality, which is specific and actionable.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes a dedicated 'When to Use This Tool' section with 8 specific scenarios (e.g., 'Complex Problem-Solving', 'Critical Reasoning', 'Error-Prone Scenarios'), plus a 'Best Practice Workflow' with 8 steps and an 'Example Application' section. This provides comprehensive guidance on when and how to use the tool effectively.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
