backtracking

Generate structured prompts to rewind from failed steps, explore alternatives, and propose corrected plans for error recovery in reasoning tasks.

Instructions

Produce a recursive backtracking scaffold for error correction.

Args:
  • objective: Overall goal to satisfy.
  • failed_step: The step or subgoal that failed.
  • trace: Optional reasoning trace leading to the failure.
  • constraints: Guardrails or requirements to respect.

Returns: Structured prompt that rewinds to the last stable state, explores alternatives, and proposes a patched plan.

Input Schema

| Name        | Required | Description                                      | Default |
|-------------|----------|--------------------------------------------------|---------|
| objective   | Yes      | Overall goal to satisfy.                         |         |
| failed_step | Yes      | The step or subgoal that failed.                 |         |
| trace       | No       | Optional reasoning trace leading to the failure. | None    |
| constraints | No       | Guardrails or requirements to respect.           | None    |
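To illustrate the schema above, here is a hypothetical set of call arguments (the objective and failed_step strings are made up for this example). Only the two required fields are supplied, so trace and constraints fall back to their default of None:

```python
import json

# Hypothetical arguments for a backtracking tool call; the two optional
# fields (trace, constraints) are omitted and default to None.
payload = {
    "objective": "Migrate the database without downtime",
    "failed_step": "Step 4: schema change locked the users table",
}

# Serialize as the JSON body a client might send.
body = json.dumps(payload, indent=2)
print(body)
```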

Implementation Reference

  • The core handler function for the 'backtracking' tool. It validates inputs using the BacktrackingInput schema and generates a structured '/reasoning.backtracking' protocol template for recursive error correction and alternative exploration.
```python
@mcp.tool()
def backtracking(
    objective: str,
    failed_step: str,
    trace: Optional[str] = None,
    constraints: Optional[str] = None,
) -> str:
    """Produce a recursive backtracking scaffold for error correction.

    Args:
        objective: Overall goal to satisfy.
        failed_step: The step or subgoal that failed.
        trace: Optional reasoning trace leading to the failure.
        constraints: Guardrails or requirements to respect.

    Returns:
        Structured prompt that rewinds to last stable state, explores
        alternatives, and proposes a patched plan.
    """
    try:
        model = BacktrackingInput(
            objective=objective,
            failed_step=failed_step,
            trace=trace,
            constraints=constraints,
        )
    except ValidationError as e:
        return f"Input Validation Error: {e}"

    normalized_trace = model.trace or "<none>"
    normalized_constraints = model.constraints or "<none>"

    template = """
/reasoning.backtracking{{
    intent="Recover from failure by stepping back, exploring alternatives, and re-planning",
    input={{
        objective="{objective}",
        failed_step="{failed_step}",
        trace="{trace}",
        constraints="{constraints}"
    }},
    process=[
        /locate_break{{action="Identify point of failure and prior valid state"}},
        /hypothesize{{action="List alternative branches with pros/cons"}},
        /test_branch{{action="Mentally simulate top alternatives against constraints"}},
        /select{{action="Choose next branch with rationale"}},
        /plan_forward{{action="Lay out next steps with checkpoints"}}
    ],
    output={{
        recovery_plan="Steps to proceed from stable state",
        branch_rationale="Why this branch was chosen",
        risks="Remaining risks or unknowns",
        checkpoints="Where to re-verify along the way"
    }}
}}
"""
    return template.format(
        objective=model.objective,
        failed_step=model.failed_step,
        trace=normalized_trace,
        constraints=normalized_constraints,
    )
```
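The handler's template rendering relies on a detail worth calling out: doubled braces `{{ }}` escape literal braces in `str.format`, so only the named placeholders are substituted. A minimal stdlib sketch (no FastMCP or Pydantic required; the sample values are invented):

```python
# Doubled braces survive str.format() as single literal braces; only
# {objective}, {failed_step}, {trace}, {constraints} are substituted.
template = """/reasoning.backtracking{{
    input={{ objective="{objective}", failed_step="{failed_step}",
             trace="{trace}", constraints="{constraints}" }}
}}"""

rendered = template.format(
    objective="Ship the release",
    failed_step="CI failed on lint",
    trace="<none>",        # the handler substitutes "<none>" for None
    constraints="<none>",
)
print(rendered)
```

After formatting, the output contains single braces and the substituted values, matching what the tool returns to the caller.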
  • Pydantic BaseModel defining the input schema for the backtracking tool, including fields for objective, failed_step, trace, and constraints.
```python
class BacktrackingInput(BaseModel):
    objective: str = Field(..., min_length=3, description="Overall goal to satisfy.")
    failed_step: str = Field(
        ..., min_length=3, description="The step or subgoal that failed."
    )
    trace: Optional[str] = Field(
        None, description="Optional reasoning trace leading to the failure."
    )
    constraints: Optional[str] = Field(None, description="Guardrails or requirements.")
```
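The effect of those `min_length=3` constraints can be approximated in plain Python, without Pydantic. This is a hedged stand-in (the `validate` helper below is invented for illustration, not part of the server):

```python
# Stdlib approximation of the schema's checks: objective and failed_step
# are required and must be at least 3 characters; trace and constraints
# are optional and unvalidated.
def validate(objective, failed_step, trace=None, constraints=None):
    errors = []
    if len(objective) < 3:
        errors.append("objective: min_length 3")
    if len(failed_step) < 3:
        errors.append("failed_step: min_length 3")
    return errors

print(validate("ok", "Step 2 failed"))   # too-short objective is flagged
print(validate("Fix the build", "Step 2 failed"))
```

In the real handler, a failed Pydantic validation is caught and returned as an "Input Validation Error" string rather than raised to the caller.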
  • Registration of the thinking models, including the backtracking tool, onto the main FastMCP server instance by calling register_thinking_models(mcp).
```python
mcp = FastMCP("Context Engineering MCP")

# Register cognitive tools
register_thinking_models(mcp)
```
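The registration pattern above can be sketched without the FastMCP dependency. `FakeMCP` below is a deliberately minimal stand-in that only mimics the `tool()` decorator, and the one-line tool body is invented for the sketch:

```python
# Minimal stand-in for a FastMCP server: tool() returns a decorator that
# records each registered function by name.
class FakeMCP:
    def __init__(self):
        self.tools = {}

    def tool(self):
        def decorator(fn):
            self.tools[fn.__name__] = fn
            return fn
        return decorator

def register_thinking_models(mcp):
    # In the real server this registers several cognitive tools; here we
    # register only a stub backtracking handler.
    @mcp.tool()
    def backtracking(objective: str, failed_step: str) -> str:
        return f"rewind from: {failed_step}"

mcp = FakeMCP()
register_thinking_models(mcp)
print(sorted(mcp.tools))  # ['backtracking']
```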

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/4rgon4ut/sutra'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.