# verify_logic
Generates a verification protocol that audits a reasoning trace, validates the headline claim, identifies defects, and proposes patches.
## Instructions
Generate a verification protocol for a reasoning trace.

**Args:**

- `claim`: The headline answer or assertion to validate.
- `reasoning_trace`: The supporting chain-of-thought or proof steps.
- `constraints`: Optional guardrails (requirements, risk limits).

**Returns:**

Structured prompt that audits assumptions, inference steps, and evidence, then proposes patches for any defects.
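The returned prompt is rendered with `str.format` over a template whose literal braces are doubled so they survive substitution. A minimal sketch of that escaping technique, using an abbreviated template rather than the tool's full one:

```python
# Doubled braces {{ }} come out of str.format as literal braces,
# while single-brace fields like {claim} are substituted.
template = """
/reasoning.verify_logic{{
    input={{ claim="{claim}", constraints="{constraints}" }}
}}
"""

prompt = template.format(claim="2+2=4", constraints="<none>")
print(prompt)
```

After formatting, the output contains single braces and the substituted values, which is why every literal brace in the real template appears doubled in the source.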
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | The headline answer or assertion to validate. | — |
| reasoning_trace | Yes | The supporting chain-of-thought or proof steps. | — |
| constraints | No | Optional guardrails (requirements, risk limits). | `None` |
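The required fields carry minimum lengths (enforced by the `VerifyLogicInput` model shown under Implementation Reference). Those rules can be restated in plain Python; `check_input` here is a hypothetical helper for illustration, not part of the tool:

```python
from typing import Optional

def check_input(claim: str, reasoning_trace: str,
                constraints: Optional[str] = None) -> list[str]:
    """Hypothetical restatement of the schema's rules in plain Python."""
    errors = []
    if len(claim) < 3:
        errors.append("claim must be at least 3 characters")
    if len(reasoning_trace) < 10:
        errors.append("reasoning_trace must be at least 10 characters")
    # constraints is optional and has no length constraint
    return errors

print(check_input("4", "too short"))   # both fields under the minimum
print(check_input("2+2=4", "Addition of two even numbers is even."))  # -> []
```

In the real tool, a `ValidationError` from these checks is caught and returned as an `Input Validation Error: ...` string rather than raised.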
## Implementation Reference
- The core handler function for the `verify_logic` tool. It validates inputs using the `VerifyLogicInput` schema, fills a structured reasoning-verification protocol template, and returns it as a string prompt.

  ```python
  @mcp.tool()
  def verify_logic(
      claim: str,
      reasoning_trace: str,
      constraints: Optional[str] = None,
  ) -> str:
      """Generate a verification protocol for a reasoning trace.

      Args:
          claim: The headline answer or assertion to validate.
          reasoning_trace: The supporting chain-of-thought or proof steps.
          constraints: Optional guardrails (requirements, risk limits).

      Returns:
          Structured prompt that audits assumptions, inference steps, and
          evidence, then proposes patches for any defects.
      """
      try:
          model = VerifyLogicInput(
              claim=claim, reasoning_trace=reasoning_trace, constraints=constraints
          )
      except ValidationError as e:
          return f"Input Validation Error: {e}"

      normalized_constraints = model.constraints or "<none>"

      template = """
  /reasoning.verify_logic{{
      intent="Audit a reasoning trace for validity, completeness, and constraint alignment",
      input={{
          claim="{claim}",
          reasoning_trace="{reasoning_trace}",
          constraints="{constraints}"
      }},
      process=[
          /premise_check{{action="List premises and mark which are stated vs. assumed"}},
          /consistency{{action="Check each step for logical validity and missing links"}},
          /evidence_map{{action="Match claims to evidence or note gaps"}},
          /contra{{action="Search for contradictions or constraint violations"}},
          /repair_plan{{action="Suggest minimal edits or extra steps to fix defects"}}
      ],
      output={{
          verdict="pass|fail with one sentence rationale",
          defect_log="Numbered list of issues with locations in the trace",
          patched_plan="Revised steps or guardrails to repair the reasoning",
          confidence="0-1 score grounded in evidence coverage and consistency"
      }}
  }}
  """
      return template.format(
          claim=model.claim,
          reasoning_trace=model.reasoning_trace,
          constraints=normalized_constraints,
      )
  ```
- Pydantic `BaseModel` defining the input schema for the `verify_logic` tool, with fields for `claim`, `reasoning_trace`, and optional `constraints`.

  ```python
  class VerifyLogicInput(BaseModel):
      claim: str = Field(
          ..., min_length=3, description="The headline answer or assertion to validate."
      )
      reasoning_trace: str = Field(
          ..., min_length=10, description="The supporting chain-of-thought."
      )
      constraints: Optional[str] = Field(None, description="Optional guardrails.")
  ```
- `src/context_engineering_mcp/server.py:21-22` (registration): invocation of `register_thinking_models` on the FastMCP instance, which defines and registers the `verify_logic` tool via the `@mcp.tool()` decorator inside that function.

  ```python
  # Register cognitive tools
  register_thinking_models(mcp)
  ```