Glama

verify_logic

Audits reasoning traces and assumptions to validate claims, then proposes patches for detected defects in logical steps.

Instructions

Generate a verification protocol for a reasoning trace.

Args:
- claim: The headline answer or assertion to validate.
- reasoning_trace: The supporting chain-of-thought or proof steps.
- constraints: Optional guardrails (requirements, risk limits).

Returns: Structured prompt that audits assumptions, inference steps, and evidence, then proposes patches for any defects.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| claim | Yes | The headline answer or assertion to validate. | — |
| reasoning_trace | Yes | The supporting chain-of-thought or proof steps. | — |
| constraints | No | Optional guardrails (requirements, risk limits). | — |
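For illustration, arguments like the following satisfy the schema; the claim and trace text here are invented examples, but the length rules (claim at least 3 characters, reasoning_trace at least 10, constraints optional) match the schema above.

```python
# Hypothetical example arguments for verify_logic; values are invented,
# but the shape and minimum lengths follow the input schema.
args = {
    "claim": "sqrt(2) is irrational.",
    "reasoning_trace": (
        "Assume sqrt(2) = p/q in lowest terms; then p**2 = 2*q**2, "
        "so p is even, which forces q to be even too -- a contradiction."
    ),
    "constraints": None,  # optional, may be omitted entirely
}

assert len(args["claim"]) >= 3
assert len(args["reasoning_trace"]) >= 10
```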

Implementation Reference

  • Core handler function for the 'verify_logic' tool. Decorated with @mcp.tool(), it validates input against the VerifyLogicInput schema and returns a formatted protocol template for verifying reasoning logic.
```python
@mcp.tool()
def verify_logic(
    claim: str,
    reasoning_trace: str,
    constraints: Optional[str] = None,
) -> str:
    """Generate a verification protocol for a reasoning trace.

    Args:
        claim: The headline answer or assertion to validate.
        reasoning_trace: The supporting chain-of-thought or proof steps.
        constraints: Optional guardrails (requirements, risk limits).

    Returns:
        Structured prompt that audits assumptions, inference steps, and
        evidence, then proposes patches for any defects.
    """
    try:
        model = VerifyLogicInput(
            claim=claim, reasoning_trace=reasoning_trace, constraints=constraints
        )
    except ValidationError as e:
        return f"Input Validation Error: {e}"

    normalized_constraints = model.constraints or "<none>"

    template = """
/reasoning.verify_logic{{
    intent="Audit a reasoning trace for validity, completeness, and constraint alignment",
    input={{
        claim="{claim}",
        reasoning_trace="{reasoning_trace}",
        constraints="{constraints}"
    }},
    process=[
        /premise_check{{action="List premises and mark which are stated vs. assumed"}},
        /consistency{{action="Check each step for logical validity and missing links"}},
        /evidence_map{{action="Match claims to evidence or note gaps"}},
        /contra{{action="Search for contradictions or constraint violations"}},
        /repair_plan{{action="Suggest minimal edits or extra steps to fix defects"}}
    ],
    output={{
        verdict="pass|fail with one sentence rationale",
        defect_log="Numbered list of issues with locations in the trace",
        patched_plan="Revised steps or guardrails to repair the reasoning",
        confidence="0-1 score grounded in evidence coverage and consistency"
    }}
}}
"""
    return template.format(
        claim=model.claim,
        reasoning_trace=model.reasoning_trace,
        constraints=normalized_constraints,
    )
```
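One detail worth noting in the handler above: the template doubles every literal brace (`{{`/`}}`) so that `str.format` emits them verbatim while still substituting the `{claim}`, `{reasoning_trace}`, and `{constraints}` placeholders. A minimal sketch of that mechanism:

```python
# Doubled braces survive str.format as literal '{' / '}', while single-brace
# fields like {claim} are substituted -- the same trick the verify_logic
# template relies on.
template = '/reasoning.verify_logic{{ input={{ claim="{claim}" }} }}'
rendered = template.format(claim="2+2=4")
print(rendered)  # /reasoning.verify_logic{ input={ claim="2+2=4" } }
```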
  • Pydantic BaseModel defining the input schema for the verify_logic tool, with fields for claim, reasoning_trace, and optional constraints.
```python
class VerifyLogicInput(BaseModel):
    claim: str = Field(
        ..., min_length=3, description="The headline answer or assertion to validate."
    )
    reasoning_trace: str = Field(
        ..., min_length=10, description="The supporting chain-of-thought."
    )
    constraints: Optional[str] = Field(None, description="Optional guardrails.")
```
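The same length rules can be mirrored without pydantic. The dataclass below is a hypothetical stdlib stand-in used only to illustrate the constraints, not the project's actual model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VerifyLogicArgs:
    """Hypothetical stdlib stand-in mirroring VerifyLogicInput's rules."""
    claim: str
    reasoning_trace: str
    constraints: Optional[str] = None

    def __post_init__(self) -> None:
        # Mirror min_length=3 on claim and min_length=10 on reasoning_trace.
        if len(self.claim) < 3:
            raise ValueError("claim must be at least 3 characters")
        if len(self.reasoning_trace) < 10:
            raise ValueError("reasoning_trace must be at least 10 characters")

ok = VerifyLogicArgs(claim="2+2=4", reasoning_trace="By counting on four fingers.")
print(ok.constraints)  # None by default
```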
  • Invocation of register_thinking_models(mcp) which applies @mcp.tool() decorators to register the verify_logic tool (and other thinking models) on the FastMCP server instance.
```python
register_thinking_models(mcp)
```
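The registration call can be pictured with a tiny decorator registry. The class below is a hypothetical sketch of the pattern only; FastMCP's real API lives in the `mcp` package:

```python
class ToolRegistry:
    """Hypothetical stand-in for a FastMCP server: tool() registers by name."""
    def __init__(self):
        self.tools = {}

    def tool(self):
        def decorator(fn):
            self.tools[fn.__name__] = fn
            return fn
        return decorator

def register_thinking_models(registry):
    # Applies the decorator to each thinking-model handler; only
    # verify_logic is sketched here.
    @registry.tool()
    def verify_logic(claim, reasoning_trace, constraints=None):
        return f"auditing: {claim}"

mcp = ToolRegistry()
register_thinking_models(mcp)
print(sorted(mcp.tools))  # ['verify_logic']
```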

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/4rgon4ut/sutra'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.