Glama

verify_logic

Generate verification protocols to audit reasoning traces, validate claims, identify defects, and propose patches for logical validation.

Instructions

Generate a verification protocol for a reasoning trace.

Args:
  • claim: The headline answer or assertion to validate.
  • reasoning_trace: The supporting chain-of-thought or proof steps.
  • constraints: Optional guardrails (requirements, risk limits).

Returns: A structured prompt that audits assumptions, inference steps, and evidence, then proposes patches for any defects.

Input Schema

| Name            | Required | Description                                      | Default |
| --------------- | -------- | ------------------------------------------------ | ------- |
| claim           | Yes      | The headline answer or assertion to validate.    |         |
| reasoning_trace | Yes      | The supporting chain-of-thought or proof steps.  |         |
| constraints     | No       | Optional guardrails (requirements, risk limits). | None    |

Implementation Reference

  • The core handler function for the 'verify_logic' tool: it validates inputs against the VerifyLogicInput schema, fills a structured verification-protocol template, and returns it as a string prompt.
```python
@mcp.tool()
def verify_logic(
    claim: str,
    reasoning_trace: str,
    constraints: Optional[str] = None,
) -> str:
    """Generate a verification protocol for a reasoning trace.

    Args:
        claim: The headline answer or assertion to validate.
        reasoning_trace: The supporting chain-of-thought or proof steps.
        constraints: Optional guardrails (requirements, risk limits).

    Returns:
        Structured prompt that audits assumptions, inference steps, and
        evidence, then proposes patches for any defects.
    """
    try:
        model = VerifyLogicInput(
            claim=claim, reasoning_trace=reasoning_trace, constraints=constraints
        )
    except ValidationError as e:
        return f"Input Validation Error: {e}"

    normalized_constraints = model.constraints or "<none>"
    template = """
/reasoning.verify_logic{{
    intent="Audit a reasoning trace for validity, completeness, and constraint alignment",
    input={{
        claim="{claim}",
        reasoning_trace="{reasoning_trace}",
        constraints="{constraints}"
    }},
    process=[
        /premise_check{{action="List premises and mark which are stated vs. assumed"}},
        /consistency{{action="Check each step for logical validity and missing links"}},
        /evidence_map{{action="Match claims to evidence or note gaps"}},
        /contra{{action="Search for contradictions or constraint violations"}},
        /repair_plan{{action="Suggest minimal edits or extra steps to fix defects"}}
    ],
    output={{
        verdict="pass|fail with one sentence rationale",
        defect_log="Numbered list of issues with locations in the trace",
        patched_plan="Revised steps or guardrails to repair the reasoning",
        confidence="0-1 score grounded in evidence coverage and consistency"
    }}
}}
"""
    return template.format(
        claim=model.claim,
        reasoning_trace=model.reasoning_trace,
        constraints=normalized_constraints,
    )
```
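The handler's behavior can be illustrated without the FastMCP and Pydantic dependencies. The sketch below is a simplified, hypothetical stand-in (`verify_logic_sketch` is not part of the server) that mirrors the same control flow: minimum-length validation, normalization of absent constraints to the literal placeholder `<none>`, and template fill.

```python
# Hypothetical, dependency-free sketch of the verify_logic control flow.
def verify_logic_sketch(claim, reasoning_trace, constraints=None):
    # Mirror the schema's minimum-length rules (claim >= 3, trace >= 10 chars).
    if len(claim) < 3 or len(reasoning_trace) < 10:
        return "Input Validation Error: field too short"
    # Absent constraints are normalized to the literal placeholder "<none>".
    normalized = constraints or "<none>"
    # Abbreviated template; the real tool emits the full protocol shown above.
    template = (
        "/reasoning.verify_logic{{\n"
        '  input={{ claim="{claim}", reasoning_trace="{trace}", '
        'constraints="{constraints}" }}\n'
        "}}"
    )
    return template.format(claim=claim, trace=reasoning_trace, constraints=normalized)

prompt = verify_logic_sketch(
    "The answer is 42", "Multiply six by seven to get forty-two."
)
```

Note the doubled braces in the template: they escape to literal `{`/`}` under `str.format`, so only the named placeholders are substituted.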
  • Pydantic BaseModel defining the input schema for the verify_logic tool, including fields for claim, reasoning_trace, and optional constraints.
```python
class VerifyLogicInput(BaseModel):
    claim: str = Field(
        ..., min_length=3, description="The headline answer or assertion to validate."
    )
    reasoning_trace: str = Field(
        ..., min_length=10, description="The supporting chain-of-thought."
    )
    constraints: Optional[str] = Field(None, description="Optional guardrails.")
```
  • Invocation of register_thinking_models on the FastMCP instance, which defines and registers the verify_logic tool (via @mcp.tool() decorator inside the function).
```python
# Register cognitive tools
register_thinking_models(mcp)
```
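The registration pattern can be illustrated with a minimal stand-in for the server object (`FakeMCP` below is hypothetical; the real code passes a FastMCP instance):

```python
# Hypothetical minimal stand-in for FastMCP: tool() returns a decorator that
# records the function in a registry. This is what lets register_thinking_models
# define verify_logic inside its own body and still expose it on the server.
class FakeMCP:
    def __init__(self):
        self.tools = {}

    def tool(self):
        def decorator(fn):
            self.tools[fn.__name__] = fn
            return fn
        return decorator

def register_thinking_models(mcp):
    # The real registrar defines the full handler here; this stub only
    # demonstrates the decorator mechanics.
    @mcp.tool()
    def verify_logic(claim: str, reasoning_trace: str) -> str:
        return f"auditing claim: {claim}"

mcp = FakeMCP()
register_thinking_models(mcp)  # verify_logic now appears in mcp.tools
```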
