verify_logic

Audits reasoning traces and assumptions to validate claims, then proposes patches for detected defects in logical steps.

Instructions

Generate a verification protocol for a reasoning trace.

    Args:
        claim: The headline answer or assertion to validate.
        reasoning_trace: The supporting chain-of-thought or proof steps.
        constraints: Optional guardrails (requirements, risk limits).

    Returns:
        Structured prompt that audits assumptions, inference steps, and
        evidence, then proposes patches for any defects.
    

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| claim | Yes | | |
| reasoning_trace | Yes | | |
| constraints | No | | |
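For reference, a call to this tool might pass arguments shaped like the following (the values are purely illustrative, not taken from the tool's documentation):

```json
{
  "claim": "The cache hit rate doubled after the change.",
  "reasoning_trace": "Step 1: baseline hit rate was 40%. Step 2: after enabling prefetch, measured 80%. Step 3: 80/40 = 2x.",
  "constraints": "Measurements must come from production traffic."
}
```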

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |

Implementation Reference

  • Core handler function for the 'verify_logic' tool. Decorated with @mcp.tool(), it validates input against the VerifyLogicInput schema and returns a formatted protocol template for verifying reasoning logic.
        @mcp.tool()
        def verify_logic(
            claim: str,
            reasoning_trace: str,
            constraints: Optional[str] = None,
        ) -> str:
            """Generate a verification protocol for a reasoning trace.
    
            Args:
                claim: The headline answer or assertion to validate.
                reasoning_trace: The supporting chain-of-thought or proof steps.
                constraints: Optional guardrails (requirements, risk limits).
    
            Returns:
                Structured prompt that audits assumptions, inference steps, and
                evidence, then proposes patches for any defects.
            """
            try:
                model = VerifyLogicInput(
                    claim=claim, reasoning_trace=reasoning_trace, constraints=constraints
                )
            except ValidationError as e:
                return f"Input Validation Error: {e}"
    
            normalized_constraints = model.constraints or "<none>"
    
            template = """
    /reasoning.verify_logic{{
        intent="Audit a reasoning trace for validity, completeness, and constraint alignment",
        input={{
            claim="{claim}",
            reasoning_trace="{reasoning_trace}",
            constraints="{constraints}"
        }},
        process=[
            /premise_check{{action="List premises and mark which are stated vs. assumed"}},
            /consistency{{action="Check each step for logical validity and missing links"}},
            /evidence_map{{action="Match claims to evidence or note gaps"}},
            /contra{{action="Search for contradictions or constraint violations"}},
            /repair_plan{{action="Suggest minimal edits or extra steps to fix defects"}}
        ],
        output={{
            verdict="pass|fail with one sentence rationale",
            defect_log="Numbered list of issues with locations in the trace",
            patched_plan="Revised steps or guardrails to repair the reasoning",
            confidence="0-1 score grounded in evidence coverage and consistency"
        }}
    }}
    """
            return template.format(
                claim=model.claim,
                reasoning_trace=model.reasoning_trace,
                constraints=normalized_constraints,
            )
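Stripped of the FastMCP and Pydantic machinery, the handler's behavior can be sketched in plain Python. The validation below is a hand-rolled stand-in for VerifyLogicInput (not the real schema), and the template is abbreviated, but the escaping and fallback logic mirror the handler above:

```python
def verify_logic_sketch(claim, reasoning_trace, constraints=None):
    """Minimal sketch of the verify_logic handler, without Pydantic/FastMCP."""
    # Approximate the Pydantic constraints: min_length=3 and min_length=10.
    if len(claim) < 3 or len(reasoning_trace) < 10:
        return "Input Validation Error: field too short"
    # Same fallback as the real handler: missing constraints become "<none>".
    normalized = constraints or "<none>"
    # Doubled braces {{ }} survive str.format() as literal braces,
    # which is why the real template doubles every structural brace.
    template = (
        "/reasoning.verify_logic{{\n"
        '    claim="{claim}",\n'
        '    reasoning_trace="{reasoning_trace}",\n'
        '    constraints="{constraints}"\n'
        "}}"
    )
    return template.format(
        claim=claim, reasoning_trace=reasoning_trace, constraints=normalized
    )

print(verify_logic_sketch("2+2=4", "Addition of naturals: 2+2 evaluates to 4."))
```

Note the doubled braces in the source template: because the handler builds its output with `str.format()`, every literal `{` and `}` in the protocol must be written as `{{` and `}}`.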
  • Pydantic BaseModel defining the input schema for the verify_logic tool, with fields for claim, reasoning_trace, and optional constraints.
    class VerifyLogicInput(BaseModel):
        claim: str = Field(
            ..., min_length=3, description="The headline answer or assertion to validate."
        )
        reasoning_trace: str = Field(
            ..., min_length=10, description="The supporting chain-of-thought."
        )
        constraints: Optional[str] = Field(None, description="Optional guardrails.")
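The same constraints can be expressed without Pydantic. The dataclass below is a hypothetical stdlib-only equivalent (the name `VerifyLogicInputSketch` is invented for illustration), enforcing the same minimum lengths at construction time:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class VerifyLogicInputSketch:
    """Stdlib stand-in for VerifyLogicInput; enforces the same min lengths."""
    claim: str
    reasoning_trace: str
    constraints: Optional[str] = None

    def __post_init__(self):
        # Mirror Field(..., min_length=3) and Field(..., min_length=10).
        if len(self.claim) < 3:
            raise ValueError("claim must be at least 3 characters")
        if len(self.reasoning_trace) < 10:
            raise ValueError("reasoning_trace must be at least 10 characters")
```

Unlike Pydantic's ValidationError, which aggregates all field failures, this sketch raises on the first violation; it is meant only to show where the length bounds come from.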
  • Invocation of register_thinking_models(mcp), which applies @mcp.tool() decorators to register the verify_logic tool (and the other thinking models) on the FastMCP server instance.
    register_thinking_models(mcp)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It mentions generating a structured prompt for auditing assumptions, inference steps, and evidence, but lacks details on permissions, rate limits, side effects, or what happens with invalid inputs. For a tool with no annotation coverage, this is insufficient behavioral disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose. The parameter explanations are necessary given the lack of schema descriptions, though the formatting with bullet-like indentation could be slightly cleaner.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (validation/auditing function), no annotations, and the presence of an output schema (which covers return values), the description provides good coverage of purpose and parameters. However, it could better address behavioral aspects like error handling or performance characteristics.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by explaining all three parameters: 'claim' as the headline answer to validate, 'reasoning_trace' as supporting chain-of-thought, and 'constraints' as optional guardrails. This adds crucial meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('generate a verification protocol') and resources ('for a reasoning trace'). It distinguishes from siblings by focusing on validation/auditing of reasoning traces rather than analysis, design, or retrieval tasks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when needing to validate reasoning traces with claims and optional constraints, but doesn't explicitly state when to use this tool versus alternatives like 'analyze_task_complexity' or 'backtracking'. No exclusions or clear alternatives are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
