
verify_logic

Validate a claim by generating a structured audit of its reasoning trace, checking assumptions and evidence, and proposing patches for any logical defects.

Instructions

Generate a verification protocol for a reasoning trace.

    Args:
        claim: The headline answer or assertion to validate.
        reasoning_trace: The supporting chain-of-thought or proof steps.
        constraints: Optional guardrails (requirements, risk limits).

    Returns:
        Structured prompt that audits assumptions, inference steps, and
        evidence, then proposes patches for any defects.
    

Input Schema

Name              Required  Description  Default
claim             Yes
reasoning_trace   Yes
constraints       No
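As a sketch of how an agent would supply the parameters above, the following builds a JSON-RPC 2.0 `tools/call` request of the kind MCP clients send. The argument values are illustrative, not taken from the tool's documentation:

```python
import json

# Illustrative arguments for a verify_logic call; the values are made up
# for demonstration and are not from the tool's documentation.
arguments = {
    "claim": "The sorting step dominates, so the algorithm is O(n log n).",
    "reasoning_trace": (
        "1. Building the list takes O(n).\n"
        "2. Sorting it takes O(n log n).\n"
        "3. O(n log n) dominates O(n), so the total is O(n log n)."
    ),
    # Optional guardrails; omit this key entirely if unused.
    "constraints": "Assume a comparison-based sort.",
}

# A JSON-RPC 2.0 tools/call request as used by MCP clients.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "verify_logic", "arguments": arguments},
}

print(json.dumps(request, indent=2))
```

In practice an MCP client SDK constructs this envelope for you; the sketch only shows how the three schema fields map onto the `arguments` object.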

Output Schema

Name    Required  Description  Default
result  Yes
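Since the output schema requires only a single `result` field, a caller can unwrap the response defensively. This is a minimal sketch that assumes `result` is a string; the actual payload type comes from the server:

```python
def extract_result(response: dict) -> str:
    """Return the required 'result' field, failing loudly if it is absent."""
    if "result" not in response:
        raise KeyError("verify_logic response missing required 'result' field")
    return response["result"]

# Example with a stand-in response payload.
print(extract_result({"result": "audit generated"}))
```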
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of transparency. It discloses that the output is a 'structured prompt that audits assumptions, inference steps, and evidence, then proposes patches for any defects.' This explains the behavior well but lacks details on determinism, required permissions, or potential side effects (though none are expected, given the tool's generative nature).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise at approximately 5 lines, front-loads the purpose in the first sentence, and uses a clear docstring-style structure with 'Args' and 'Returns' sections. Every sentence adds value without redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity, the description explains the inputs and the nature of the output (a structured prompt for auditing and patching). Since an output schema exists, a detailed return format is not required. It could add more context, such as expected input formats or limits on trace length, but overall it is adequate for a straightforward generative tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so the description must compensate. It explains each parameter: 'claim: The headline answer or assertion to validate,' 'reasoning_trace: The supporting chain-of-thought or proof steps,' and 'constraints: Optional guardrails (requirements, risk limits).' This adds significant meaning beyond the bare schema types, though it lacks format examples or precise constraints on input values.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Generate a verification protocol for a reasoning trace.' This specifies the verb 'generate,' the resource 'verification protocol,' and the domain 'reasoning trace.' It also distinguishes the tool from siblings like 'analyze_task_complexity' or 'backtracking' by focusing on verification rather than analysis or search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide any guidance on when to use this tool versus alternatives. It lacks explicit context about prerequisites, recommended scenarios, or when not to use it. Sibling tools are not mentioned, so the agent must infer usage from the tool's purpose alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/4rgon4ut/sutra'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.