Ultrabrain Think

ultrabrain_think

Performs step-by-step reasoning for code engineering, enabling branching, revisions, quality checks, and bias detection within a structured thought chain.

Instructions

Canonical LCV reasoning gate for code work, branching, revisions, quality metrics, bias checks, confidence, and meta checkpoints.

Input Schema

| Name | Required | Description |
| --- | --- | --- |
| session_id | No | Optional reasoning session id. Defaults to "default". |
| response_format | No | Response format. |
| thought | Yes | Current Ultrabrain reasoning step. |
| thought_number | Yes | Current thought number in the chain. |
| total_thoughts | Yes | Estimated total thoughts. Adjust this as scope changes. |
| next_thought_needed | Yes | Set to false only when this chain has reached a verified conclusion. |
| step_type | No | Reasoning step category. |
| mode | No | Reasoning mode. |
| is_revision | No | Whether this step revises an earlier thought. |
| revises_thought | No | Thought number being revised. |
| branch_from_thought | No | Thought number where this branch starts. |
| branch_id | No | Branch identifier. |
| parent_thought | No | Optional parent thought reference. |
| needs_more_thoughts | No | Allows thought_number to exceed total_thoughts when scope expands. |
| depth_level | No | Current depth for serial reasoning. |
| max_depth | No | Maximum planned depth. |
| budget_mode | No | Reasoning budget mode. |
| budget_used | No | Budget used, as a percentage from 0 to 100. |
| confidence | No | Confidence from 0 to 1. |
| meta_checkpoint | No | Marks an explicit meta-reasoning checkpoint. |
| bias_detected | No | Known cognitive bias to track. |
| quality_metrics | No | Quality scores from 0 to 5. |
| evidence | No | Evidence supporting this thought. |
| assumptions | No | Assumptions to track. |
| open_questions | No | Unresolved questions. |
| alternatives | No | Alternative paths or options. |
| risks | No | Known risks. |
| next_actions | No | Concrete next checks or implementation actions. |
| tags | No | Optional tags. |
| perspective | No | Optional perspective, such as reviewer, maintainer, security, UX, or operator. |
| expected_output | No | Expected output from the reasoning chain. |
| hypothesis | No | Explicit hypothesis for this step. |
| verification | No | Verification approach or result. |
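Only four of these fields are required on every call: thought, thought_number, total_thoughts, and next_thought_needed. As a hedged sketch (the step contents, the step_type value, and the build_step validation helper are illustrative, not taken from the tool's documentation), a client might assemble an opening step and a later revision step like this:

```python
import json

# Required fields for every ultrabrain_think call; all other
# parameters in the schema above are optional.
REQUIRED = {"thought", "thought_number", "total_thoughts", "next_thought_needed"}

def build_step(**fields) -> str:
    """Validate a step payload client-side and serialize it to JSON."""
    missing = REQUIRED - fields.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    return json.dumps(fields)

# Opening step of a chain. total_thoughts is only an estimate; the
# schema lets later steps adjust it (or exceed it when
# needs_more_thoughts is set).
first = build_step(
    thought="Reproduce the failing test before changing any code.",
    thought_number=1,
    total_thoughts=5,
    next_thought_needed=True,
    step_type="analysis",  # illustrative value; valid categories are not documented here
    confidence=0.4,        # 0 to 1, per the schema
)

# A later step revising thought 1: is_revision and revises_thought are
# used together. next_thought_needed stays true until the chain has
# reached a verified conclusion.
revision = build_step(
    thought="The test is flaky rather than broken; narrow the hypothesis.",
    thought_number=3,
    total_thoughts=5,
    next_thought_needed=True,
    is_revision=True,
    revises_thought=1,
)

print(first)
print(revision)
```

The helper rejects any payload missing a required field before the call ever reaches the server, which keeps schema errors on the client side.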
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description only says it is a 'reasoning gate' without explaining side effects, state changes, or what the tool does beyond its name. Annotations show it is not read-only, destructive, or idempotent, but no additional behavioral details are given to clarify its actual impact.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that packs many terms. It is concise but may be too terse and relies on jargon ('LCV'), reducing clarity for an agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 33 parameters, nested objects, and no output schema, the description is insufficient. It does not explain the overall workflow, return values, or how to chain reasoning steps, leaving the agent with many unknowns.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so every parameter is at least named and described in the schema. The description lists reasoning categories that map to some of those parameters, but it does not explain how they are used or how they interact.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states it is a reasoning gate for code work and lists covered aspects like branching, revisions, quality metrics. This makes the purpose fairly clear, but the acronym 'LCV' is unexplained and it does not explicitly differentiate from sibling tools like 'analyze' or 'review'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives (e.g., analyze, branch). The description does not provide context for selection or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/LCV-Ideas-Software/ultrabrain-mcp'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.