
reason

Analyze complex problems using five reasoning engines: iterative bounded reasoning, multi-perspective analysis, chain compression, evolutionary search, or structured decomposition. Auto-selects the optimal method, or choose one manually.

Instructions

[REASONING] 5 engines: inftythink (iterative bounded reasoning), coconut (multi-perspective latent analysis), extracot (reasoning chain compression), mindevolution (evolutionary search), kagthinker (structured logical decomposition with dependency DAG). Auto-selects based on params or use 'method' to override.
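For illustration only, a minimal call that relies on auto-selection could pass nothing but params; the problem text below is a made-up placeholder, and the field names come from the input schema that follows:

{
  "params": {
    "problem": "Estimate how many delivery vans a mid-size grocery chain needs."
  }
}

With no method given, the server picks an engine itself; per the option list below, a raw problem like this would presumably be routed to inftythink. sessionId is also omitted here and falls back to its default value, "default".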

Input Schema

sessionId (optional; default: "default")
    Session identifier.

method (optional)
    Override: run a specific reasoning method. If omitted, auto-selects based on params.
    inftythink — iterative bounded reasoning (default for raw problems)
    coconut — multi-perspective latent-space analysis
    extracot — compress existing reasoning steps
    mindevolution — evolutionary search over seed solutions
    kagthinker — structured logical decomposition with dependency graph

params (optional)
    Parameters for the underlying reasoning engine.
    inftythink: {problem, priorContext?, maxSegments?, maxSegmentTokens?, summaryRatio?}
    coconut: {problem, maxSteps?, breadth?, enableBreadthExploration?}
    extracot: {reasoningSteps[], problem?, maxBudget?, targetCompression?, minFidelity?}
    mindevolution: {problem, criteria?, populationSize?, maxGenerations?, seedResponses[]}
    kagthinker: {problem, knownFacts?, maxDepth?, maxSteps?}
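To pin the engine instead of relying on auto-selection, set method explicitly. A hypothetical extracot call, sketched from the field list above (the sessionId, step strings, and problem are invented, and targetCompression is assumed to be a 0-1 ratio, which the schema shown does not actually state):

{
  "sessionId": "latency-review",
  "method": "extracot",
  "params": {
    "problem": "Why does p99 latency spike every weekday morning?",
    "reasoningSteps": [
      "Cache misses rise sharply at 09:00.",
      "Session-key TTL is 15 minutes.",
      "Login traffic triples between 08:45 and 09:15."
    ],
    "targetCompression": 0.5
  }
}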
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the existence of 5 reasoning engines and the auto-selection behavior, but doesn't describe performance characteristics, rate limits, authentication needs, or what constitutes successful versus unsuccessful execution. The description adds some behavioral context but leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with two sentences that efficiently convey the core functionality. The first sentence lists all engines, and the second explains the selection mechanism. No redundant information is present, though the engine names could be better integrated with their descriptions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tool with 3 parameters (including a nested object), no annotations, and no output schema, the description is incomplete. It doesn't explain what the tool returns, error conditions, or provide examples of typical use cases. The parameter descriptions in the schema help, but the description alone leaves significant gaps for agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining the auto-selection logic ('Auto-selects based on params') and providing high-level descriptions of each method option, which complements the schema's technical enum values. However, it doesn't elaborate on how params influence auto-selection beyond what's implied.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool provides reasoning with 5 different engines and auto-selection capability. It specifies the verb 'reasoning' and the resource 'engines', but doesn't distinguish this tool from siblings like 'context_loop' or 'truthcheck', which might also involve reasoning processes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through its mention of auto-selection based on params and the method override, but doesn't explicitly state when to use this tool versus alternatives like 'context_loop' or 'truthcheck'. No specific exclusions or comparisons to sibling tools are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/XJTLUmedia/Context-First-MCP'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.