think

Add structured thoughts to a cognitive graph using reasoning strategies like sequential, dialectic, parallel, analogical, or abductive approaches. Each thought becomes a node with confidence scoring and connections for deep analysis.

Instructions

Add a thought to the cognitive graph using the current strategy. Supports sequential, dialectic, parallel, analogical, and abductive reasoning strategies. Each thought becomes a node in a DAG with confidence scoring, edges, and metadata.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| content | Yes | The thought content | |
| type | No | Type of thought | analysis |
| strategy | No | Reasoning strategy to use | current strategy from metacognition |
| confidence | No | Initial confidence in this thought (0-1) | 0.5 |
| parentId | No | ID of parent thought to connect to | last active leaf |
| branch | No | Branch name for parallel exploration | main |
| tags | No | Tags for categorizing this thought | |
| edgeTo | No | Create an explicit edge to another node | |
| dialectic | No | Dialectic mode: provide thesis (and optionally antithesis/synthesis) | |
| parallel | No | Parallel mode: multiple independent thoughts to explore simultaneously | |
| analogical | No | Analogical mode: source domain, mapping, and projected conclusion | |
| abductive | No | Abductive mode: observation, explanations, and best explanation | |
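To make the parameter table concrete, here is a sketch of what argument payloads for this tool might look like. The field names come from the schema above; the content values and the nested shape of the `dialectic` object are illustrative assumptions, not taken from the server's actual JSON Schema.

```python
import json

# Hypothetical arguments for a default (sequential) thought. Omitted
# fields fall back to the documented defaults: type=analysis,
# confidence=0.5, branch=main, parent=last active leaf.
sequential_args = {
    "content": "The failing test only breaks under high concurrency.",
    "tags": ["debugging", "concurrency"],
}

# Hypothetical dialectic-mode arguments: a thesis is provided, with an
# optional antithesis; the nested key names are assumptions.
dialectic_args = {
    "content": "Weigh a global lock against a lock-free queue.",
    "strategy": "dialectic",
    "confidence": 0.6,
    "dialectic": {
        "thesis": "A global lock is simplest and easiest to verify.",
        "antithesis": "A lock-free queue scales better under contention.",
    },
}

print(json.dumps(dialectic_args, indent=2))
```

Only `content` is required in either case; everything else refines how the resulting node is typed, scored, and connected in the graph.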
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that thoughts become nodes in a DAG with confidence scoring, edges, and metadata, which adds behavioral context beyond basic creation. However, it doesn't cover important aspects such as whether this is a write operation, whether it is idempotent, what the error conditions are, or how the graph persists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences efficiently convey core functionality and key features. The first sentence states the main purpose, while the second adds important structural context about the DAG. No wasted words, though it could be slightly more front-loaded with the most critical information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tool with 13 parameters, nested objects, and no output schema or annotations, the description provides adequate but incomplete context. It explains the cognitive-graph concept and the reasoning strategies, but doesn't address return values, error handling, or how this tool integrates with its siblings. The schema compensates on parameter documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 13 parameters thoroughly. The description mentions reasoning strategies that map to some parameters (strategy, dialectic, parallel, analogical, abductive) but adds little meaning beyond what the schema provides. A baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Add') and resource ('thought to the cognitive graph') with specific context about reasoning strategies and node structure. It distinguishes from potential siblings like 'evaluate' or 'graph' by focusing on creation rather than assessment or visualization, though it doesn't explicitly name alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through "using the current strategy" and mentions multiple reasoning strategies, suggesting when different approaches might apply. However, it lacks explicit guidance on when to choose this tool over siblings like 'evaluate' or 'metacog', and states no prerequisites for use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/hubinoretros/deep-thinker'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.