
think

Structure complex reasoning processes, log thoughts, and analyze problems step-by-step. Use for problem definition, analysis, and self-reflection without altering data.

Instructions

Use the tool to think about something. It will not obtain new information or change the database, but just append the thought to the log. Use it when complex reasoning or some cache memory is needed. Consider including: problem definition, relevant context, analysis steps, self-reflection on your reasoning, and conclusions. Adapt this structure as needed for your specific thought process.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| allowResearch | No | Whether to allow research via external tools during the reasoning process | false |
| associateWithEntity | No | Optional entity name to associate this thought with | |
| category | No | Optional category for the thought (e.g., "problem-solving", "analysis", "planning") | |
| context | No | Optional context or situation relevant to this thought (e.g., project, meeting, or scenario) | |
| currentStep | No | The current step number in the thinking process | |
| formatOutput | No | Whether to apply markdown formatting to the output | true |
| formatType | No | The type of formatting to apply | auto |
| plannedSteps | No | The total number of steps planned for this thinking process | |
| reflectPrompt | No | Custom prompt for the self-reflection stage | |
| researchQuery | No | Optional research query to execute during the reasoning process | |
| selfReflect | No | Whether to perform a self-reflection pass after generating the answer | false |
| storeInMemory | No | Whether to store this thought in the knowledge graph memory | false |
| structuredReasoning | Yes | A structured thought process to work through complex problems. Use this as a dedicated space for reasoning step-by-step. | |
| tags | No | Optional tags to help categorize and find this thought later | |
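For illustration, a minimal call payload matching this schema might look like the following sketch (all field values are invented for the example; only `structuredReasoning` is required):

```typescript
// Hypothetical arguments for the 'think' tool; values are illustrative only.
const exampleParams = {
  structuredReasoning:
    "Problem: the cache invalidation strategy is unclear.\n" +
    "Analysis: entries are evicted on write, but reads never refresh TTLs.\n" +
    "Conclusion: switch to a read-through cache with explicit TTL refresh.",
  category: "analysis",
  tags: ["caching", "architecture"],
  plannedSteps: 3,
  currentStep: 1,
  selfReflect: false,
};

// The one hard constraint in the base schema: at least 10 characters of reasoning.
const isValid = exampleParams.structuredReasoning.length >= 10;
```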

Implementation Reference

  • The execute handler for the 'think' tool. It implements step counting logic, estimates planned steps based on content length if not provided, increments the current step, and formats the output with step information.
    execute: async (params: any) => {
      const { structuredReasoning, selfReflect = false } = params;
      
      // Step counter logic
      // Initialize or estimate plannedSteps if not provided
      if (!params.plannedSteps) {
        // Roughly estimate based on content length
        const contentLength = structuredReasoning.length;
        params.plannedSteps = Math.max(1, Math.min(5, Math.ceil(contentLength / 300)));
      }
      
      // Initialize currentStep if not provided
      if (!params.currentStep) {
        params.currentStep = 1;
      } else {
        // Increment the current step
        params.currentStep += 1;
      }
      
      // Ensure current step doesn't exceed planned steps
      params.currentStep = Math.min(params.currentStep, params.plannedSteps);
      
      // Format output with step counter
      return `# Structured Reasoning (Step ${params.currentStep} of ${params.plannedSteps})\n\n${structuredReasoning}\n\n(Step ${params.currentStep} of ${params.plannedSteps})`;
    }
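The estimation rule above can be pulled out into a standalone helper for clarity: roughly one step per 300 characters of reasoning, clamped to the range 1–5. This is a sketch for explanation, not part of the published source:

```typescript
// Sketch of the plannedSteps heuristic used in the handler above:
// ceil(length / 300), clamped between 1 and 5.
function estimatePlannedSteps(contentLength: number): number {
  return Math.max(1, Math.min(5, Math.ceil(contentLength / 300)));
}
```

In other words, reasoning under 300 characters yields a single planned step, and anything past 1,500 characters is capped at five.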
  • The server.addTool call that registers the 'think' tool, specifying name, description, parameters schema, and execute handler.
    server.addTool({
      name: "think",
      description: "Use the tool to think about something. It will not obtain new information or change the database, but just append the thought to the log. Use it when complex reasoning or some cache memory is needed. Consider including: problem definition, relevant context, analysis steps, self-reflection on your reasoning, and conclusions. Adapt this structure as needed for your specific thought process.",
      parameters: ExtendedThinkSchema,
      execute: async (params: any) => {
        const { structuredReasoning, selfReflect = false } = params;
        
        // Step counter logic
        // Initialize or estimate plannedSteps if not provided
        if (!params.plannedSteps) {
          // Roughly estimate based on content length
          const contentLength = structuredReasoning.length;
          params.plannedSteps = Math.max(1, Math.min(5, Math.ceil(contentLength / 300)));
        }
        
        // Initialize currentStep if not provided
        if (!params.currentStep) {
          params.currentStep = 1;
        } else {
          // Increment the current step
          params.currentStep += 1;
        }
        
        // Ensure current step doesn't exceed planned steps
        params.currentStep = Math.min(params.currentStep, params.plannedSteps);
        
        // Format output with step counter
        return `# Structured Reasoning (Step ${params.currentStep} of ${params.plannedSteps})\n\n${structuredReasoning}\n\n(Step ${params.currentStep} of ${params.plannedSteps})`;
      }
    });
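The tool's return value is a plain markdown string. The formatting step at the end of the handler can be sketched as a small helper (hypothetical function name, mirroring the template literal in the handler above):

```typescript
// Mirrors the template literal returned by the execute handler:
// a markdown heading, the reasoning body, and a trailing step counter.
function formatThought(current: number, planned: number, reasoning: string): string {
  const header = `# Structured Reasoning (Step ${current} of ${planned})`;
  return `${header}\n\n${reasoning}\n\n(Step ${current} of ${planned})`;
}
```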
  • The base ThinkSchema defining the input parameters for the 'think' tool using Zod validation.
    export const ThinkSchema = z.object({
      structuredReasoning: z.string()
        .min(10, 'Reasoning must be at least 10 characters long')
        .describe('A structured thought process to work through complex problems. Use this as a dedicated space for reasoning step-by-step.'),
      
      // Optional memory parameters - can be added in future to associate thoughts with specific contexts
      associateWithEntity: z.string().optional()
        .describe('Optional entity name to associate this thought with'),
      
      category: z.string().optional()
        .describe('Optional category for the thought (e.g., "problem-solving", "analysis", "planning")'),
      
      tags: z.array(z.string()).optional()
        .describe('Optional tags to help categorize and find this thought later'),
      
      storeInMemory: z.boolean().optional()
        .default(false)
        .describe('Whether to store this thought in the knowledge graph memory'),
      
      context: z.string().optional()
        .describe('Optional context or situation relevant to this thought (e.g., project, meeting, or scenario)'),
    }); 
  • ExtendedThinkSchema extends the base schema (the ThinkSchema above, referenced in this snippet as BaseThinkSchema) with additional fields for step counting, self-reflection, research, and formatting options. This is the schema used as the 'think' tool's parameters.
    export const ExtendedThinkSchema = BaseThinkSchema.extend({
      plannedSteps: z.number().optional().describe('The total number of steps planned for this thinking process'),
      currentStep: z.number().optional().describe('The current step number in the thinking process'),
      selfReflect: z.boolean().optional().default(false).describe('Whether to perform a self-reflection pass after generating the answer'),
      allowResearch: z.boolean().optional().default(false).describe('Whether to allow research via external tools during the reasoning process'),
      reflectPrompt: z.string().optional().describe('Custom prompt for the self-reflection stage'),
      researchQuery: z.string().optional().describe('Optional research query to execute during the reasoning process'),
      formatOutput: z.boolean().optional().default(true).describe('Whether to apply markdown formatting to the output'),
      formatType: z.enum(['auto', 'general', 'problem', 'comparison']).optional().default('auto').describe('The type of formatting to apply')
    });
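When optional flags are omitted, Zod's `.default()` calls fill them in at parse time. The effect is equivalent to this plain-object sketch (a hypothetical helper, assuming only the defaults declared in the schemas above):

```typescript
interface ThinkDefaults {
  storeInMemory: boolean;
  selfReflect: boolean;
  allowResearch: boolean;
  formatOutput: boolean;
  formatType: "auto" | "general" | "problem" | "comparison";
}

// Applies the defaults declared in ThinkSchema / ExtendedThinkSchema.
function applyThinkDefaults(params: Partial<ThinkDefaults>): ThinkDefaults {
  return {
    storeInMemory: false,
    selfReflect: false,
    allowResearch: false,
    formatOutput: true,
    formatType: "auto",
    ...params, // caller-supplied values override the defaults
  };
}
```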
  • Registration call to registerThinkTools(server), which adds the 'think' tool to the MCP server.
    // Register think tools
    console.error('[INFO] [tools] Registering think tools...');
    registerThinkTools(server);
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It clearly states the tool 'will not obtain new information or change the database, but just append the thought to the log,' which covers read-only and non-destructive behavior. It also mentions memory/cache functionality and provides guidance on thought structure. However, it doesn't address potential limitations like rate limits, authentication needs, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized but could be more front-loaded. The first sentence clearly states the purpose, but the second sentence contains important behavioral information that should be more prominent. The guidance on thought structure is helpful but could be more concise. Overall, it's adequate but not optimally structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tool with 14 parameters and no output schema, the description provides good context about the tool's purpose, behavioral characteristics, and usage patterns. It covers the key aspects of what the tool does and when to use it. However, without annotations or output schema, it could benefit from more explicit information about return values or error handling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 14 parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema; it only mentions general structural elements like 'problem definition' and 'analysis steps', which loosely map to the structuredReasoning parameter. A baseline score of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Use the tool to think about something' and specifies it 'will not obtain new information or change the database, but just append the thought to the log.' This distinguishes it from research tools like exa_search and database mutation tools like upsert_entities. However, it doesn't explicitly differentiate from other reasoning tools like plan_tasks or memory_query.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage context: 'Use it when complex reasoning or some cache memory is needed.' It also offers structural guidance with 'Consider including: problem definition, relevant context, analysis steps, self-reflection on your reasoning, and conclusions.' However, it doesn't explicitly state when NOT to use this tool or mention specific alternatives among the sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/flight505/mcp-think-tank'
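The same lookup can be done from TypeScript with `fetch`; this sketch just builds the endpoint URL from the curl example above (the live request is left commented out because it needs network access):

```typescript
// Builds the same endpoint the curl example calls.
const serverSlug = "flight505/mcp-think-tank";
const apiUrl = `https://glama.ai/api/mcp/v1/servers/${serverSlug}`;

// Uncomment to fetch live server metadata:
// const info = await fetch(apiUrl).then((res) => res.json());
```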

If you have feedback or need assistance with the MCP directory API, please join our Discord server.