Glama

process_thought

Facilitate structured thinking by creating, questioning, and refining ideas across stages like research, analysis, and synthesis to enhance problem-solving and decision-making.

Instructions

Engage in a flexible and evolving thinking process by creating, questioning, validating, and refining ideas to progressively deepen understanding and generate effective solutions. When needing to gather data, analyze, or research, prioritize reviewing relevant project code; if such code doesn't exist, search the web rather than speculating. Set nextThoughtNeeded to false when thinking is sufficient, otherwise adjust total_thoughts to extend the process.

Input Schema

Name                    | Required | Description
assumptions_challenged  | No       | Assumptions challenged, an array of strings
axioms_used             | No       | Axioms used, an array of strings
next_thought_needed     | Yes      | Whether next thought step is needed
stage                   | Yes      | Thinking stage; available stages include: problem definition, information gathering, research, analysis, synthesis, conclusion, questioning, planning
tags                    | No       | Thought tags, an array of strings
thought                 | Yes      | Thought content
thought_number          | Yes      | Current thought number
total_thoughts          | Yes      | Estimated total number of thoughts; can be changed anytime if more thinking is needed
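
For illustration, here is a hypothetical argument object that satisfies the schema above; all field values are invented and not taken from the tool's documentation:

```typescript
// Hypothetical process_thought arguments (values invented for illustration).
const exampleArgs = {
  thought: "The cache invalidation bug likely stems from a stale TTL check.",
  thought_number: 2,
  total_thoughts: 5,
  next_thought_needed: true,
  stage: "analysis",
  // The three array fields below are optional and may be omitted entirely.
  tags: ["caching", "bug-hunt"],
  axioms_used: ["TTL must be monotonic"],
  assumptions_challenged: ["the cache is always warm"],
};
```

Note that thought_number may legitimately exceed total_thoughts; as shown in the handler below, the server adjusts the total rather than rejecting the call.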

Implementation Reference

  • The main handler function processThought that takes input parameters, converts them to the required format, calls getProcessThoughtPrompt to generate formatted output, and returns it as MCP content.
    export async function processThought(
      params: z.infer<typeof processThoughtSchema>
    ) {
      try {
        // Convert parameters to standardized ThoughtData format
        const thoughtData: ProcessThoughtPromptParams = {
          thought: params.thought,
          thoughtNumber: params.thought_number,
          totalThoughts: params.total_thoughts,
          nextThoughtNeeded: params.next_thought_needed,
          stage: params.stage,
          tags: params.tags || [],
          axioms_used: params.axioms_used || [],
          assumptions_challenged: params.assumptions_challenged || [],
        };
    
        // Ensure thought number doesn't exceed total thoughts
        if (thoughtData.thoughtNumber > thoughtData.totalThoughts) {
          // Automatically adjust total thought count
          thoughtData.totalThoughts = thoughtData.thoughtNumber;
        }
    
        // Format thought output
        const formattedThought = getProcessThoughtPrompt(thoughtData);
    
        // Return successful response
        return {
          content: [
            {
              type: "text" as const,
              text: formattedThought,
            },
          ],
        };
      } catch (error) {
        // Catch and handle all unexpected errors
        const errorMessage = error instanceof Error ? error.message : "Unknown error";
        return {
          content: [
            {
              type: "text" as const,
              text: `Error occurred while processing thought: ${errorMessage}`,
            },
          ],
        };
      }
    }
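One behavioral detail worth highlighting: the handler never rejects an out-of-range thought number; it silently raises totalThoughts instead. A minimal standalone sketch of that adjustment (function name is ours, not from the source):

```typescript
// Sketch of the handler's auto-adjustment: if the current thought number
// exceeds the estimated total, the total is raised to match rather than
// returning an error.
function adjustTotals(thoughtNumber: number, totalThoughts: number) {
  return {
    thoughtNumber,
    totalThoughts: thoughtNumber > totalThoughts ? thoughtNumber : totalThoughts,
  };
}
```

So a call with thought_number 6 and total_thoughts 5 proceeds with an effective total of 6, which matches the schema's note that total_thoughts "can be changed anytime."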
  • Zod schema defining the input parameters for the process_thought tool, including validation and descriptions.
    export const processThoughtSchema = z.object({
      thought: z
        .string()
        .min(1, {
          message: "Thought content cannot be empty, please provide valid thinking content",
        })
        .describe("Thought content"),
      thought_number: z
        .number()
        .int()
        .positive({
          message: "Thought number must be a positive integer",
        })
        .describe("Current thought number"),
      total_thoughts: z
        .number()
        .int()
        .positive({
          message: "Total thoughts must be a positive integer",
        })
        .describe("Estimated total number of thoughts, can be changed anytime if more thinking is needed"),
      next_thought_needed: z.boolean().describe("Whether next thought step is needed"),
      stage: z
        .string()
        .min(1, {
          message: "Thought stage cannot be empty, please provide a valid thinking stage",
        })
        .describe(
          "Thinking stage, available stages include: problem definition, information gathering, research, analysis, synthesis, conclusion, questioning, planning"
        ),
      tags: z.array(z.string()).optional().describe("Thought tags, an array of strings"),
      axioms_used: z
        .array(z.string())
        .optional()
        .describe("Axioms used, an array of strings"),
      assumptions_challenged: z
        .array(z.string())
        .optional()
        .describe("Assumptions challenged, an array of strings"),
    });
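The constraints the Zod schema enforces can be restated as a dependency-free sketch; this is not the actual schema, just the rules it encodes, written without the zod library:

```typescript
// Plain-TypeScript mirror of the Zod schema's validation rules (illustrative).
interface ProcessThoughtInput {
  thought: string;
  thought_number: number;
  total_thoughts: number;
  next_thought_needed: boolean;
  stage: string;
  tags?: string[];
  axioms_used?: string[];
  assumptions_challenged?: string[];
}

// Returns a list of violation messages; an empty list means the input is valid.
function validate(input: ProcessThoughtInput): string[] {
  const errors: string[] = [];
  if (input.thought.length < 1) errors.push("thought must not be empty");
  if (!Number.isInteger(input.thought_number) || input.thought_number <= 0)
    errors.push("thought_number must be a positive integer");
  if (!Number.isInteger(input.total_thoughts) || input.total_thoughts <= 0)
    errors.push("total_thoughts must be a positive integer");
  if (input.stage.length < 1) errors.push("stage must not be empty");
  return errors;
}
```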
  • src/index.ts:319-324 (registration)
    Registration of the process_thought tool in the ListTools response, specifying name, description from template, and input schema.
      name: "process_thought",
      description: loadPromptFromTemplate(
        "toolsDescription/processThought.md"
      ),
      inputSchema: zodToJsonSchema(processThoughtSchema),
    },
  • src/index.ts:552-562 (registration)
    Dispatch handler in the CallToolRequest switch case that validates arguments with processThoughtSchema and calls the processThought function.
    case "process_thought":
      parsedArgs = await processThoughtSchema.safeParseAsync(
        request.params.arguments
      );
      if (!parsedArgs.success) {
        throw new Error(
          `Invalid arguments for tool ${request.params.name}: ${parsedArgs.error.message}`
        );
      }
      result = await processThought(parsedArgs.data);
      return result;
  • Helper function getProcessThoughtPrompt that generates the formatted prompt string used by the handler, based on thought parameters and templates.
    export function getProcessThoughtPrompt(
      param: ProcessThoughtPromptParams
    ): string {
      let nextThoughtNeeded = "";
      if (param.nextThoughtNeeded) {
        nextThoughtNeeded = loadPromptFromTemplate("processThought/moreThought.md");
      } else {
        nextThoughtNeeded = loadPromptFromTemplate(
          "processThought/completedThought.md"
        );
      }
    
      const indexTemplate = loadPromptFromTemplate("processThought/index.md");
    
      const prompt = generatePrompt(indexTemplate, {
        thought: param.thought,
        thoughtNumber: param.thoughtNumber,
        totalThoughts: param.totalThoughts,
        stage: param.stage,
        tags: param.tags.join(", ") || "no tags",
        axioms_used: param.axioms_used.join(", ") || "no axioms used",
        assumptions_challenged:
          param.assumptions_challenged.join(", ") || "no assumptions challenged",
        nextThoughtNeeded,
      });
    
      return loadPrompt(prompt, "PROCESS_THOUGHT");
    }
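The template-filling step can be sketched with a hypothetical stand-in for generatePrompt; the placeholder syntax here is an assumption, not the project's actual template format. The second snippet shows the "join or fallback" pattern the helper uses for the optional array fields:

```typescript
// Hypothetical stand-in for generatePrompt: fills {placeholder} slots in a
// template string, leaving unknown placeholders untouched.
function fillTemplate(
  template: string,
  values: Record<string, string | number>
): string {
  return template.replace(/\{(\w+)\}/g, (match, key) =>
    key in values ? String(values[key]) : match
  );
}

const template = "Thought {thoughtNumber}/{totalThoughts} [{stage}]: {thought}";
const rendered = fillTemplate(template, {
  thoughtNumber: 2,
  totalThoughts: 5,
  stage: "analysis",
  thought: "Check the TTL logic.",
});

// The "join or fallback" pattern used for tags / axioms / assumptions:
// an empty array joins to "", which is falsy, so the fallback string wins.
const tagLine = ([] as string[]).join(", ") || "no tags";
```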
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds some context by describing the thinking process as 'flexible and evolving' and advising on data gathering priorities, but it doesn't cover key behavioral traits like whether this tool is read-only or destructive, its permission requirements, rate limits, or what the output looks like. The description doesn't contradict annotations (none exist), but it's incomplete for a tool with 8 parameters and no output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized, with two sentences front-loaded with the core purpose. Every sentence earns its place by adding guidance on data gathering and parameter usage. It could, however, separate purpose from instructions more clearly, and there is minor redundancy in how the thinking process is described.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of 8 parameters, no annotations, and no output schema, the description is only somewhat complete. It covers the purpose and some usage guidelines but lacks details on behavioral traits, full parameter context, and expected outputs. A tool with this much complexity and no structured support should compensate with more documentation, so the description is adequate but not comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 8 parameters thoroughly. The description adds minimal value beyond the schema: it mentions 'Set nextThoughtNeeded to false when thinking is sufficient, otherwise adjust total_thoughts to extend the process,' which provides usage hints for two parameters but doesn't explain the semantics of others like 'assumptions_challenged' or 'stage.' Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool enables 'a flexible and evolving thinking process' with verbs like 'creating, questioning, validating, and refining ideas,' which conveys a general purpose. However, it is vague about what specific resource or domain this applies to, and it doesn't distinguish the tool from siblings like 'analyze_task' or 'reflect_task,' whose cognitive functions may overlap. The description avoids tautology by not merely restating the name, but it lacks specificity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use this tool: it specifies to 'prioritize reviewing relevant project code' for data gathering and 'search the web rather than speculating' if code doesn't exist, which offers practical guidance. However, it doesn't explicitly state when not to use this tool or name alternatives among the sibling tools (e.g., vs. 'analyze_task'), so it falls short of a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
