Glama
by SDGLBL

think

Log structured thought processes for complex reasoning tasks like bug fixing, test troubleshooting, refactoring planning, feature design, and debugging analysis without making repository changes.

Instructions

Use this tool to think about something. It will not obtain new information or make any changes to the repository; it only logs the thought. Use it when complex reasoning or brainstorming is needed. Keep the thinking content concise and accurate; there is no need to include code details.

Common use cases:

  1. When exploring a repository and discovering the source of a bug, call this tool to brainstorm several unique ways of fixing the bug, and assess which change(s) are likely to be simplest and most effective

  2. After receiving test results, use this tool to brainstorm ways to fix failing tests

  3. When planning a complex refactoring, use this tool to outline different approaches and their tradeoffs

  4. When designing a new feature, use this tool to think through architecture decisions and implementation details

  5. When debugging a complex issue, use this tool to organize your thoughts and hypotheses

  6. When considering changes to the plan, or shifts in thinking that the user has not previously mentioned, use this tool to weigh whether to confirm with the user first.
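As a concrete illustration of the use cases above, a call to this tool from an MCP client carries a single `thought` argument inside a standard `tools/call` request. The thought text below is invented for illustration:

```python
import json

# Hypothetical arguments for an MCP tools/call request targeting the
# 'think' tool; the thought text is invented, not from the source.
request = {
    "method": "tools/call",
    "params": {
        "name": "think",
        "arguments": {
            "thought": (
                "Bug likely originates in the cache invalidation path. "
                "Option A: invalidate on write. Option B: add a TTL. "
                "Option A is simpler and covers the failing test."
            ),
        },
    },
}

payload = json.dumps(request)
```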

<think_example> Feature Implementation Planning

  • New code search feature requirements:

  • Search for code patterns across multiple files

  • Identify function usages and references

  • Analyze import relationships

  • Generate summary of matching patterns

  • Implementation considerations:

  • Need to leverage existing search mechanisms

  • Should use regex for pattern matching

  • Results need consistent format with other search methods

  • Must handle large codebases efficiently

  • Design approach:

  1. Create new CodeSearcher class that follows existing search patterns

  2. Implement core pattern matching algorithm

  3. Add result formatting methods

  4. Integrate with file traversal system

  5. Add caching for performance optimization

  • Testing strategy:

  • Unit tests for search accuracy

  • Integration tests with existing components

  • Performance tests with large codebases </think_example>

Input Schema

Name     Required  Description                             Default
thought  Yes       The detailed thought process to record  (none)

Output Schema

Name     Required  Description  Default
result   Yes

Implementation Reference

  • The core execution logic for the 'think' tool handler. Validates the 'thought' parameter, logs it using the tool context, and returns a confirmation message without making external changes.
    async def call(
        self,
        ctx: MCPContext,
        **params: Unpack[ThinkingToolParams],
    ) -> str:
        """Execute the tool with the given parameters.
    
        Args:
            ctx: MCP context
            **params: Tool parameters
    
        Returns:
            Tool result
        """
        tool_ctx = create_tool_context(ctx)
        tool_ctx.set_tool_info(self.name)
    
        # Extract parameters
        thought = params.get("thought")
    
        # Validate required thought parameter
        if not thought:
            await tool_ctx.error(
                "Parameter 'thought' is required but was None or empty"
            )
            return "Error: Parameter 'thought' is required but was None or empty"
    
        if thought.strip() == "":
            await tool_ctx.error("Parameter 'thought' cannot be empty")
            return "Error: Parameter 'thought' cannot be empty"
    
        # Log the thought but don't take action
        await tool_ctx.info("Thinking process recorded")
    
        # Return confirmation
        return "I've recorded your thinking process. You can continue with your next action based on this analysis."
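The parameter validation in the handler above can be exercised in isolation. A minimal sketch that mirrors its checks (the helper name is ours, not part of the source):

```python
def validate_thought(thought):
    """Mirror the 'think' handler's validation: reject None/empty and
    whitespace-only values, returning the same error strings it uses."""
    if not thought:
        return "Error: Parameter 'thought' is required but was None or empty"
    if thought.strip() == "":
        return "Error: Parameter 'thought' cannot be empty"
    return None  # valid
```

Note that `if not thought` already catches `None` and `""`, so the second check only fires for whitespace-only strings such as `"   "`.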
  • Pydantic-based input schema defining the required 'thought' parameter with validation (non-empty string).
    Thought = Annotated[
        str,
        Field(
            description="The detailed thought process to record",
            min_length=1,
        ),
    ]
    
    
    class ThinkingToolParams(TypedDict):
        """Parameters for the ThinkingTool.
    
        Attributes:
            thought: The detailed thought process to record
        """
    
        thought: Thought
  • Instantiates and registers the ThinkingTool instance using ToolRegistry, returning it for further use.
    def register_thinking_tool(
        mcp_server: FastMCP,
    ) -> list[BaseTool]:
        """Register thinking tools with the MCP server.
    
        Args:
            mcp_server: The FastMCP server instance
        """
        thinking_tool = ThinkingTool()
        ToolRegistry.register_tool(mcp_server, thinking_tool)
        return [thinking_tool]
  • Top-level registration call during all_tools setup, integrating the 'think' tool into the comprehensive tool registry.
    # Initialize and register the thinking tool
    thinking_tools = register_thinking_tool(mcp_server)
    for tool in thinking_tools:
        all_tools[tool.name] = tool
  • The tool's register method defines and decorates the actual 'think' handler function with @mcp_server.tool, delegating to self.call().
    def register(self, mcp_server: FastMCP) -> None:
        """Register this thinking tool with the MCP server.
    
        Creates a wrapper function with explicitly defined parameters that match
        the tool's parameter schema and registers it with the MCP server.
    
        Args:
            mcp_server: The FastMCP server instance
        """
        tool_self = self  # Create a reference to self for use in the closure
    
        @mcp_server.tool(name=self.name, description=self.description)
        async def think(
            ctx: MCPContext,
            thought: Thought,
        ) -> str:
            ctx = get_context()
            return await tool_self.call(ctx, thought=thought)
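The `tool_self = self` assignment exists because the decorated `think` function is a plain closure, not a bound method, so it must capture the tool instance explicitly. A self-contained sketch of the same registration shape (the `Server` class here is a stand-in for FastMCP, not its real API):

```python
class Server:
    """Stand-in for FastMCP: stores decorated handlers by name."""

    def __init__(self):
        self.tools = {}

    def tool(self, name, description):
        def decorator(fn):
            self.tools[name] = fn
            return fn
        return decorator


class ThinkingTool:
    name = "think"
    description = "Log a thought without changing anything."

    def call(self, thought):
        return f"recorded: {thought}"

    def register(self, server):
        tool_self = self  # captured by the closure below

        @server.tool(name=self.name, description=self.description)
        def think(thought):
            return tool_self.call(thought)


server = Server()
ThinkingTool().register(server)
result = server.tools["think"]("outline refactor options")
```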
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It clearly states that the tool 'will not obtain new information or make any changes to the repository', effectively communicating its read-only, non-destructive nature. It also implies logging behavior and encourages concise, accurate thinking without code details, though it doesn't specify output format or any rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with key information but includes an extensive example and six detailed use cases that may be overly verbose. While the example is helpful, it occupies significant space without adding critical guidance that isn't already implied. Some sentences could be condensed to improve efficiency without losing value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 parameter, 100% schema coverage, output schema exists) and lack of annotations, the description is reasonably complete. It covers purpose, usage guidelines, and behavioral traits adequately. The output schema handles return values, so the description needn't explain them. However, the extensive example slightly detracts from focus on core information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the single parameter 'thought' well-documented. The description adds minimal parameter semantics beyond the schema, only implying through the example that thoughts should be structured and detailed. Since the schema does the heavy lifting, the baseline score of 3 is appropriate, though the description doesn't compensate with additional syntax or format details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'to think about something' and 'just log the thought'. It specifies that it doesn't obtain new information or make changes, which distinguishes it from sibling tools like edit, write, or run_command that perform actual modifications. However, it doesn't explicitly contrast with all siblings (e.g., read or grep), keeping it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use the tool: 'when complex reasoning or brainstorming is needed' and lists six specific use cases with examples. It also includes a caution about confirming with users when considering unmentioned changes, offering clear when/when-not guidance that helps differentiate from action-oriented siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/SDGLBL/mcp-claude-code'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.