think_more

Enhance problem-solving and decision-making by exploring questions or thoughts more deeply with structured suggestions and guidance.

Instructions

Get guidance for thinking more deeply.

This tool provides suggestions and guidance for thinking more deeply about a specific query or thought.

Input Schema

| Name  | Required | Description                          | Default |
| ----- | -------- | ------------------------------------ | ------- |
| query | Yes      | The query to think more deeply about | (none)  |
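As an illustration of the schema, a tool call's arguments object is a single `query` string (the value below is hypothetical):

```python
import json

# Hypothetical arguments for a think_more call; "query" is the only field in the schema
arguments = {"query": "What are the long-term risks of this migration plan?"}
print(json.dumps(arguments))
```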

Implementation Reference

  • Primary MCP tool handler for 'think_more', registered via @mcp.tool() decorator. Wraps the core implementation by calling think_more_impl with the query and None for thought_id, then returns JSON response.
    import json

    from pydantic import Field  # supplies the parameter description

    # `mcp` is the FastMCP server instance created during module setup
    @mcp.tool()
    def think_more(query: str = Field(description="The query to think more deeply about")) -> str:
        """
        Get guidance for thinking more deeply.

        This tool provides suggestions and guidance for thinking more deeply
        about a specific query or thought.
        """
        # Extract the actual value if the Field object was passed through
        if hasattr(query, "default"):
            query = query.default

        result = think_more_impl(query, None)
        return json.dumps(result, indent=2)
  • Pydantic input schema for the 'query' parameter using Field with description.
    def think_more(query: str = Field(description="The query to think more deeply about")) -> str:
  • Core helper function implementing the think_more logic. Retrieves relevant thought from storage, computes suggested depth, generates guidance based on depth_directive, and returns structured response.
    from typing import Any, Dict, Optional

    def think_more(depth_directive: str, thought_id: Optional[int] = None) -> Dict[str, Any]:
        """Get guidance for thinking more deeply about a thought."""
        # _storage is the module-level thought store
        thoughts = _storage.get_thoughts()
    
        if not thoughts:
            return {"success": False, "message": "No previous thoughts exist"}
    
        if thought_id is None:
            # Use the last thought
            source_thought = thoughts[-1]
        else:
            matching = [t for t in thoughts if t["thought_id"] == thought_id]
            if not matching:
                return {"success": False, "message": f"No thought found with ID {thought_id}"}
            source_thought = matching[0]
    
        # Calculate suggested depth
        current_depth = source_thought.get("depth", 1)
        suggested_depth = current_depth + 1
    
        guidance = "Consider exploring:"
        if depth_directive in ["deeper", "harder"]:
            guidance += "\n- Root causes and underlying principles"
            guidance += "\n- Alternative perspectives and approaches"
        elif depth_directive == "again":
            guidance += "\n- What assumptions might be wrong?"
            guidance += "\n- What important aspects were missed?"
        else:  # "more"
            guidance += "\n- Additional implications and consequences"
            guidance += "\n- Related areas to investigate"
    
        return {
            "success": True,
            "source_thought": source_thought,
            "suggested_depth": suggested_depth,
            "guidance": guidance,
            "message": f"Here's how to think {depth_directive} about this",
        }
  • Import of the think_more implementation aliased as think_more_impl, enabling the handler to delegate to it.
    from .think_tool import think_more as think_more_impl
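Putting the snippets above together, a successful response serialized by the handler would look roughly like the following sketch. The thought contents are hypothetical; only the top-level keys and the depth arithmetic come from the implementation shown above:

```python
import json

# Hypothetical stored thought; field names follow the core function above
source_thought = {"thought_id": 3, "content": "Initial analysis of the outage", "depth": 1}

# The "more" branch of the guidance logic, reproduced from the core function
guidance = (
    "Consider exploring:"
    "\n- Additional implications and consequences"
    "\n- Related areas to investigate"
)

result = {
    "success": True,
    "source_thought": source_thought,
    "suggested_depth": source_thought.get("depth", 1) + 1,  # current depth plus one
    "guidance": guidance,
    "message": "Here's how to think more about this",
}

# The handler returns this structure as an indented JSON string
print(json.dumps(result, indent=2))
```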
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool 'provides suggestions and guidance' but doesn't describe what form these take (e.g., text responses, structured advice, examples), whether it's interactive, or any limitations. For a tool with zero annotation coverage, this leaves significant gaps in understanding how it behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two sentences that directly address the tool's function. It's front-loaded with the core purpose and avoids unnecessary elaboration. However, the second sentence could be slightly more specific to improve clarity without sacrificing brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete for understanding this tool's behavior. It doesn't explain what the output looks like (e.g., text suggestions, structured data), any constraints on the input query, or how it differs from similar tools. For a guidance-providing tool with no structured metadata, more context is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the single parameter 'query' clearly documented as 'The query to think more deeply about'. The description adds no additional parameter information beyond what's in the schema, so it meets the baseline of 3 where the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool 'provides suggestions and guidance for thinking more deeply about a specific query or thought', which gives a general purpose but lacks specificity about what kind of suggestions or guidance it provides. It doesn't clearly distinguish from sibling tools like 'think' or 'should_think', making it somewhat vague about its exact function within the toolset.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'think' or 'should_think'. There's no mention of prerequisites, appropriate contexts, or exclusions. The agent must infer usage based on the name and description alone without explicit direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
