
detect_thinking_directive

Analyze text to identify directives prompting deeper thinking, such as 'think harder' or 'think again', within MCP Agile Flow workflows.

Instructions

Detect thinking directives.

This tool analyzes text to detect directives suggesting deeper thinking, such as "think harder", "think deeper", "think again", etc.

Input Schema

| Name | Required | Description                                  | Default |
| ---- | -------- | -------------------------------------------- | ------- |
| text | Yes      | The text to analyze for thinking directives  | —       |
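For orientation, here is a minimal client-side sketch of calling this tool over stdio with the official mcp Python SDK. The launch command (uvx mcp-agile-flow) is an assumption for illustration, not taken from this page.

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client


    async def main() -> None:
        # Hypothetical launch command; substitute however you run this server.
        params = StdioServerParameters(command="uvx", args=["mcp-agile-flow"])
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                result = await session.call_tool(
                    "detect_thinking_directive",
                    {"text": "Please think harder about this plan."},
                )
                # The tool returns its result as a JSON string.
                print(result.content)


    asyncio.run(main())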

Implementation Reference

  • MCP handler function decorated with @mcp.tool(). It extracts the input text, calls the core implementation (detect_thinking_directive_impl), and returns the result as a JSON string. The input schema is declared via a Pydantic Field.
    @mcp.tool()
    def detect_thinking_directive(
        text: str = Field(description="The text to analyze for thinking directives"),
    ) -> str:
        """
        Detect thinking directives.
    
        This tool analyzes text to detect directives suggesting deeper thinking,
        such as "think harder", "think deeper", "think again", etc.
        """
        # Extract actual value if it's a Field object
        if hasattr(text, "default"):
            text = text.default
    
        result = detect_thinking_directive_impl(text)
        return json.dumps(result, indent=2)
  • Core helper function implementing the detection logic. It checks the input text for specific phrases that indicate a thinking directive (deeper, harder, again, more) and returns a detection result with a directive type and confidence level; a usage sketch follows this list.
    def detect_thinking_directive(text: str) -> Dict[str, Any]:
        """Detect if text contains a directive to think more deeply."""
        directives = {
            "deeper": ["think deeper", "think more deeply", "dive deeper"],
            "harder": ["think harder", "think more carefully"],
            "again": [
                "think again",
                "rethink",
                "consider again",
                "think about this again",
                "think about it again",
            ],
            "more": ["think more", "more thoughts", "additional thoughts"],
        }
    
        text = text.lower()
        for directive_type, phrases in directives.items():
            if any(phrase in text for phrase in phrases):
                return {
                    "detected": True,
                    "directive_type": directive_type,
                    "confidence": "medium",  # All directives have medium confidence
                    "message": f"Detected '{directive_type}' thinking directive",
                }
    
        return {
            "detected": False,
            "directive_type": None,
            "confidence": "low",
            "message": "No thinking directive detected",
        }
  • Import of the core implementation function aliased as detect_thinking_directive_impl, used by the handler.
    from .think_tool import detect_thinking_directive as detect_thinking_directive_impl
  • Pydantic input schema definition for the 'text' parameter using Field with description.
    text: str = Field(description="The text to analyze for thinking directives"),
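As a quick usage sketch of the core function listed above, the expected outputs below are derived directly from that implementation (both the detected and not-detected result shapes):

    import json

    # Core function from think_tool, as reproduced above.
    result = detect_thinking_directive("Could you think harder about this?")
    print(json.dumps(result, indent=2))
    # {
    #   "detected": true,
    #   "directive_type": "harder",
    #   "confidence": "medium",
    #   "message": "Detected 'harder' thinking directive"
    # }

    # Text without any directive phrase falls through to the default result.
    print(detect_thinking_directive("Looks good to me.")["detected"])
    # False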
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It states the tool 'analyzes text' and detects specific phrases, but doesn't describe what the analysis entails, the format or confidence of results, whether it's read-only or has side effects, or any performance characteristics. For a detection tool with zero annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with only two sentences: one stating the purpose and one providing concrete examples. Every word earns its place with zero redundancy. It's front-loaded with the core function and efficiently supplements with illustrative phrases.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (text analysis for pattern detection), no annotations, no output schema, and 100% schema coverage, the description is minimally adequate. It explains what the tool does but doesn't cover behavioral aspects, output format, or edge cases. Completeness is borderline: viable, but with clear gaps for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, clearly documenting the single 'text' parameter. The description adds minimal value beyond the schema, only reinforcing that the text is 'to analyze for thinking directives'. No additional semantics about text format, length limits, or preprocessing (the implementation lowercases input before matching) are provided. A baseline score of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'analyzes text to detect directives suggesting deeper thinking', with specific examples like 'think harder', 'think deeper', and 'think again'. Its focus on detection rather than execution distinguishes it from sibling tools like 'think' or 'think_more'. However, it doesn't explicitly contrast with 'should_think', which might also involve directive evaluation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when analyzing text for thinking directives, but provides no explicit guidance on when to use this versus alternatives like 'should_think' or 'process_natural_language'. It doesn't mention prerequisites, limitations, or scenarios where this tool is preferred over siblings. The context is clear but lacks comparative guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/smian0/mcp-agile-flow'
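
The same lookup in Python, using the requests library (a sketch equivalent to the curl call above, assuming the endpoint returns JSON):

    import requests

    resp = requests.get("https://glama.ai/api/mcp/v1/servers/smian0/mcp-agile-flow")
    resp.raise_for_status()
    print(resp.json())  # server metadata as JSON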

If you have feedback or need assistance with the MCP directory API, please join our Discord server.