analyze_task_complexity

Analyzes task descriptions to recommend efficient tools for execution, helping users select appropriate strategies for their specific needs.

Instructions

Analyzes a task to recommend the most efficient tool (The Router).

Args:
    task_description: The user's prompt or task.

Input Schema

Name              Required  Description  Default
task_description  Yes
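
For orientation, here is a hedged sketch of the JSON Schema a FastMCP server would typically derive from the handler signature analyze_task_complexity(task_description: str). The exact field names depend on the FastMCP version, and note that the min_length=5 constraint is enforced inside the handler via Pydantic rather than exposed in this schema.

    # Sketch (assumption): input schema as FastMCP would derive it from the
    # handler signature; validation constraints live inside the handler.
    input_schema = {
        "type": "object",
        "properties": {
            "task_description": {"type": "string"},
        },
        "required": ["task_description"],
    }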

Implementation Reference

  • The primary handler function for the 'analyze_task_complexity' tool. It is decorated with @mcp.tool() for automatic registration with the FastMCP server. The function validates input using TaskComplexityInput schema, analyzes keywords in the task description, and returns a recommendation for strategy, complexity, tool, and reasoning.
    @mcp.tool()
    def analyze_task_complexity(task_description: str) -> dict:
        """
        Analyzes a task to recommend the most efficient tool (The Router).
    
        Args:
            task_description: The user's prompt or task.
        """
        try:
            model = TaskComplexityInput(task_description=task_description)
        except ValidationError as e:
            return {"error": str(e)}
    
        task = model.task_description.lower()
    
        # Strategy: Constructor Mode (Build/Design)
        if any(
            w in task
            for w in [
                "build",
                "create",
                "design",
                "architect",
                "system",
                "bot",
                "assistant",
            ]
        ):
            return {
                "strategy": "constructor",
                "complexity": "Variable",
                "recommended_tool": "design_context_architecture",
                "reasoning": "User wants to build a system/agent. Use the Architect to design a blueprint.",
            }
    
        # Strategy: YOLO Mode (Direct Solve)
        if any(w in task for w in ["project", "repo", "codebase", "architecture"]):
            return {
                "strategy": "yolo",
                "complexity": "Medium",
                "recommended_tool": "project.explore",
                "reasoning": "Task involves project-level understanding.",
            }
        elif any(w in task for w in ["test", "tdd", "verify"]):
            return {
                "strategy": "yolo",
                "complexity": "High",
                "recommended_tool": "workflow.test_driven",
                "reasoning": "Task involves testing or verification workflows.",
            }
        elif any(w in task for w in ["analyze", "reason", "think", "solve", "complex"]):
            return {
                "strategy": "yolo",
                "complexity": "High",
                "recommended_tool": "reasoning.systematic",
                "reasoning": "Task requires structured reasoning.",
            }
        else:
            return {
                "strategy": "yolo",
                "complexity": "Low",
                "recommended_tool": "Standard Molecule",
                "reasoning": "Task appears simple. Use a basic prompt or few-shot molecule.",
            }
  • Pydantic BaseModel schema used for input validation in the analyze_task_complexity tool handler.
    class TaskComplexityInput(BaseModel):
        task_description: str = Field(
            ..., min_length=5, description="The user's prompt or task."
        )
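
The description does not document the output shape, but the handler always returns a flat dict. A minimal usage sketch based on the code above (the exact invocation path depends on the MCP client and FastMCP version; error message text is abbreviated):

    # "build" matches the Constructor Mode keyword list.
    analyze_task_complexity("build a customer support bot")
    # -> {"strategy": "constructor", "complexity": "Variable",
    #     "recommended_tool": "design_context_architecture",
    #     "reasoning": "User wants to build a system/agent. ..."}

    # No keywords match, so the router falls back to the low-complexity default.
    analyze_task_complexity("rename a variable")
    # -> {"strategy": "yolo", "complexity": "Low",
    #     "recommended_tool": "Standard Molecule",
    #     "reasoning": "Task appears simple. ..."}

    # Inputs shorter than 5 characters fail Pydantic validation and return an error dict.
    analyze_task_complexity("hi")
    # -> {"error": "1 validation error for TaskComplexityInput ..."}
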
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states that the tool analyzes and recommends, but doesn't describe how it works (e.g., the algorithm or criteria used), what 'The Router' refers to, how errors are handled, or what the output looks like. For a tool with no annotations, this leaves significant gaps in understanding its behavior and limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, consisting of two sentences that directly state the purpose and parameter. There's no unnecessary information, and it efficiently communicates the core function. It could be slightly more structured by separating usage notes, but overall it's well-sized for its content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of the analysis and recommendation logic, the absence of annotations and an output schema, and the low schema coverage, the description is incomplete. It doesn't explain what 'The Router' is, how recommendations are made, or what the output looks like. For a tool that returns structured advice, this leaves the agent with insufficient context to use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds minimal semantics beyond the input schema. It defines 'task_description' as 'The user's prompt or task,' which clarifies the parameter's purpose but doesn't provide format examples, constraints (such as the five-character minimum enforced by the handler), or usage tips. With 0% schema description coverage and only one parameter, the description is adequate but lacks depth and only partially compensates for the sparse schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Analyzes a task to recommend the most efficient tool (The Router).' It specifies the verb 'analyzes' and the resource 'task,' and mentions the output 'recommend the most efficient tool,' which distinguishes it from siblings like 'understand_question' or 'verify_logic.' However, it doesn't explicitly differentiate itself from every sibling, such as 'design_context_architecture,' which might also involve analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions 'recommend the most efficient tool' but doesn't specify scenarios, prerequisites, or exclusions. Given siblings like 'understand_question' or 'symbolic_abstract,' which might overlap in analysis, the lack of explicit usage context leaves the agent without clear direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
