get_routing_guidance

Determine which specialized agent should handle a task and provide the exact CLI command to run for delegation guidance.

Instructions

Get routing guidance for a task - returns which agent should handle it and the exact CLI command to run (guidance only, no execution)

Input Schema

Name  | Required | Description                                | Default
query | Yes      | The task query to get routing guidance for | —

Implementation Reference

  • The main tool handler in call_tool that processes the query, calls classification and delegation logic from the engine, and returns the recommended agent.
    if name == "get_routing_guidance":
        # Get routing guidance without executing the task
        query = arguments["query"]
    
        # Classify the task to determine routing
        task_info = self.engine._classify_task(query)
        task_type = task_info[0] if isinstance(task_info, tuple) else task_info
        timeout = task_info[1] if isinstance(task_info, tuple) and len(task_info) > 1 else 300
    
        # Determine which agent should handle it
        agent, _ = self.engine._determine_delegation(query, None)
    
        # KISS: Just return the agent name
        # - "gemini" / "aider" / "copilot" → delegate to that agent
        # - "claude" → orchestrator handles directly (since Claude is the orchestrator)
        response = agent if agent else "claude"
    
        return [TextContent(type="text", text=response)]
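For context, a request/response pair for this handler might look like the following sketch. Field names follow the MCP tools/call shape; the exact envelope depends on the client and transport, and the agent name shown is illustrative:

```python
# Hypothetical tools/call exchange for get_routing_guidance. The handler
# above returns a single TextContent whose text is just an agent name.
request = {
    "method": "tools/call",
    "params": {
        "name": "get_routing_guidance",
        "arguments": {"query": "audit the payment service for vulnerabilities"},
    },
}

# Possible result: "gemini" / "aider" / "copilot" mean delegate to that
# agent; "claude" means the orchestrator handles the task directly.
response = {"content": [{"type": "text", "text": "gemini"}]}
```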
  • Tool registration in list_tools(), including name, description, and input schema.
    Tool(
        name="get_routing_guidance",
        description="Get routing guidance for a task - returns which agent should handle it and the exact CLI command to run (guidance only, no execution)",
        inputSchema={
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The task query to get routing guidance for",
                },
            },
            "required": ["query"],
        },
    ),
  • Helper method _classify_task used by the handler to determine task type and recommended timeout based on keyword matching.
    def _classify_task(self, query: str) -> tuple[str, int]:
        """Classify task type and return recommended timeout."""
        query_lower = query.lower()
    
        keywords = {
            "security_audit": ["security", "vulnerability", "audit", "cve", "exploit", "penetration"],
            "vulnerability_scan": ["scan", "vulnerability", "vuln", "security issue"],
            "code_review": ["review", "code quality", "best practice", "lint"],
            "architecture": ["architecture", "design", "system design", "structure"],
            "refactoring": ["refactor", "restructure", "clean up", "improve code"],
            "quick_fix": ["fix", "bug", "error", "issue", "broken"],
            "documentation": ["document", "docs", "readme", "guide", "explain"],
            "testing": ["test", "unittest", "integration test", "e2e"],
            "performance": ["performance", "optimize", "speed", "latency", "benchmark"],
            "git_workflow": ["commit", "push", "rebase", "merge", "cherry-pick", "squash", "git history"],
            "github_operations": ["pull request", "pr create", "pr review", "issue create", "release"],
        }
    
        # Timeout presets based on task complexity
        TIMEOUT_PRESETS = {
            "quick_fix": 60,           # 1 min - simple bug fixes
            "refactoring": 300,        # 5 min - code refactoring
            "security_audit": 600,     # 10 min - comprehensive security review
            "code_review": 600,        # 10 min - full code review
            "performance": 900,        # 15 min - profiling/optimization
            "testing": 300,            # 5 min - test generation
            "documentation": 180,      # 3 min - documentation writing
            "architecture": 300,       # 5 min - design work
            "vulnerability_scan": 300, # 5 min - automated scanning
            "git_workflow": 180,       # 3 min - git operations
            "github_operations": 240,  # 4 min - GitHub API operations
            "general": 300,            # 5 min - default
        }
    
        for task_type, terms in keywords.items():
            if any(term in query_lower for term in terms):
                timeout = TIMEOUT_PRESETS.get(task_type, 300)
                return task_type, timeout
    
        return "general", 300
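Two properties of this classifier are worth noting: matching is a plain substring test on the lowercased query, and dict insertion order breaks ties (the first matching category wins). A trimmed-down, standalone sketch with abbreviated keyword lists:

```python
# Standalone sketch of the classifier above, showing that dict order
# decides ties and that matching is substring search on the lowercased query.
KEYWORDS = {
    "security_audit": ["security", "vulnerability", "audit"],
    "quick_fix": ["fix", "bug", "error"],
}
TIMEOUTS = {"security_audit": 600, "quick_fix": 60}

def classify(query: str) -> tuple[str, int]:
    q = query.lower()
    for task_type, terms in KEYWORDS.items():
        if any(term in q for term in terms):
            return task_type, TIMEOUTS.get(task_type, 300)
    return "general", 300

print(classify("Fix the login bug"))            # ('quick_fix', 60)
print(classify("fix this security hole"))       # ('security_audit', 600) -- earlier key wins
print(classify("summarize the meeting notes"))  # ('general', 300)
```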
  • Helper method _determine_delegation used by the handler to decide the target agent based on complexity, rules, and capability ranking.
    def _determine_delegation(
        self,
        query: str,
        force_delegate: str | None,
    ) -> tuple[str, DelegationRule | None]:
        """
        Determine which orchestrator should handle the query using capability-based routing.
    
        Returns:
            tuple: (target_orchestrator, matching_rule)
        """
        # Force delegation overrides everything
        if force_delegate:
            logger.info(f"Routing: FORCED → {force_delegate}")
            return force_delegate, None
    
        # Check task complexity first - simple tasks handled directly by Claude
        complexity = self._estimate_task_complexity(query)
        if complexity == "simple":
            logger.info("Routing: SIMPLE task → claude (delegation overhead not worth it)")
            return "claude", None
    
        # Check explicit delegation rules
        rule = self.config.find_delegation_rule(query)
        if rule:
            logger.info(f"Routing: {rule.pattern} → {rule.delegate_to} (rule-based)")
            return rule.delegate_to, rule
    
        # Use capability-based routing for medium/complex tasks
        if self.config.routing_strategy in ["capability", "hybrid"]:
            ranked = self._rank_by_capabilities(query)
            if ranked:
                task_type, _ = self._classify_task(query)  # Unpack tuple
                # If top ranked agent is Claude, check if delegation is still worth it
                if ranked[0] == "claude" and complexity == "medium":
                    logger.info(f"Routing: {task_type} → claude (best match, medium complexity)")
                    return "claude", None
                logger.info(f"Routing: {task_type} [{complexity}] → {ranked[0]} (capability-based)")
                return ranked[0], None
    
        # Fallback to primary orchestrator
        logger.info("Routing: DEFAULT → claude")
        return "claude", None
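The decision cascade above can be sketched as a standalone function with the engine dependencies (complexity estimator, rule lookup, capability ranking) stubbed out as plain callables. This is an illustrative reduction, not the engine's API; it also omits the nuance where a medium-complexity task whose top-ranked agent is Claude stays with the orchestrator:

```python
# Simplified sketch of the _determine_delegation cascade. The default
# stubs and agent names are illustrative assumptions.
def route(query, force=None,
          estimate_complexity=lambda q: "medium",
          find_rule=lambda q: None,
          rank=lambda q: ["gemini", "claude"]):
    if force:                                 # 1. explicit override wins
        return force
    if estimate_complexity(query) == "simple":
        return "claude"                       # 2. simple tasks stay local
    rule = find_rule(query)
    if rule:                                  # 3. explicit delegation rules
        return rule
    ranked = rank(query)                      # 4. capability ranking
    if ranked:
        return ranked[0]
    return "claude"                           # 5. fallback

print(route("refactor the parser"))                        # gemini
print(route("anything", force="aider"))                    # aider
print(route("x", estimate_complexity=lambda q: "simple"))  # claude
```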
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool returns guidance without execution, which is helpful. However, it doesn't address important behavioral aspects like whether this requires authentication, has rate limits, what happens with invalid queries, or the format/structure of the guidance returned. For a tool with zero annotation coverage, this leaves significant gaps in understanding its operational behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise and well-structured in a single sentence that clearly communicates the core functionality ('Get routing guidance for a task'), specifies the outputs ('returns which agent should handle it and the exact CLI command to run'), and adds crucial behavioral context ('guidance only, no execution'). Every word earns its place with zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no annotations and no output schema, the description provides adequate basic information about what the tool does but stops short of full coverage. It doesn't describe the format of the guidance returned, error conditions, or operational constraints; notably, the handler shown above returns only the agent name, without the CLI command the description promises. For a tool that presumably returns structured routing decisions, more context about the output would be helpful, though the description meets minimum viable standards.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with the single parameter 'query' well-documented in the schema as 'The task query to get routing guidance for'. The description adds no additional parameter information beyond what the schema provides. With high schema coverage, the baseline score of 3 is appropriate as the description doesn't enhance parameter understanding but doesn't need to compensate for schema gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get routing guidance for a task' with specific outputs ('which agent should handle it and the exact CLI command to run'). It distinguishes from potential siblings by specifying this is 'guidance only, no execution'. However, it doesn't explicitly differentiate from 'discover_agents' or 'list_agents' which likely list agents rather than provide routing decisions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by stating it provides 'guidance only, no execution', suggesting this tool should be used when you need routing advice rather than actual task execution. However, it doesn't explicitly state when to use this versus the sibling tools 'discover_agents' or 'list_agents', nor does it provide any exclusion criteria or prerequisites for use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
