
llm_analyze

Route complex analysis tasks to advanced reasoning models for deep data analysis, code review, and problem decomposition.

Instructions

Deep analysis task — routes to the strongest reasoning model.

Best for: data analysis, code review, problem decomposition, debugging.

Args:
    prompt: What to analyze.
    complexity: Task complexity — "simple", "moderate", or "complex". Analysis tasks
        default to at least moderate. Pass "complex" for multi-file reviews or
        architecture decisions that warrant Opus/o3.
    system_prompt: Optional system instructions.
    max_tokens: Maximum output tokens.
    context: Optional conversation context to help the model understand the broader task.
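The argument contract above can be sketched as a small helper that assembles and validates a call payload. This is a hypothetical illustration, not part of the server: the field names mirror the Args list, and the validation rules (prompt required, complexity restricted to the three listed values, optional fields omitted rather than sent as null) are assumptions drawn from the description.

```python
# Hypothetical helper illustrating llm_analyze's argument contract.
# Field names follow the Args list; the validation rules are assumptions.

ALLOWED_COMPLEXITY = {"simple", "moderate", "complex"}

def build_llm_analyze_args(prompt, complexity="moderate",
                           system_prompt=None, max_tokens=None, context=None):
    """Assemble the arguments dict for an llm_analyze tool call."""
    if not prompt:
        raise ValueError("prompt is required")
    if complexity not in ALLOWED_COMPLEXITY:
        raise ValueError(f"complexity must be one of {sorted(ALLOWED_COMPLEXITY)}")
    args = {"prompt": prompt, "complexity": complexity}
    # Optional fields are omitted entirely rather than passed as null.
    if system_prompt is not None:
        args["system_prompt"] = system_prompt
    if max_tokens is not None:
        args["max_tokens"] = max_tokens
    if context is not None:
        args["context"] = context
    return args

args = build_llm_analyze_args(
    "Review this diff for concurrency bugs",
    complexity="complex",   # multi-file review warrants Opus/o3 per the docs
    max_tokens=2048,
)
```

Passing an unlisted complexity value (e.g. "extreme") raises a ValueError under these assumptions; the real server's schema may behave differently.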

Input Schema

Name            Required    Description    Default
prompt          Yes
complexity      No
system_prompt   No
max_tokens      No
context         No

Output Schema

Name      Required    Description    Default
result    Yes
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It adds valuable behavioral context about model routing ('strongest reasoning model', specific mention of Opus/o3) and default complexity settings. However, it says nothing about idempotency, cost implications, or caching behavior, disclosures that would be needed for a 4-5 score without annotation support.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Highly structured with clear sections (purpose, Best for, Args). Every sentence provides specific value—no filler. The complexity guidance is dense with actionable decision criteria ('Analysis tasks default to at least moderate') without being verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Since an output schema exists, the description appropriately focuses on inputs and usage context. It successfully differentiates this tool from the 25+ sibling LLM tools via the 'deep analysis' and 'strongest reasoning model' framing. The absence of an explicit comparison to specific siblings (such as llm_research) prevents a 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by documenting all 5 parameters. It provides especially rich semantics for the complexity parameter (enumerating values 'simple', 'moderate', 'complex' and mapping them to specific use cases like 'multi-file reviews'), which goes well beyond basic type information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly defines the tool as performing 'Deep analysis' that 'routes to the strongest reasoning model,' distinguishing it from sibling tools like llm_generate or llm_classify. It specifies the resource being operated on (analysis tasks) and the verb (routes/analyzes).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit 'Best for' criteria (data analysis, code review, debugging) and detailed guidance on the complexity parameter (when to use 'complex' for multi-file reviews). However, it does not explicitly name alternative tools (e.g., 'use llm_generate for simple text generation instead') or state when NOT to use this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ypollak2/llm-router'