Glama

think

Analyze complex problems with step-by-step reasoning using local GPU models. Process multi-step challenges, architecture decisions, and debugging strategies with structured analysis.

Instructions

Deep reasoning for complex problems using a local GPU with extended thinking. Offloads complex analysis to a local LLM at zero API cost.

WHEN TO USE:

  • Complex multi-step problems requiring careful reasoning

  • Architecture decisions, trade-off analysis

  • Debugging strategies, refactoring plans

  • Any situation requiring "thinking through" before acting

Args:

  • problem: The problem or question to think through (required)

  • context: Supporting information - code, docs, constraints (optional)

  • depth: Reasoning depth level:
      - "quick" → Fast answer, no extended thinking (14B model)
      - "normal" → Balanced reasoning with thinking (14B coder)
      - "deep" → Thorough multi-step analysis (30B+ MoE model)

ROUTING:

  • Uses largest available GPU for deep thinking

  • Automatically enables thinking mode for normal/deep

  • Prefers local GPU, falls back to remote if needed
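The routing rules above can be sketched as a small selection function. This is an illustrative sketch only: the model names, tier table, and GPU probe are assumptions, not the server's actual implementation.

```python
# Hypothetical sketch of the depth-based routing described above.
# Model names and the GPU-availability flag are illustrative assumptions.

def route(depth: str, local_gpu_available: bool = True) -> dict:
    """Pick a model tier and thinking mode for a given depth level."""
    tiers = {
        "quick":  {"model": "14b-instruct", "thinking": False},
        "normal": {"model": "14b-coder",    "thinking": True},
        "deep":   {"model": "30b-moe",      "thinking": True},
    }
    choice = dict(tiers[depth])
    # Prefer the local GPU; fall back to a remote endpoint if needed.
    choice["backend"] = "local-gpu" if local_gpu_available else "remote"
    return choice
```

Note how "quick" is the only level that skips extended thinking, matching the depth descriptions above.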

Returns: Structured analysis with step-by-step reasoning and conclusions

Examples:

  think(problem="How should we handle authentication?", depth="deep")
  think(problem="Debug this error", context="", depth="normal")
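Under MCP, a call like the examples above travels as a JSON-RPC `tools/call` request. A minimal sketch of the payload follows; the argument values are taken from the first example, and the `id` is arbitrary.

```python
import json

# Build a JSON-RPC 2.0 "tools/call" request for the think tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "think",
        "arguments": {
            "problem": "How should we handle authentication?",
            "depth": "deep",  # "context" is optional and omitted here
        },
    },
}
payload = json.dumps(request)
```

An MCP client library normally builds this envelope for you; it is shown here only to make the wire format concrete.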

Input Schema

| Name    | Required | Description                                     | Default |
|---------|----------|-------------------------------------------------|---------|
| problem | Yes      | The problem or question to think through        |         |
| context | No       | Supporting information: code, docs, constraints |         |
| depth   | No       | Reasoning depth: "quick", "normal", or "deep"   | normal  |
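The input table corresponds to a JSON Schema roughly like the following. The schema dict is reconstructed from the table; the exact type annotations are assumptions, and the `check` helper is a hypothetical, hand-rolled validator for illustration.

```python
# Approximate JSON Schema for the think tool's input (reconstructed).
input_schema = {
    "type": "object",
    "properties": {
        "problem": {"type": "string"},
        "context": {"type": "string"},
        "depth": {
            "type": "string",
            "enum": ["quick", "normal", "deep"],
            "default": "normal",
        },
    },
    "required": ["problem"],
}

def check(args: dict) -> bool:
    """Minimal check: required keys present, depth (if given) is valid."""
    if not all(key in args for key in input_schema["required"]):
        return False
    depth = args.get("depth", "normal")
    return depth in input_schema["properties"]["depth"]["enum"]
```

A real server would use a full JSON Schema validator rather than this two-rule check.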

Output Schema

| Name   | Required | Description                                              | Default |
|--------|----------|----------------------------------------------------------|---------|
| result | Yes      | Structured analysis with step-by-step reasoning          |         |
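Every successful call returns a single required `result` field. A plausible shape is sketched below; the inner step-by-step layout is an assumption, since the schema only specifies that `result` is required.

```python
# Hypothetical example of a think tool response (structure assumed).
response = {
    "result": (
        "Step 1: Identify the constraints.\n"
        "Step 2: Compare candidate approaches.\n"
        "Conclusion: Recommend the simplest option that meets them."
    )
}
```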

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/zbrdc/delia'
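The same lookup can be prepared in Python with the standard library. The request is only constructed here, not sent, so the sketch stays self-contained.

```python
from urllib.request import Request

# Build (but do not send) a GET request for this server's directory entry.
url = "https://glama.ai/api/mcp/v1/servers/zbrdc/delia"
req = Request(url, method="GET")
```

Sending it with `urllib.request.urlopen(req)` would return the server's metadata as JSON.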

If you have feedback or need assistance with the MCP directory API, please join our Discord server.