delegate_task

Delegate tasks to cost-effective AI models or summarize vault documents. Small files are processed directly; summarization of large files is automatically delegated to workers when available.

Instructions

Offload work to a cheaper model or summarize vault files.

When project is provided, reads a vault file. Small files (≤50 lines) are returned directly. Large files are auto-delegated to a worker for summarization — falls back to raw content if workers are unavailable.

Args:

- prompt: The task description or code to process.
- context: Optional system context for the model.
- model: 'auto', 'ollama', 'openrouter-free', 'openrouter' (paid), or a model ID.
- max_tokens: Maximum tokens in the response.
- max_cost_per_request: Maximum cost in USD; 0 = free models only.
- project: Project slug for vault summarization mode.
- section: Shortcut name for summarization. Ignored if path is set.
- path: Relative path to a .md file. Overrides section.
- max_summary_lines: Target summary length for summarization.
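As a sketch, a delegation call that offloads a task to a free model could pass arguments like the following (the prompt text and token limit are illustrative, not taken from the server's documentation):

```json
{
  "prompt": "Summarize the pros and cons of the two caching strategies described below.",
  "model": "openrouter-free",
  "max_tokens": 512,
  "max_cost_per_request": 0
}
```

Setting max_cost_per_request to 0 restricts routing to free models, per the argument description above.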

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| prompt | No | Task description or code to process | |
| context | No | Optional system context for the model | |
| model | No | Model selector or model ID | auto |
| max_tokens | No | Maximum tokens in the response | |
| max_cost_per_request | No | Max cost in USD; 0 = free models only | |
| project | No | Project slug for vault summarization mode | |
| section | No | Shortcut name for summarization | context |
| path | No | Relative path to a .md file | |
| max_summary_lines | No | Target summary length | |
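For the vault summarization mode, a minimal sketch of the arguments might look like this (the project slug and file path are hypothetical placeholders):

```json
{
  "project": "my-notes",
  "path": "research/caching.md",
  "max_summary_lines": 20
}
```

Because path is set here, any section argument would be ignored; a file of 50 lines or fewer would be returned directly rather than summarized.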

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/mlorentedev/hive'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.