
run_task

Execute advanced AI tasks using state-of-the-art models for reasoning, analysis, and more. Start single or batch tasks with real-time progress monitoring and retrieve results using task IDs. Ideal for complex workflows requiring AI-driven insights.

Instructions

Start a complex AI task, performing advanced reasoning and analysis with state-of-the-art LLMs. To start multiple tasks at once, pass an array for model. Returns a task ID immediately (or a batch ID for multiple models) that can be used to check status and retrieve results.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| context | No | Optional: Background context for the task | |
| files | No | Optional: Array of file paths to include in the task context | |
| model | No | Optional: Single model OR array of models for batch execution. | 'standard' |
| output | No | Optional: The desired output/success state | |
| read_only | No | Optional: When true, excludes tools that can modify files, execute commands, or make changes. Only allows read/search/analysis tools. | false |
| task | Yes | The task prompt - what to perform | |
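
As a sketch (argument values here are illustrative, and model class names other than 'standard' depend on the server's MODEL_CLASSES configuration), a read-only batch invocation could pass:

```json
{
  "task": "Review the error handling in src/serve.ts",
  "context": "TypeScript MCP server for long-running AI tasks",
  "model": ["standard", "reasoning"],
  "read_only": true
}
```

A single-model call with write access would instead pass a string for model (or omit it to default to 'standard') and leave read_only unset; note the server rejects multi-model batches unless read_only is true.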

Implementation Reference

  • Tool schema definition for 'run_task', specifying input parameters including model (single or array for batch), task prompt, optional context, output goal, files, and read_only mode.
    const RUN_TASK_TOOL: Tool = {
      name: 'run_task',
      description:
        'Start a complex AI task. Perform advanced reasoning and analysis with state of the art LLMs. Start multiple tasks at once by using an array for model. Returns a task ID immediately (or batch ID for multiple models) to check status and retrieve results.',
      annotations: {
        title: 'Run AI Task',
        readOnlyHint: false, // Creates and executes a new task
        destructiveHint: false, // Doesn't destroy existing data
        idempotentHint: false, // Each call creates a new task
        openWorldHint: true, // Task may interact with external services/APIs
      },
      inputSchema: {
        type: 'object',
        properties: {
          model: {
            oneOf: [
              {
                type: 'string',
                description: `Model class OR specific model name. Classes: ${MODEL_CLASSES.join(', ')}. Popular models: ${POPULAR_MODELS.join(', ')}.`,
                enum: [...MODEL_CLASSES, ...POPULAR_MODELS],
              },
              {
                type: 'array',
                description: `Array of model classes or specific model names for batch execution`,
                items: {
                  type: 'string',
                  enum: [...MODEL_CLASSES, ...POPULAR_MODELS],
                },
              },
            ],
            description: `Optional: Single model OR array of models for batch execution. Defaults to 'standard' if not specified.`,
          },
          task: {
            type: 'string',
            description: 'The task prompt - what to perform (required)',
          },
          context: {
            type: 'string',
            description: 'Optional: Background context for the task',
          },
          output: {
            type: 'string',
            description: 'Optional: The desired output/success state',
          },
          files: {
            type: 'array',
            description: 'Optional: Array of file paths to include in the task context',
            items: {
              type: 'string',
            },
          },
          read_only: {
            type: 'boolean',
            description: 'Optional: When true, excludes tools that can modify files, execute commands, or make changes. Only allows read/search/analysis tools.',
            default: false,
          },
        },
        required: ['task'],
      },
    };
  • src/serve.ts:558-579 (registration)
    MCP server registration of 'run_task' tool in the listTools response, including it in the tools array alongside related task management tools.
    server.setRequestHandler(ListToolsRequestSchema, async () => {
      if (process.env.MCP_MODE !== 'true') {
        logger.debug('Received ListTools request');
      }
      const response = {
        tools: [
          RUN_TASK_TOOL,
          CHECK_TASK_STATUS_TOOL,
          GET_TASK_RESULT_TOOL,
          CANCEL_TASK_TOOL,
          WAIT_FOR_TASK_TOOL,
          LIST_TASKS_TOOL,
        ],
      };
      if (process.env.MCP_MODE !== 'true') {
        logger.debug(
          'Returning tools:',
          response.tools.map(t => t.name)
        );
      }
      return response;
    });
  • Core handler for 'run_task': validates input, handles single or batch models, builds the full prompt from context and files, selects tools based on read_only, creates an Agent with todo tools, and uses TaskManager to create and execute the task asynchronously, returning a task_id (or batch_id) immediately for non-blocking operation.
    // run_task implementation (now async)
    // Lazy load Agent class and createToolFunction
    if (!AgentClass) {
      const ensembleModule = await import('@just-every/ensemble');
      AgentClass = ensembleModule.Agent;
    }

    // Validate task parameter
    if (!args.task || typeof args.task !== 'string') {
      throw new Error('Task parameter is required and must be a string');
    }

    // Parse model if it's a JSON string
    let modelParam = args.model;
    if (typeof args.model === 'string' && args.model.startsWith('[')) {
      try {
        modelParam = JSON.parse(args.model);
      } catch {
        // If parsing fails, keep it as is
        modelParam = args.model;
      }
    }

    // Check if batch execution (array of models)
    const isBatch = Array.isArray(modelParam);
    const models = isBatch ? modelParam : [modelParam || 'standard'];

    // SAFETY CHECK: Prevent multiple models with write access to avoid file conflicts
    if (!args.read_only && isBatch && models.length > 1) {
      throw new Error(
        'Multiple models with write access (read_only: false) is not allowed to prevent file conflicts. ' +
          'Please either:\n' +
          '1. Set read_only: true when using multiple models, or\n' +
          '2. Use a single model when write access is needed.'
      );
    }

    // Generate batch ID for grouped tasks
    const batchId = isBatch
      ? `batch-${Date.now()}-${Math.random().toString(36).substr(2, 9)}`
      : undefined;

    if (process.env.MCP_MODE !== 'true') {
      logger.info(`Processing ${isBatch ? 'batch' : 'single'} task request`);
      logger.debug('Task parameters:', {
        originalModel: args.model,
        parsedModel: modelParam,
        isBatch: isBatch,
        models: models,
        batchId: batchId,
        context: args.context,
        task: args.task,
        output: args.output,
      });
    }

    // Build the task prompt
    let fullPrompt = '';
    if (args.context) {
      fullPrompt += `Context:\n${args.context}\n\n`;
    }

    // Include file contents if provided
    if (args.files && Array.isArray(args.files) && args.files.length > 0) {
      fullPrompt += 'Files provided:\n';
      for (const filePath of args.files) {
        try {
          const { readFile } = await import('fs/promises');
          const content = await readFile(filePath, 'utf8');
          fullPrompt += `\n=== ${filePath} ===\n${content}\n=== End of ${filePath} ===\n\n`;
        } catch (error: any) {
          fullPrompt += `\n=== ${filePath} ===\nError reading file: ${error.message}\n=== End of ${filePath} ===\n\n`;
        }
      }
    }

    fullPrompt += `Task:\n${args.task}`;
    if (args.output) {
      fullPrompt += `\n\nDesired Output:\n${args.output}`;
    }

    // Create task with tools - filter based on read_only flag
    const searchTools = await getSearchTools();
    const crawlTools = await getCrawlTools();
    const customTools = args.read_only ? getReadOnlyTools() : getAllTools();

    // Combine search tools, crawl tools, and custom tools
    // Note: Search and crawl tools are generally read-only (search, fetch, etc.)
    const allTools = [...searchTools, ...crawlTools, ...customTools];

    // Get current working directory and file list
    const cwd = process.cwd();
    const { readdirSync, statSync } = await import('fs');
    const { join } = await import('path');
    const files = readdirSync(cwd);
    const fileList = files
      .map(f => {
        const isDirectory = statSync(join(cwd, f)).isDirectory();
        return `\t${f}${isDirectory ? '/' : ''}`;
      })
      .join('\n');

    // Create and execute tasks for each model
    const taskIds: string[] = [];

    for (const model of models) {
      // Determine model configuration for this specific model
      let modelClass: string | undefined;
      let modelName: string | undefined;

      if (model) {
        // Check if it's a model class
        if (MODEL_CLASSES.includes(model.toLowerCase())) {
          modelClass = model.toLowerCase();
        } else {
          // It's a specific model name
          modelName = model;
        }
      } else {
        // Default to standard class if no model specified
        modelClass = 'standard';
      }

      // Generate task ID from model and task words
      const modelPart = (modelName || modelClass || 'standard')
        .toLowerCase()
        .replace(/[^a-z0-9]/g, '');
      const taskWords = args.task
        .toLowerCase()
        .replace(/[^a-z0-9\s]/g, '')
        .split(/\s+/)
        .filter((word: string) => word.length > 2)
        .slice(0, 3)
        .join('-');
      const baseTaskId = `${modelPart}-${taskWords}`;

      // Ensure uniqueness by adding a suffix if needed
      let taskId = baseTaskId;
      let suffix = 1;
      while (taskManager.getTask(taskId)) {
        taskId = `${baseTaskId}-${suffix}`;
        suffix++;
      }

      // Create task with custom ID and batch ID
      taskManager.createTask({
        id: taskId,
        model: modelName,
        modelClass: modelClass,
        batchId: batchId,
        context: args.context,
        task: args.task,
        output: args.output,
        files: args.files,
        readOnly: args.read_only,
      });

      // Create a TodoManager instance for this specific agent
      const todoManager = new TodoManager();
      // Get todo tools bound to this instance
      const todoTools = todoManager.getTodoTools();
      // Combine all tools including todo tools
      const agentTools = [...allTools, ...todoTools];

      // Create agent with tools and todo support
      const agent = new AgentClass({
        name: 'TaskRunner',
        modelClass: modelClass as any,
        model: modelName,
        instructions: `You are a helpful AI assistant that can complete complex tasks.

You are working in the ${cwd} directory. The current directory contains:
${fileList}

You have a range of tools available to you to explore your environment and solve problems.

${args.read_only ? 'You are in READ ONLY mode. You can read files, search the web, and analyze data, but you cannot modify any files or execute commands that change the system state.' : 'You can read files, search the web, and analyze data, and you can also modify files or execute commands that change the system state.'}

You have access to todo management tools to help organize and track your work:
- todo_add: Add new todos (e.g., todo_add(["Task 1"]) or todo_add(["Task 1", "Task 2"]))
- todo_update: Update status or content (e.g., todo_update("todo-1", {status: "in_progress"}))
- todo_complete: Mark todos as completed (e.g., todo_complete(["todo-1"]) or todo_complete(["todo-1", "todo-2"]))
- todo_delete: Remove specific todos (e.g., todo_delete(["todo-1"]) or todo_delete(["todo-1", "todo-2"]))
- todo_clear: Clear all todos when done

Your current todo list will be shown to you at the start of each message. Use these tools to break down complex tasks and track your progress.`,
        tools: agentTools,
        onRequest: todoManager.getOnRequestHandler(),
      });

      // Start task execution in background (non-blocking)
      taskManager.executeTask(taskId, agent, fullPrompt).catch(error => {
        logger.error(`Background task ${taskId} failed:`, error);
        // Ensure task is marked as failed
        const task = taskManager.getTask(taskId);
        if (task && task.status === 'running') {
          logger.error(`Marking stuck task ${taskId} as failed after error`);
          task.status = 'failed';
          task.output = `ERROR: Task execution failed: ${error.message}`;
          task.completedAt = new Date();
        }
      });

      taskIds.push(taskId);
      if (process.env.MCP_MODE !== 'true') {
        logger.info(`Task ${taskId} queued for execution`);
      }
    }

    // Return appropriate response based on single vs batch
    if (isBatch) {
      return {
        content: [
          {
            type: 'text',
            text: JSON.stringify(
              {
                batch_id: batchId,
                task_ids: taskIds,
                status: 'pending',
                message: `${taskIds.length} tasks queued for execution. Use list_tasks with batch_id to monitor progress.`,
              },
              null,
              2
            ),
          },
        ],
      };
    } else {
      return {
        content: [
          {
            type: 'text',
            text: JSON.stringify(
              {
                task_id: taskIds[0],
                status: 'pending',
                message:
                  'Task queued for execution. Use check_task_status to monitor progress.',
              },
              null,
              2
            ),
          },
        ],
      };
    }
  } catch (error: any) {
    logger.error('Error executing task:', error.message);
    logger.debug('Error stack:', error.stack);
    throw new Error(
      `Failed to execute task: ${error instanceof Error ? error.message : 'Unknown error'}`
    );
  }
});
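
The task ID slug generation in the handler above can be sketched in isolation (`makeTaskId` is a hypothetical helper name; the real code inlines this logic and additionally appends a numeric suffix to de-duplicate against existing tasks):

```typescript
// Sketch of the task ID slug logic; `makeTaskId` is a hypothetical helper name.
function makeTaskId(model: string, task: string): string {
  // Strip the model name down to lowercase alphanumerics
  const modelPart = model.toLowerCase().replace(/[^a-z0-9]/g, '');
  // Keep the first three words of the task longer than two characters
  const taskWords = task
    .toLowerCase()
    .replace(/[^a-z0-9\s]/g, '')
    .split(/\s+/)
    .filter(word => word.length > 2)
    .slice(0, 3)
    .join('-');
  return `${modelPart}-${taskWords}`;
}

// makeTaskId('gpt-4o', 'Fix a bug in the parser!') → 'gpt4o-bug-the-parser'
```

This makes task IDs human-readable at a glance (model plus a hint of the task) rather than opaque UUIDs.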
  • TaskManager.createTask method called by run_task handler to initialize a new task record with status 'pending' and store it for background execution.
    public createTask(params: {
      id?: string;
      model?: string;
      modelClass?: string;
      batchId?: string;
      context?: string;
      task: string;
      output?: string;
      files?: string[];
      readOnly?: boolean;
    }): string {
      const taskId = params.id || uuid();
      const taskInfo: TaskInfo = {
        id: taskId,
        status: 'pending',
        model: params.model,
        modelClass: params.modelClass,
        batchId: params.batchId,
        context: params.context,
        task: params.task,
        output: params.output,
        files: params.files,
        readOnly: params.readOnly,
        createdAt: new Date(),
        messages: [],
        requestCount: 0,
        abortController: new AbortController(),
        lastActivityTime: new Date(),
        errorCount: 0,
      };
      this.tasks.set(taskId, taskInfo);
      logger.info(`Created task ${taskId}`);
      return taskId;
    }
  • TaskManager.executeTask method invoked asynchronously by run_task to run the Agent with prompt, process task events (complete/error), update status/output, handle timeouts/health checks, and manage resources.
    public async executeTask(
      taskId: string,
      agent: Agent,
      prompt: string
    ): Promise<void> {
      const task = this.tasks.get(taskId);
      if (!task) {
        throw new Error(`Task ${taskId} not found`);
      }

      // Update status to running
      task.status = 'running';
      task.startedAt = new Date();
      task.lastActivityTime = new Date();
      task.taskAgent = agent; // Store agent for taskStatus() calls
      logger.info(`Starting execution of task ${taskId}`);

      // Set up task timeout
      this.setupTaskTimeout(taskId);
      // Set up task health monitoring
      this.setupTaskHealthCheck(taskId);

      // Register with watchdog
      const watchdog = getWatchdog();
      watchdog.watchTask(taskId, task);

      try {
        // Run the task and store generator for potential taskStatus() calls
        const stream = runTask(agent, prompt);
        task.taskGenerator = stream;

        // Process events
        for await (const event of stream) {
          // Check if task was cancelled
          if (task.abortController?.signal.aborted) {
            task.status = 'cancelled';
            task.completedAt = new Date();
            task.output = 'Task was cancelled';
            logger.info(`Task ${taskId} was cancelled`);
            break;
          }

          logger.debug(`Task ${taskId} event: ${event.type}`);

          // Update last activity time
          task.lastActivityTime = new Date();

          // Update watchdog
          const watchdog = getWatchdog();
          watchdog.updateActivity(taskId);

          // Store all events in messages for full history
          task.messages.push({
            type: event.type,
            content: event,
            timestamp: new Date().toISOString(),
          });

          // Handle errors
          if (event.type === 'error' || event.type === 'task_fatal_error') {
            const errorMessage =
              (event as any).error?.message ||
              (event as any).result ||
              'Unknown error';
            task.errorCount = (task.errorCount || 0) + 1;
            task.lastError = errorMessage;
            watchdog.recordError(taskId, errorMessage);
          } else {
            // Reset error count on successful activity
            task.errorCount = 0;
            task.lastError = undefined;
          }

          if (event.type === 'task_complete') {
            const completeEvent = event as any;
            task.output = completeEvent.result || 'Task completed without output';
            task.finalState = completeEvent.finalState || null;
            task.status = 'completed';
            task.completedAt = new Date();
            logger.info(`Task ${taskId} completed successfully`);
            logger.debug(`Output: ${task.output}`);
            break;
          } else if (event.type === 'task_fatal_error') {
            const errorEvent = event as any;
            const errorMessage =
              errorEvent.error?.message || errorEvent.result || 'Unknown error';
            task.output = `ERROR: ${errorMessage}`;
            task.status = 'failed';
            task.completedAt = new Date();
            this.cleanupTaskResources(taskId);
            logger.error(`Task ${taskId} failed: ${errorMessage}`);
            break;
          } else if (event.type === 'response_output') {
            // response_output events indicate completed LLM requests
            const responseEvent = event as any;
            task.requestCount = (task.requestCount || 0) + 1;
            logger.debug(
              `Response output (request ${task.requestCount}): ${responseEvent.message?.content?.substring(0, 100) || 'No content'}`
            );
          }
        }

        // Clean up generator and agent references
        task.taskGenerator = undefined;
        task.taskAgent = undefined;

        // If we exit the loop without setting a final status, mark as completed
        if (task.status === 'running') {
          task.status = 'completed';
          task.completedAt = new Date();
          task.output = task.output || 'Task ended without explicit completion';
          this.cleanupTaskResources(taskId);
        }
      } catch (error: any) {
        task.status = 'failed';
        task.output = `ERROR: ${error.message}`;
        task.completedAt = new Date();
        task.taskGenerator = undefined;
        task.taskAgent = undefined;
        this.cleanupTaskResources(taskId);
        logger.error(`Task ${taskId} execution error:`, error);
        // Track error for monitoring
        task.errorCount = (task.errorCount || 0) + 1;
        task.lastError = error.message;
      } finally {
        // Always ensure cleanup
        this.cleanupTaskResources(taskId);
      }
    }
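
Because executeTask runs in the background, a client only sees results by polling with the returned task_id. A minimal client-side polling loop might look like this sketch, where `checkStatus` is a hypothetical stand-in for invoking the check_task_status tool:

```typescript
// Hypothetical client-side polling loop for a queued task.
// `checkStatus` stands in for a call to the check_task_status tool
// with the task_id returned by run_task.
type TaskStatus = { status: string; output?: string };

async function waitForTask(
  checkStatus: (taskId: string) => Promise<TaskStatus>,
  taskId: string,
  intervalMs = 1000
): Promise<string> {
  for (;;) {
    const { status, output } = await checkStatus(taskId);
    if (status === 'completed') return output ?? '';
    if (status === 'failed' || status === 'cancelled') {
      throw new Error(`Task ${taskId} ended with status: ${status}`);
    }
    // Still pending/running: wait before polling again
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
}
```

The server also exposes a wait_for_task tool for this purpose; the loop above simply makes the pending/running/completed lifecycle explicit.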
