Glama

MCP Task

by just-every

Server Configuration

Describes the environment variables required to run the server.

Name | Required | Description | Default

No arguments

Prompts

Interactive templates invoked by user choice

Name | Description
solve | Solve a complicated problem with multiple state-of-the-art LLMs
plan | Create a comprehensive plan using multiple state-of-the-art LLMs working in parallel
code | Generate or modify code using a state-of-the-art coding LLM
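
A client invokes one of these prompts via the standard MCP `prompts/get` request. A sketch of such a request follows; the argument name `problem` is an assumption for illustration, since this listing does not document the prompts' argument schemas:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "prompts/get",
  "params": {
    "name": "solve",
    "arguments": {
      "problem": "Example problem statement"
    }
  }
}
```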

Resources

Contextual data attached and managed by the client

Name | Description

No resources

Tools

Functions exposed to the LLM to take actions

run_task

Start a complex AI task. Performs advanced reasoning and analysis with state-of-the-art LLMs. Pass an array for model to start multiple tasks at once. Returns a task ID immediately (or a batch ID when multiple models are requested) that can be used to check status and retrieve results.

check_task_status

Check the status of a running task. Returns current status, progress, and partial results if available.

get_task_result

Get the final result of a completed task.

cancel_task

Cancel a pending or running task, or all tasks in a batch.

wait_for_task

Wait for a task, or for any task in a batch, to complete, fail, or be cancelled. Only waits for tasks that complete after this call is made; tasks that were already completed are ignored.

list_tasks

List all tasks with their current status.
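
Taken together, these tools form a start/poll/collect lifecycle. A minimal sketch of that flow, where `call_tool` is a hypothetical in-memory stand-in for an MCP client's tool invocation (the real transport is MCP JSON-RPC; only the tool names and argument `model` come from this listing, everything else is assumed):

```python
import time

# Hypothetical in-memory stand-in for the server's task store;
# a real client would send JSON-RPC tool calls over an MCP transport.
_tasks = {}

def call_tool(name, args):
    if name == "run_task":
        task_id = f"task-{len(_tasks) + 1}"
        # For the sketch, tasks "complete" instantly.
        _tasks[task_id] = {"status": "completed", "result": "done"}
        return {"task_id": task_id}
    if name == "check_task_status":
        return {"status": _tasks[args["task_id"]]["status"]}
    if name == "get_task_result":
        return {"result": _tasks[args["task_id"]]["result"]}
    raise ValueError(f"unknown tool: {name}")

# Start a task, poll until it reaches a terminal state, then fetch the result.
task_id = call_tool("run_task", {"task": "analyze this"})["task_id"]
while call_tool("check_task_status", {"task_id": task_id})["status"] not in (
    "completed", "failed", "cancelled"
):
    time.sleep(1)
result = call_tool("get_task_result", {"task_id": task_id})["result"]
```

In a real client, the polling loop could be replaced by a single `wait_for_task` call, which blocks until the task (or any task in its batch) reaches a terminal state.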

MCP directory API

We provide all the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/just-every/mcp-task'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.