## Server Configuration
Describes the environment variables required to run the server.
| Name | Required | Description | Default |
|------|----------|-------------|---------|
| No arguments | | | |
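Because the server takes no environment variables, a client only needs to know how to launch it. Below is a minimal connection sketch using the official MCP TypeScript SDK; the launch command (`node dist/index.js`) and the client name are placeholders for illustration, not this server's documented entry point.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the server over stdio. The command and args below are placeholders;
// substitute the actual entry point for this server.
const transport = new StdioClientTransport({
  command: "node",
  args: ["dist/index.js"],
});

const client = new Client(
  { name: "example-client", version: "1.0.0" },
  { capabilities: {} },
);

await client.connect(transport);

// Confirm the prompts and tools listed in the tables below are available.
console.log(await client.listPrompts());
console.log(await client.listTools());
```

The sketches in the Prompts and Tools sections below reuse this connected `client`.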
## Schema
### Prompts
Interactive templates invoked by user choice
| Name | Description |
|------|-------------|
| solve | Solve a complicated problem with multiple state-of-the-art LLMs |
| plan | Create a comprehensive plan using multiple state-of-the-art LLMs working in parallel |
| code | Generate or modify code using a state-of-the-art coding LLM |
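As a rough sketch of how a client might retrieve one of these templates, the call below uses `getPrompt` from the MCP TypeScript SDK, reusing the connected `client` from the Server Configuration sketch. The `problem` argument name is hypothetical; the prompts' actual argument schemas are not listed here and can be discovered via `listPrompts()`.

```typescript
// Fetch the "solve" prompt template. The argument name "problem" is a guess;
// check the listPrompts() output for the argument names this server expects.
const prompt = await client.getPrompt({
  name: "solve",
  arguments: { problem: "Design a rate limiter for a multi-tenant API" },
});

// The returned messages are ready to hand to an LLM.
console.log(prompt.messages);
```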
### Resources
Contextual data attached and managed by the client
| Name | Description |
|------|-------------|
| No resources | |
### Tools
Functions exposed to the LLM to take actions
| Name | Description |
|------|-------------|
| run_task | Start a complex AI task. Perform advanced reasoning and analysis with state-of-the-art LLMs. Start multiple tasks at once by passing an array for `model`. Returns a task ID immediately (or a batch ID for multiple models) that can be used to check status and retrieve results. |
| check_task_status | Check the status of a running task. Returns the current status, progress, and partial results if available. |
| get_task_result | Get the final result of a completed task. |
| cancel_task | Cancel a pending or running task, or all tasks in a batch. |
| wait_for_task | Wait for a task, or any task in a batch, to complete, fail, or be cancelled. Only waits for tasks that complete after this call is made; tasks that were already completed are ignored. |
| list_tasks | List all tasks with their current status. |
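The tools describe an asynchronous task lifecycle: start a task, wait for it, then fetch the result. The sketch below strings those calls together with the MCP TypeScript SDK, again reusing `client` from the Server Configuration sketch. The argument names (`prompt`, `model`, `task_id`) and the model identifier are assumptions inferred from the descriptions above, not a documented schema, and the exact shape of each tool's response is not specified here.

```typescript
// 1. Start a task. Passing an array for `model` would start a batch instead.
//    The argument names here are assumptions based on the tool descriptions.
const started = await client.callTool({
  name: "run_task",
  arguments: {
    prompt: "Analyze the trade-offs between optimistic and pessimistic locking.",
    model: "example-model-id", // placeholder model identifier
  },
});
console.log(started); // per the description, this includes a task ID

// 2. Extract the task ID from the response. Its exact shape is not documented
//    in this README, so the value is left as a placeholder.
const taskId = "<task id from the run_task response>";

// 3. Block until the task completes, fails, or is cancelled.
await client.callTool({
  name: "wait_for_task",
  arguments: { task_id: taskId },
});

// 4. Retrieve the final result.
const result = await client.callTool({
  name: "get_task_result",
  arguments: { task_id: taskId },
});
console.log(result);
```

For long-running work, `check_task_status` can be polled instead of blocking on `wait_for_task`, and `cancel_task` aborts a single task or an entire batch.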