
MCP Task

by just-every

@just-every/mcp-task

An asynchronous MCP server that runs long-running AI tasks with real-time progress monitoring, built on @just-every/task.

Quick Start

1. Create or use an environment file

Option A: Create a new .llm.env file in your home directory:

```bash
# Download example env file
curl -o ~/.llm.env https://raw.githubusercontent.com/just-every/mcp-task/main/.env.example

# Edit with your API keys
nano ~/.llm.env
```

Option B: Use an existing .env file (must use absolute path):

```bash
# Example: /Users/yourname/projects/myproject/.env
# Example: /home/yourname/workspace/.env
```

2. Install

Claude Code
```bash
# Using ~/.llm.env
claude mcp add task -s user -e ENV_FILE=$HOME/.llm.env -- npx -y @just-every/mcp-task

# Using existing .env file (absolute path required)
claude mcp add task -s user -e ENV_FILE=/absolute/path/to/your/.env -- npx -y @just-every/mcp-task

# For debugging, check if ENV_FILE is being passed correctly:
claude mcp list
```
Other MCP Clients

Add to your MCP configuration:

```json
{
  "mcpServers": {
    "task": {
      "command": "npx",
      "args": ["-y", "@just-every/mcp-task"],
      "env": {
        "ENV_FILE": "/path/to/.llm.env"
      }
    }
  }
}
```

Available Tools

run_task

Start a long-running AI task asynchronously. Returns a task ID immediately.

Parameters:

  • task (required): The task prompt describing what to perform
  • model (optional): Model class or specific model name
  • context (optional): Background context for the task
  • output (optional): The desired output/success state

Returns: Task ID for monitoring progress

check_task_status

Check the status of a running task with real-time progress updates.

Parameters:

  • task_id (required): The task ID returned from run_task

Returns: Current status, progress summary, recent events, and tool calls

get_task_result

Get the final result of a completed task.

Parameters:

  • task_id (required): The task ID returned from run_task

Returns: The complete output from the task

cancel_task

Cancel a pending or running task.

Parameters:

  • task_id (required): The task ID to cancel

Returns: Cancellation status

list_tasks

List all tasks with their current status.

Parameters:

  • status_filter (optional): Filter by status (pending, running, completed, failed, cancelled)

Returns: Task statistics and summaries
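
The status filter can be exercised from any MCP client; a minimal sketch, where `callTool` is a stand-in for your client's tool-invocation method and the responses are mocked:

```javascript
// Sketch only: `callTool` mocks an MCP client call to list_tasks,
// returning task summaries shaped like the server's output.
async function callTool(name, args = {}) {
  const tasks = [
    { task_id: 'abc-123', status: 'running' },
    { task_id: 'def-456', status: 'completed' },
  ];
  if (name === 'list_tasks') {
    return args.status_filter
      ? tasks.filter((t) => t.status === args.status_filter)
      : tasks;
  }
  throw new Error(`unknown tool: ${name}`);
}

callTool('list_tasks', { status_filter: 'running' }).then((running) => {
  console.log(running.map((t) => t.task_id)); // [ 'abc-123' ]
});
```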

Example Workflow

```javascript
// 1. Start a task
const startResponse = await callTool('run_task', {
  "model": "standard",
  "task": "Search for the latest AI news and summarize",
  "output": "A bullet-point summary of 5 recent AI developments"
});
// Returns: { "task_id": "abc-123", "status": "pending", ... }

// 2. Check progress
const statusResponse = await callTool('check_task_status', {
  "task_id": "abc-123"
});
// Returns: { "status": "running", "progress": "Searching for AI news...", ... }

// 3. Get result when complete
const resultResponse = await callTool('get_task_result', {
  "task_id": "abc-123"
});
// Returns: The complete summary
```

Supported Models

Model Classes

  • reasoning: Complex reasoning and analysis
  • vision: Image and visual processing
  • standard: General purpose tasks
  • mini: Lightweight, fast responses
  • reasoning_mini: Lightweight reasoning
  • code: Code generation and analysis
  • writing: Creative and professional writing
  • summary: Text summarization
  • vision_mini: Lightweight vision processing
  • long: Long-form content generation

Popular Models
  • claude-opus-4: Anthropic's most powerful model
  • grok-4: xAI's latest Grok model
  • gemini-2.5-pro: Google's Gemini Pro
  • o3, o3-pro: OpenAI's o3 models
  • And any other model name supported by @just-every/ensemble
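
A model class and a concrete model name both go in the same model field of run_task; a sketch of the two argument shapes (buildRunTaskArgs is a hypothetical helper, not part of the server API):

```javascript
// Hypothetical helper: assemble run_task arguments.
// `model` may be a class ("summary") or a concrete id ("claude-opus-4").
function buildRunTaskArgs(task, model, extras = {}) {
  return { task, model, ...extras };
}

const byClass = buildRunTaskArgs('Summarize this report', 'summary');
const byName = buildRunTaskArgs('Summarize this report', 'claude-opus-4', {
  output: 'A three-sentence executive summary',
});
console.log(byClass.model, byName.model); // summary claude-opus-4
```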

Integrated Tools

Tasks have access to:

  • Web Search: Search the web for information using @just-every/search
  • Command Execution: Run shell commands via the run_command tool

API Keys

The task runner requires API keys for the AI models you want to use. Add them to your .llm.env file:

```bash
# Core AI Models
ANTHROPIC_API_KEY=your-anthropic-key
OPENAI_API_KEY=your-openai-key
XAI_API_KEY=your-xai-key        # For Grok models
GOOGLE_API_KEY=your-google-key  # For Gemini models

# Search Providers (optional, for web_search tool)
BRAVE_API_KEY=your-brave-key
SERPER_API_KEY=your-serper-key
PERPLEXITY_API_KEY=your-perplexity-key
OPENROUTER_API_KEY=your-openrouter-key
```

Getting API Keys

Each key is issued through its provider's developer console: the Anthropic Console (ANTHROPIC_API_KEY), the OpenAI platform (OPENAI_API_KEY), the xAI console (XAI_API_KEY), and Google AI Studio (GOOGLE_API_KEY).

Task Lifecycle

  1. Pending: Task created and queued
  2. Running: Task is being executed with live progress via taskStatus()
  3. Completed: Task finished successfully
  4. Failed: Task encountered an error
  5. Cancelled: Task was cancelled by user

Tasks are automatically cleaned up after 24 hours.
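
The lifecycle above lends itself to a simple polling loop; a sketch, assuming `callTool` is your MCP client's tool-invocation method (passed in as a parameter so the helper stays client-agnostic):

```javascript
// Sketch: poll check_task_status until the task reaches a terminal state,
// then fetch the result. Terminal states mirror the lifecycle above.
const TERMINAL = new Set(['completed', 'failed', 'cancelled']);

async function waitForTask(callTool, taskId, { intervalMs = 2000, timeoutMs = 600000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const status = await callTool('check_task_status', { task_id: taskId });
    if (TERMINAL.has(status.status)) {
      if (status.status === 'completed') {
        return callTool('get_task_result', { task_id: taskId });
      }
      throw new Error(`task ended as ${status.status}`);
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  await callTool('cancel_task', { task_id: taskId }); // give up on a stuck task
  throw new Error('timed out waiting for task');
}
```

A client would call `waitForTask(callTool, startResponse.task_id)` right after run_task returns, instead of checking status by hand.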

CLI Usage

The task runner can also be used directly from the command line:

```bash
# Run as MCP server (for debugging)
ENV_FILE=~/.llm.env npx @just-every/mcp-task

# Or if installed globally
npm install -g @just-every/mcp-task
ENV_FILE=~/.llm.env mcp-task serve
```

Development

Setup

```bash
# Clone the repository
git clone https://github.com/just-every/mcp-task.git
cd mcp-task

# Install dependencies
npm install

# Build for production
npm run build
```

Development Mode

```bash
# Run in development mode with your env file
ENV_FILE=~/.llm.env npm run serve:dev
```

Testing

```bash
# Run tests
npm test

# Type checking
npm run typecheck

# Linting
npm run lint
```

Architecture

```
mcp-task/
├── src/
│   ├── serve.ts            # MCP server implementation
│   ├── index.ts            # CLI entry point
│   └── utils/
│       ├── task-manager.ts # Async task lifecycle management
│       └── logger.ts       # Logging utilities
├── bin/
│   └── mcp-task.js         # Executable entry
└── package.json
```

Contributing

Contributions are welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Add tests for new functionality
  4. Submit a pull request

Troubleshooting

MCP Server Shows "Failed" in Claude

If you see "task ✘ failed" in Claude, check these common issues:

  1. Missing API Keys: The most common issue is missing API keys. Check that your ENV_FILE is properly configured:
    ```bash
    # Test if ENV_FILE is working
    ENV_FILE=/path/to/your/.llm.env npx @just-every/mcp-task
    ```
  2. Incorrect Installation Command: Make sure you're using -e for environment variables:
    ```bash
    # Correct - environment variable passed with -e flag before --
    claude mcp add task -s user -e ENV_FILE=$HOME/.llm.env -- npx -y @just-every/mcp-task

    # Incorrect - trying to pass as argument
    claude mcp add task -s user -- npx -y @just-every/mcp-task --env ENV_FILE=$HOME/.llm.env
    ```
  3. Path Issues: ENV_FILE must use absolute paths:
    ```bash
    # Good
    ENV_FILE=/Users/yourname/.llm.env
    ENV_FILE=$HOME/.llm.env

    # Bad
    ENV_FILE=.env        # relative path
    ENV_FILE=~/.llm.env  # ~ not expanded in some contexts
    ```
  4. Verify Installation: Check your MCP configuration:
    claude mcp list
  5. Debug Mode: For detailed error messages, run manually:
    ENV_FILE=/path/to/.llm.env npx @just-every/mcp-task

Task Not Progressing

  • Check task status with check_task_status to see live progress
  • Look for error messages prefixed with "ERROR:" in the output
  • Verify API keys are properly configured

Model Not Found

  • Ensure model name is correctly spelled
  • Check that required API keys are set for the model provider
  • Popular models: claude-opus-4, grok-4, gemini-2.5-pro, o3

Task Cleanup

  • Completed tasks are automatically cleaned up after 24 hours
  • Use list_tasks to see all active and recent tasks
  • Cancel stuck tasks with cancel_task

License

MIT

Author

Created by Just Every, building powerful AI tools for developers.
