qontinui-mcp

A lightweight MCP server for the Qontinui Runner that enables AI-driven visual automation.

Installation

```shell
pip install qontinui-mcp
```

Quick Start

  1. Start the Qontinui Runner (desktop application)

  2. Configure your AI client (Claude Desktop, Claude Code, Cursor, etc.)

Add to your MCP configuration:

```json
{
  "mcpServers": {
    "qontinui": {
      "command": "qontinui-mcp",
      "args": []
    }
  }
}
```
  3. Run workflows via AI

The AI can now:

  • Load workflow configuration files

  • Run visual automation workflows

  • Monitor execution status

  • Control which monitor to use

Configuration

Environment variables:

| Variable | Description | Default |
| --- | --- | --- |
| QONTINUI_RUNNER_HOST | Runner host address | Auto-detected (WSL-aware) |
| QONTINUI_RUNNER_PORT | Runner HTTP port | 9876 |
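If the defaults need overriding (for example, when the Runner lives on another machine), most MCP clients such as Claude Desktop let you set these variables via an `env` block in the server entry. A minimal sketch; the host address below is only an example:

```json
{
  "mcpServers": {
    "qontinui": {
      "command": "qontinui-mcp",
      "args": [],
      "env": {
        "QONTINUI_RUNNER_HOST": "192.168.1.50",
        "QONTINUI_RUNNER_PORT": "9876"
      }
    }
  }
}
```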

Available Tools

| Tool | Description |
| --- | --- |
| get_executor_status | Get runner status |
| list_monitors | List available monitors |
| load_config | Load a workflow configuration file |
| ensure_config_loaded | Load config if not already loaded |
| get_loaded_config | Get loaded configuration info |
| run_workflow | Run a workflow by name |
| stop_execution | Stop current execution |
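Under the hood these tools are exposed over the standard MCP protocol, so a client triggers them with a `tools/call` request. A hedged sketch of such a request; the argument name (`workflow_name`) is an assumption and may differ from the server's actual input schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "run_workflow",
    "arguments": { "workflow_name": "login_test" }
  }
}
```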

Example Usage

In an AI conversation:

```
Load the config at /path/to/workflow.json and run the 'login_test' workflow on the left monitor
```

Development

```shell
# Clone
git clone https://github.com/qontinui/qontinui-mcp
cd qontinui-mcp

# Install dependencies
poetry install

# Run server locally
poetry run qontinui-mcp
```

License

MIT
