uitars-mcp

MCP server that gives AI coding agents local GUI grounding — the ability to find any UI element on screen and return its exact pixel coordinates.

Powered by UI-TARS-2B, ByteDance's 2B-parameter GUI grounding model.

Why

Claude Code's built-in computer-use sends every screenshot to the cloud for analysis. This MCP server runs a local vision model instead:

  • ~1.2s per element find (vs cloud round-trip latency)

  • 4.1GB VRAM (fits on most modern NVIDIA GPUs)

  • Fully offline — no API keys, no cloud dependency

  • 90.7% accuracy on the ScreenSpot desktop-text benchmark

  • Native pixel coordinates — returns exact click targets

Setup

1. Download UI-TARS-2B

# Requires ~4.5GB disk space
huggingface-cli download bytedance-research/UI-TARS-2B-SFT --local-dir ./ui-tars-2b

2. Install PyTorch with CUDA

# Install CUDA-enabled PyTorch first (adjust cu126 to your CUDA version)
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu126
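
Verify the CUDA build is active before continuing (should print True):

python -c "import torch; print(torch.cuda.is_available())"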

3. Install uitars-mcp

pip install uitars-mcp
# or from source:
pip install -e .

4. Configure Claude Code

Add to your Claude Code MCP settings (~/.claude/settings.json):

{
  "mcpServers": {
    "uitars-mcp": {
      "command": "uitars-mcp",
      "env": {
        "UITARS_MODEL": "/path/to/ui-tars-2b"
      }
    }
  }
}

If installed in a venv, use the full path to the executable:

{
  "mcpServers": {
    "uitars-mcp": {
      "command": "/path/to/venv/bin/uitars-mcp",
      "env": {
        "UITARS_MODEL": "/path/to/ui-tars-2b"
      }
    }
  }
}

Tools

Tool             | What it does                                                 | Latency
-----------------|--------------------------------------------------------------|--------
find_element     | Find a UI element by description; returns click coordinates | ~1.2s
describe_screen  | Describe everything visible on screen                        | ~2s
read_screen_text | OCR: read all text on screen                                 | ~3s
check_element    | Check element state (enabled, value, etc.)                   | ~1s
verify_action    | Verify an action worked by checking screen state             | ~1.5s
suggest_action   | Suggest the next action to achieve a goal                    | ~1.5s
benchmark        | Measure end-to-end latency                                   | varies
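
To smoke-test the server outside Claude Code, you can drive it over stdio with the official MCP Python SDK. The sketch below is illustrative: the find_element argument name ("description") is an assumption, so list the tools first to confirm the real schema.

import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Spawn uitars-mcp as a stdio subprocess, the same way Claude Code does.
    # Merge os.environ so PATH (and CUDA variables) survive in the child.
    params = StdioServerParameters(
        command="uitars-mcp",
        env={**os.environ, "UITARS_MODEL": "/path/to/ui-tars-2b"},
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])  # confirm tool names and schemas
            # "description" is an assumed argument name, for illustration only
            result = await session.call_tool(
                "find_element", {"description": "blue Submit button"}
            )
            print(result.content)

asyncio.run(main())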

How it works

  1. Takes a screenshot via mss (fast, cross-platform)

  2. Resizes to 1344px wide (optimal vision token count)

  3. Runs UI-TARS-2B inference on GPU

  4. Converts the model's 0-1000 normalized coordinates to native screen pixels

  5. Returns coordinates ready for computer-use click tools

The model is lazy-loaded on first call (~3s), then stays in VRAM for subsequent calls.
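
A condensed sketch of steps 1, 2, and 4, using the same pieces the server names (mss for capture, a 0-1000 coordinate grid). Function names and structure here are illustrative assumptions, not the server's actual code:

import mss
from PIL import Image

TARGET_WIDTH = 1344  # width the server scales screenshots to

def capture_screen():
    # Step 1: grab the primary monitor via mss
    with mss.mss() as sct:
        shot = sct.grab(sct.monitors[1])
        img = Image.frombytes("RGB", shot.size, shot.rgb)
    return img, shot.width, shot.height

def resize_for_model(img):
    # Step 2: scale to 1344px wide, preserving aspect ratio
    height = round(img.height * TARGET_WIDTH / img.width)
    return img.resize((TARGET_WIDTH, height))

def to_native_pixels(x, y, screen_w, screen_h):
    # Step 4: UI-TARS emits coordinates on a 0-1000 normalized grid;
    # map them back onto the native screen resolution
    return round(x / 1000 * screen_w), round(y / 1000 * screen_h)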

Environment variables

Variable     | Default    | Description
-------------|------------|----------------------------------------
UITARS_MODEL | (required) | Path to the UI-TARS-2B model directory
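
For a one-off manual launch, e.g. when debugging, the variable can also be set inline (POSIX shell syntax):

UITARS_MODEL=/path/to/ui-tars-2b uitars-mcp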

Requirements

  • Python 3.10+

  • NVIDIA GPU with 4.1GB+ VRAM

  • CUDA-enabled PyTorch

  • Windows or Linux (macOS untested)
