# uitars-mcp

MCP server that gives AI coding agents local GUI grounding: the ability to find any UI element on screen and return its exact pixel coordinates.

Powered by UI-TARS-2B, ByteDance's 2B-parameter GUI grounding model.
## Why

Claude Code's built-in computer-use sends every screenshot to the cloud for analysis. This MCP server runs a local vision model instead:

- ~1.2s per element find (vs. cloud round-trip latency)
- 4.1GB VRAM (runs on any modern GPU)
- Fully offline: no API keys, no cloud dependency
- 90.7% accuracy on the ScreenSpot desktop-text benchmark
- Native pixel coordinates: returns exact click targets
## Setup

### 1. Download UI-TARS-2B

```shell
# Requires ~4.5GB disk space
huggingface-cli download bytedance-research/UI-TARS-2B-SFT --local-dir ./ui-tars-2b
```

### 2. Install PyTorch with CUDA

```shell
# Install CUDA-enabled PyTorch first (adjust cu126 to your CUDA version)
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu126
```

### 3. Install uitars-mcp

```shell
pip install uitars-mcp
# or from source:
pip install -e .
```

### 4. Configure Claude Code
Add to your Claude Code MCP settings (`~/.claude/settings.json`):

```json
{
  "mcpServers": {
    "uitars-mcp": {
      "command": "uitars-mcp",
      "env": {
        "UITARS_MODEL": "/path/to/ui-tars-2b"
      }
    }
  }
}
```

If installed in a venv, use the full path to the executable:

```json
{
  "mcpServers": {
    "uitars-mcp": {
      "command": "/path/to/venv/bin/uitars-mcp",
      "env": {
        "UITARS_MODEL": "/path/to/ui-tars-2b"
      }
    }
  }
}
```

## Tools
| Tool | What it does | Latency |
| --- | --- | --- |
| | Find a UI element by description, returns click coordinates | ~1.2s |
| | Describe everything visible on screen | ~2s |
| | OCR: read all text on screen | ~3s |
| | Check element state (enabled, value, etc.) | ~1s |
| | Verify an action worked by checking screen state | ~1.5s |
| | Suggest the next action to achieve a goal | ~1.5s |
| | Measure end-to-end latency | varies |
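The latency figures above can be reproduced with a simple wall-clock probe around any tool call. In this sketch, `find_element` is a hypothetical stand-in for the server's element-finding tool, not a real API:

```python
import time

def find_element(description: str) -> tuple[int, int]:
    # Hypothetical stand-in for the server's element-finding tool;
    # the sleep approximates model inference time.
    time.sleep(0.01)
    return (640, 360)

start = time.perf_counter()
coords = find_element("Submit button")
elapsed = time.perf_counter() - start
print(f"{coords} in {elapsed:.3f}s")
```

The same pattern works around any of the tools listed, since all of them are ordinary request/response calls.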
## How it works

1. Takes a screenshot via `mss` (fast, cross-platform)
2. Resizes it to 1344px wide (optimal vision token count)
3. Runs UI-TARS-2B inference on the GPU
4. Converts the model's 0-1000 normalized coordinates to native screen pixels
5. Returns coordinates ready for `computer-use` click tools

The model is lazy-loaded on the first call (~3s), then stays in VRAM for subsequent calls.
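The coordinate conversion in step 4 can be sketched as follows. The 0-1000 normalized convention comes from the description above; the function name and screen size are illustrative:

```python
def normalized_to_pixels(x: int, y: int, screen_w: int, screen_h: int) -> tuple[int, int]:
    """Map the model's 0-1000 normalized coordinates to native screen pixels."""
    return (round(x / 1000 * screen_w), round(y / 1000 * screen_h))

# Example: the model predicts (500, 250) on a 2560x1440 display
print(normalized_to_pixels(500, 250, 2560, 1440))  # -> (1280, 360)
```

Because the conversion scales against the native resolution rather than the resized 1344px screenshot, the returned coordinates can be passed directly to click tools.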
## Environment variables

| Variable | Default | Description |
| --- | --- | --- |
| `UITARS_MODEL` | (required) | Path to the UI-TARS-2B model directory |
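For example, the variable can be set from Python before launching the server; the path below is a placeholder, not a real location:

```python
import os

# UITARS_MODEL tells the server where the downloaded model lives
# (normally set via the "env" block in the MCP settings instead).
os.environ["UITARS_MODEL"] = "/path/to/ui-tars-2b"
print(os.environ["UITARS_MODEL"])
```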
## Requirements

- Python 3.10+
- NVIDIA GPU with 4.1GB+ VRAM
- CUDA-enabled PyTorch
- Windows or Linux (macOS untested)
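A quick sanity check for the first and third requirements; this sketch only reports whether `torch` is importable, not whether CUDA actually works:

```python
import importlib.util
import sys

# Python 3.10+ is required
python_ok = sys.version_info >= (3, 10)
# CUDA-enabled PyTorch must be installed separately (see Setup step 2)
torch_installed = importlib.util.find_spec("torch") is not None

print(f"python>=3.10: {python_ok}, torch installed: {torch_installed}")
```

If `torch` is installed, `torch.cuda.is_available()` is the usual follow-up check that the CUDA build actually sees the GPU.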