# uitars-mcp
MCP server that gives AI coding agents local GUI grounding — the ability to find any UI element on screen and return its exact pixel coordinates.
Powered by UI-TARS-2B, ByteDance's 2B parameter GUI grounding model.
## Why
Claude Code's built-in computer-use sends every screenshot to the cloud for analysis. This MCP server runs a local vision model instead:
- ~1.2s per element find (vs cloud round-trip latency)
- 4.1GB VRAM (runs on any modern GPU)
- Fully offline — no API keys, no cloud dependency
- 90.7% accuracy on ScreenSpot desktop-text benchmark
- Native pixel coordinates — returns exact click targets
## Setup

### 1. Download UI-TARS-2B

```shell
# Requires ~4.5GB disk space
huggingface-cli download bytedance-research/UI-TARS-2B-SFT --local-dir ./ui-tars-2b
```
### 2. Install PyTorch with CUDA

```shell
# Install CUDA-enabled PyTorch first (adjust cu126 to your CUDA version)
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu126
```
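Before moving on, it can be worth confirming that the CUDA build of PyTorch actually sees your GPU. A minimal check (this only queries PyTorch's own API; if it reports CUDA as unavailable, revisit the index URL above):

```python
# Sanity check: confirm PyTorch is installed with working CUDA support.
import importlib.util

if importlib.util.find_spec("torch") is None:
    print("PyTorch is not installed")
else:
    import torch
    # A CUDA wheel reports a version suffix like "2.x.y+cu126"
    print("torch version:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())
```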
### 3. Install uitars-mcp

```shell
pip install uitars-mcp
# or from source:
pip install -e .
```
### 4. Configure Claude Code

Add to your Claude Code MCP settings (`~/.claude/settings.json`):

```json
{
  "mcpServers": {
    "uitars-mcp": {
      "command": "uitars-mcp",
      "env": {
        "UITARS_MODEL": "/path/to/ui-tars-2b"
      }
    }
  }
}
```
If installed in a venv, use the full path to the executable:

```json
{
  "mcpServers": {
    "uitars-mcp": {
      "command": "/path/to/venv/bin/uitars-mcp",
      "env": {
        "UITARS_MODEL": "/path/to/ui-tars-2b"
      }
    }
  }
}
```
## Tools

| Tool | What it does | Latency |
|---|---|---|
| `find_element` | Find a UI element by description; returns click coordinates | ~1.2s |
| `describe_screen` | Describe everything visible on screen | ~2s |
| `read_screen_text` | OCR — read all text on screen | ~3s |
| `check_element` | Check element state (enabled, value, etc.) | ~1s |
| `verify_action` | Verify an action worked by checking screen state | ~1.5s |
| `suggest_action` | Suggest the next action to achieve a goal | ~1.5s |
| `benchmark` | Measure end-to-end latency | varies |
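To make the first row concrete, here is roughly what a `find_element` result could look like. The field names below are illustrative assumptions, not the server's documented schema:

```python
# Hypothetical find_element result (field names are assumptions); the
# key point is that x/y are native screen pixels, directly usable by a
# computer-use click tool.
result = {
    "query": "blue Submit button",
    "x": 1280,  # native pixel column
    "y": 720,   # native pixel row
}

click_target = (result["x"], result["y"])
print(click_target)  # (1280, 720)
```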
## How it works

- Takes a screenshot via `mss` (fast, cross-platform)
- Resizes it to 1344px wide (optimal vision token count)
- Runs UI-TARS-2B inference on the GPU
- Converts the model's 0-1000 normalized coordinates to native screen pixels
- Returns coordinates ready for `computer-use` click tools
The model is lazy-loaded on first call (~3s), then stays in VRAM for subsequent calls.
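The coordinate conversion step can be sketched as follows (a simplified illustration, not the server's actual code; `to_screen_pixels` is a hypothetical helper):

```python
# UI-TARS emits coordinates on a 0-1000 normalized grid regardless of
# the resized screenshot width. Mapping back to native pixels only
# needs the real screen resolution.

def to_screen_pixels(norm_x: int, norm_y: int, screen_w: int, screen_h: int):
    """Scale 0-1000 normalized model output to native screen pixels."""
    return round(norm_x / 1000 * screen_w), round(norm_y / 1000 * screen_h)

# Model output (500, 500) on a 2560x1440 display is the exact center:
print(to_screen_pixels(500, 500, 2560, 1440))  # (1280, 720)
```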
## Environment variables

| Variable | Default | Description |
|---|---|---|
| `UITARS_MODEL` | (required) | Path to the UI-TARS-2B model directory |
## Requirements
- Python 3.10+
- NVIDIA GPU with 4.1GB+ VRAM
- CUDA-enabled PyTorch
- Windows or Linux (macOS untested)