browsegrab
Token-efficient browser agent for local LLMs — Playwright + accessibility tree + MarkGrab, MCP native.
browsegrab is a lightweight browser automation library designed for local LLMs (8B-35B parameters). It combines Playwright's accessibility tree with MarkGrab's HTML-to-markdown conversion to achieve 5-8x fewer tokens per step compared to alternatives like browser-use.
Features
- Token-efficient: ~500-1,500 tokens/step (vs 4,000-10,000 for browser-use)
- Local LLM first: Optimized for vLLM, Ollama, and OpenAI-compatible endpoints
- MCP native: Built-in MCP server with 8 browser automation tools
- MarkGrab integration: HTML → clean markdown for content extraction
- Accessibility tree + ref system: Stable element references (e1, e2, ...) without vision models
- Success pattern caching: Zero LLM calls on repeated workflows
- 5-stage JSON parser: Robust action parsing for local LLM outputs (see the sketch after this list)
- Minimal dependencies: Only playwright + httpx in core
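The fallback-parsing idea looks roughly like the following minimal sketch. The stages below illustrate the general pattern for salvaging JSON from messy local-LLM output; they are not the library's actual five stages:

```python
import json
import re

def parse_action(raw: str) -> dict | None:
    """Illustrative fallback JSON parsing for messy LLM output."""
    # Stage 1: the whole output is already valid JSON.
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    # Stage 2: JSON wrapped in a fenced code block.
    fenced = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", raw, re.DOTALL)
    if fenced:
        try:
            return json.loads(fenced.group(1))
        except json.JSONDecodeError:
            pass
    # Stage 3: first {...} span anywhere in the text.
    braced = re.search(r"\{.*\}", raw, re.DOTALL)
    if braced:
        try:
            return json.loads(braced.group(0))
        except json.JSONDecodeError:
            pass
    return None
```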
Installation
pip install browsegrab
playwright install chromium
With optional features:
pip install browsegrab[mcp] # MCP server support
pip install browsegrab[content] # MarkGrab content extraction
pip install browsegrab[cli] # CLI with rich output
pip install browsegrab[all] # Everything
Quick Start
Python API
import asyncio

from browsegrab import BrowseSession

async def main():
    async with BrowseSession() as session:
        # Navigate and get accessibility tree snapshot
        await session.navigate("https://example.com")
        snap = await session.snapshot()
        print(snap.tree_text)
        # - heading "Example Domain" [level=1]
        # - link "Learn more": [ref=e1]

        # Click using ref ID
        result = await session.click("e1")
        print(result.url)  # https://www.iana.org/help/example-domains

        # Type into search box (ref taken from the fresh snapshot)
        await session.navigate("https://en.wikipedia.org")
        snap = await session.snapshot()
        await session.type("e4", "Python programming", submit=True)

        # Extract compressed content (AX tree + markdown)
        content = await session.extract_content()

asyncio.run(main())
CLI
# Accessibility tree snapshot
browsegrab snapshot https://example.com
# JSON output
browsegrab snapshot https://example.com -f json
# Extract content (AX tree + markdown)
browsegrab extract https://en.wikipedia.org/wiki/Python
# Agentic browse (requires LLM endpoint)
browsegrab browse https://example.com "Find the about page"
MCP Server
browsegrab-mcp # Start MCP server (stdio)
Claude Desktop / Cursor / VS Code config:
{
  "mcpServers": {
    "browsegrab": {
      "command": "browsegrab-mcp"
    }
  }
}
8 MCP tools: browser_navigate, browser_click, browser_type, browser_snapshot, browser_scroll, browser_extract_content, browser_go_back, browser_wait
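For a quick end-to-end check outside a chat client, a minimal sketch using the official mcp Python client package can drive the server over stdio. The "url" argument name is an assumption based on the tool's purpose, not confirmed by the docs above:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch browsegrab-mcp as a stdio subprocess and open an MCP session.
    params = StdioServerParameters(command="browsegrab-mcp")
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # The "url" argument name is assumed for illustration.
            result = await session.call_tool(
                "browser_navigate", {"url": "https://example.com"}
            )
            print(result)

asyncio.run(main())
```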
How It Works
browsegrab separates structure (accessibility tree) from content (MarkGrab markdown), sending only what the LLM needs:
Raw HTML
├── Structure: Accessibility tree → interactive elements → [ref=eN]
│              → ~200-500 tokens
└── Content: MarkGrab → clean markdown (on-demand)
             → ~300-800 tokens

Combined: ~500-1,300 tokens per step
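You can estimate this budget for any page with the public API. In the sketch below, the ~4-characters-per-token heuristic and calling str() on the extract_content() result are assumptions of this example, not library guarantees:

```python
import asyncio

from browsegrab import BrowseSession

def approx_tokens(text: str) -> int:
    # Crude ~4 characters/token heuristic; good enough for budgeting.
    return len(text) // 4

async def budget(url: str) -> None:
    async with BrowseSession() as session:
        await session.navigate(url)
        snap = await session.snapshot()
        structure = approx_tokens(snap.tree_text)  # typically ~200-500
        content = await session.extract_content()
        body = approx_tokens(str(content))         # typically ~300-800
        print(f"structure={structure} content={body} total={structure + body}")

asyncio.run(budget("https://example.com"))
```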
Token efficiency (measured)
| Page | Interactive elements | Tokens | browser-use equivalent |
|---|---|---|---|
| example.com | 1 | ~60 | ~500+ |
| Wikipedia article | 452 | ~1,254 | ~10,000+ |
Architecture
browsegrab/
├── config.py # Dataclass configs (env var loading)
├── result.py # Result types (ActionResult, BrowseResult, ...)
├── session.py # BrowseSession orchestrator
├── browser/
│ ├── manager.py # Playwright lifecycle (async context manager)
│ ├── snapshot.py # Accessibility tree + ref system
│ ├── selectors.py # 4-strategy selector resolver
│ └── actions.py # navigate, click, type, scroll, go_back, wait
├── dom/
│ ├── ref_map.py # ref ID ↔ element bidirectional mapping
│ └── compress.py # AX tree + MarkGrab → compressed context
├── llm/
│ ├── base.py # LLMProvider ABC
│ ├── provider.py # vLLM, Ollama, OpenAI-compatible
│ ├── prompt.py # System prompts (~400 tokens)
│ └── parse.py # 5-stage JSON fallback parser
├── agent/
│ ├── history.py # Sliding window history compression
│ ├── cache.py # Domain-based success pattern cache
│ └── loop_guard.py # Duplicate action detection
├── __main__.py # CLI (click)
└── mcp_server.py # FastMCP server (8 tools)
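To give a flavor of what agent/loop_guard.py is for, here is an illustrative duplicate-action guard. It sketches the idea (flag an action as a probable loop when the same action/target pair recurs within a sliding window); it is not the library's implementation:

```python
from collections import deque

class LoopGuard:
    """Flags a probable loop when the same (action, target) pair
    recurs too often within a sliding window of recent actions."""

    def __init__(self, window: int = 4, max_repeats: int = 2):
        self.recent: deque[tuple[str, str]] = deque(maxlen=window)
        self.max_repeats = max_repeats

    def allow(self, action: str, target: str) -> bool:
        key = (action, target)
        repeats = sum(1 for seen in self.recent if seen == key)
        self.recent.append(key)
        return repeats < self.max_repeats

guard = LoopGuard()
guard.allow("click", "e1")  # True
guard.allow("click", "e1")  # True
guard.allow("click", "e1")  # False: pair already seen twice in the window
```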
Configuration
All settings via environment variables (BROWSEGRAB_* prefix):
# Browser
BROWSEGRAB_BROWSER_HEADLESS=true
BROWSEGRAB_BROWSER_TIMEOUT_MS=30000
# LLM (for agentic browse)
BROWSEGRAB_LLM_PROVIDER=vllm # vllm | ollama | openai
BROWSEGRAB_LLM_BASE_URL=http://localhost:30000/v1
BROWSEGRAB_LLM_MODEL=Qwen/Qwen3.5-32B-AWQ
# Agent
BROWSEGRAB_AGENT_MAX_STEPS=10
BROWSEGRAB_AGENT_ENABLE_CACHE=true
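A minimal sketch of how env-var-backed settings like these might map onto a dataclass. Field names mirror the variables above; the real config.py may differ:

```python
import os
from dataclasses import dataclass

@dataclass
class BrowserConfig:
    headless: bool = True
    timeout_ms: int = 30000

    @classmethod
    def from_env(cls) -> "BrowserConfig":
        # Read BROWSEGRAB_BROWSER_* variables, falling back to the defaults above.
        return cls(
            headless=os.getenv("BROWSEGRAB_BROWSER_HEADLESS", "true").lower() == "true",
            timeout_ms=int(os.getenv("BROWSEGRAB_BROWSER_TIMEOUT_MS", "30000")),
        )
```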
Part of the QuartzUnit Ecosystem
| Library | Role |
|---|---|
| markgrab | Passive extraction (URL → markdown) |
| snapgrab | Passive capture (URL → screenshot) |
| docpick | Document OCR → structured JSON |
| browsegrab | Active automation (goal → browser actions → results) |
Development
git clone https://github.com/QuartzUnit/browsegrab.git
cd browsegrab
python -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
playwright install chromium
# Unit tests (no browser needed)
pytest tests/ -m "not e2e"
# Full suite including E2E
pytest tests/ -v
License
<!-- mcp-name: io.github.ArkNill/browsegrab -->