LLDB MCP Server
An MCP (Model Context Protocol) server that provides structured debugging tools for LLDB, designed for use with Claude Code and other MCP-compatible AI assistants.
Features
This server exposes LLDB debugging capabilities through well-defined MCP tools:
Execution Control
- lldb_run - Run a program with optional breakpoints and arguments
- lldb_analyze_crash - Analyze crash dumps and core files
Breakpoints & Watchpoints
- lldb_set_breakpoint - Set breakpoints by function, file:line, or address
- lldb_watchpoint - Set watchpoints to break on variable access
Inspection
- lldb_examine_variables - View local variables and arguments
- lldb_backtrace - Get stack traces for all threads
- lldb_registers - View CPU register values
- lldb_read_memory - Read and display memory contents
- lldb_threads - List all threads and their states
Code Analysis
- lldb_disassemble - Disassemble functions or address ranges
- lldb_source - List source code with line numbers
- lldb_symbols - Look up symbols by name, regex, or address
- lldb_images - List loaded executables and shared libraries
Expression Evaluation
- lldb_evaluate - Evaluate C/C++ expressions in debug context
Utilities
- lldb_run_command - Run arbitrary LLDB commands
- lldb_help - Get help on LLDB commands
- lldb_version - Show LLDB version info
Requirements
- Python 3.10+
- LLDB (with the lldb command-line tool in your PATH)
- mcp[cli] Python package
Installing LLDB
Ubuntu/Debian:
sudo apt install lldb
macOS:
# LLDB comes with Xcode Command Line Tools
xcode-select --install
Windows:
# Install via LLVM releases or Visual Studio
winget install LLVM.LLVM
Installation
Option 1: Install from source
# Clone the repository
git clone https://github.com/yourusername/lldb-mcp.git
cd lldb-mcp
# Install dependencies
pip install -e .
Option 2: Install dependencies directly
pip install "mcp[cli]" pydantic httpx
Configuration for Claude Code
Add the following to your Claude Code MCP configuration file:
Location of config file:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
- Linux: ~/.config/Claude/claude_desktop_config.json
Configuration:
{
  "mcpServers": {
    "lldb": {
      "command": "python",
      "args": ["/path/to/lldb-mcp/lldb_mcp_server.py"]
    }
  }
}
Or if installed as a package:
{
  "mcpServers": {
    "lldb": {
      "command": "lldb-mcp"
    }
  }
}
Using uvx (recommended for isolation):
{
  "mcpServers": {
    "lldb": {
      "command": "uvx",
      "args": ["--from", "/path/to/lldb-mcp", "lldb-mcp"]
    }
  }
}
Usage Examples
Once configured, you can ask Claude Code to help with debugging tasks:
Analyze a Crash
"Analyze the crash dump in ./core and the executable ./myprogram to find what caused the segfault"
Set Breakpoints and Examine State
"Set a breakpoint at the processData function in processor.cpp, run the program with argument 'test.txt', and show me the local variables when it stops"
Disassemble Code
"Show me the assembly for the main function in ./myprogram"
Evaluate Expressions
"Run ./myprogram until it hits parseConfig and evaluate the expression config->max_threads"
Memory Inspection
"Read 128 bytes of memory at address 0x7fff5fbff000 in hexadecimal format"
Symbol Lookup
"Find all symbols matching 'parse.*' regex in ./myprogram"
Tool Reference
lldb_run_command
Execute any LLDB command directly.
{
  "command": "help breakpoint",    # Any LLDB command
  "target": "./myprogram",         # Optional: executable to load
  "working_dir": "/path/to/dir"    # Optional: working directory
}
lldb_analyze_crash
Analyze crash dumps with full context.
{
  "executable": "./myprogram",
  "core_file": "./core",           # Optional: core dump
  "response_format": "markdown"    # or "json"
}
lldb_set_breakpoint
Set breakpoints with conditions.
{
  "executable": "./myprogram",
  "location": "main.cpp:42",       # or "functionName" or "0x400500"
  "condition": "i > 100"           # Optional: break condition
}
lldb_examine_variables
View variables at a breakpoint.
{
  "executable": "./myprogram",
  "breakpoint": "processData",
  "variables": ["buffer", "size"], # Optional: specific vars
  "args": ["input.txt"],           # Optional: program args
  "response_format": "markdown"
}
lldb_disassemble
Disassemble code regions.
{
  "executable": "./myprogram",
  "target": "main",                # Function name, address range, or "current"
  "show_bytes": true,              # Show opcode bytes
  "mixed": true                    # Interleave source
}
lldb_read_memory
Read memory contents.
{
  "executable": "./myprogram",
  "address": "0x7fff5fbff000",
  "count": 64,                     # Bytes to read
  "format": "x",                   # x=hex, b=binary, d=decimal, s=string
  "breakpoint": "main"             # Optional: stop here first
}
lldb_evaluate
Evaluate C/C++ expressions.
{
  "executable": "./myprogram",
  "expression": "ptr->data[5]",
  "breakpoint": "processBuffer",
  "args": ["test.dat"]
}
lldb_backtrace
Get stack traces.
{
  "executable": "./myprogram",
  "breakpoint": "handleError",     # or use core_file
  "core_file": "./core",           # For post-mortem
  "all_threads": true,
  "limit": 50,
  "response_format": "json"        # Structured output
}
lldb_registers
View CPU registers.
{
  "executable": "./myprogram",
  "breakpoint": "criticalSection",
  "register_set": "general",       # general, float, vector, all
  "specific_registers": ["rax", "rbx", "rsp"]  # Optional
}
lldb_watchpoint
Set data watchpoints.
{
  "executable": "./myprogram",
  "variable": "global_counter",
  "watch_type": "write",           # write, read, read_write
  "condition": "global_counter > 1000"
}
lldb_symbols
Look up symbols.
{
  "executable": "./myprogram",
  "query": "process.*",
  "query_type": "regex"            # name, regex, address, type
}
Alternative: Using LLDB's Built-in MCP Server
Recent LLDB releases ship a built-in MCP server. To use it instead:
1. Start LLDB and enable MCP:
   (lldb) protocol-server start MCP listen://localhost:59999
2. Configure Claude Code to connect via netcat:
   {
     "mcpServers": {
       "lldb": {
         "command": "/usr/bin/nc",
         "args": ["localhost", "59999"]
       }
     }
   }
Note: LLDB's built-in MCP only exposes a single lldb_command tool, whereas this server provides structured, specialized tools for better AI integration.
Development
Running Tests
pytest tests/
Type Checking
mypy lldb_mcp_server.py
Linting
ruff check lldb_mcp_server.py
ruff format lldb_mcp_server.py
Troubleshooting
"LLDB executable not found"
Ensure LLDB is installed and in your PATH:
which lldb
lldb --version
Permission denied on core files
On Linux, enable core dumps:
ulimit -c unlimited
sudo sysctl -w kernel.core_pattern=core.%p
Debugger can't find symbols
Compile your programs with debug info:
g++ -g -O0 myprogram.cpp -o myprogram
clang++ -g -O0 myprogram.cpp -o myprogram
License
MIT License - see LICENSE file for details.
Contributing
Contributions welcome! Please read CONTRIBUTING.md for guidelines.