blend-ai
An MCP server that enables AI assistants to control Blender through 161 specialized tools for 3D modeling, animation, and rendering. It provides a secure, thread-safe interface for executing validated operations in Blender via natural language commands.
blend-ai
The most intuitive and efficient MCP Server for Blender. Control Blender entirely through AI assistants like Claude — create 3D models, set up scenes, animate, render, and more, all through natural language.
<small>Demo: created via Claude Code using the Haiku model and 20 random reference images, in 5 minutes.</small>

Key Features
- 161 tools covering every major Blender domain: modeling, mesh editing, materials, shader nodes, lighting, camera, animation, rendering, sculpting, UV mapping, physics, geometry nodes, rigging, curves, grease pencil, collections, file I/O, Bool Tool, and viewport control
- Render-aware — automatically detects when Blender is rendering and queues commands instead of hanging
- Zero telemetry — no usage tracking, no analytics, no data collection. Everything runs locally.
- Zero-dependency Blender addon — the addon uses only the Python stdlib plus `bpy`; nothing to `pip install` inside Blender's bundled Python
- Thread-safe architecture — background TCP server with queue-based main-thread execution, respecting Blender's single-threaded API constraint
- MCP resources — browse scene objects, materials, and scene info as structured context
- Workflow prompts — pre-built prompt templates for common tasks (product shots, character base meshes, scene cleanup, turntable animations)
- Best practices prompt — guides AI clients toward preferred tools (e.g., Bool Tool auto ops over manual boolean modifiers)
Quickstart
1. Install the MCP server
```shell
git clone https://github.com/jabberwock/blend-ai.git
cd blend-ai
uv pip install -e .
```
2. Install the Blender addon
- Download the latest addon zip from GitHub Releases
- Open Blender (4.0+)
- Go to Edit > Preferences > Add-ons > Install from Disk...
- Select the downloaded `.zip` file
- Enable "blend-ai" in the addon list
<details> <summary><strong>Developer install (symlink)</strong></summary>
If you're developing on blend-ai, symlink the addon folder instead:
```shell
# macOS
ln -s "$(pwd)/addon" ~/Library/Application\ Support/Blender/5.0/scripts/addons/blend_ai

# Linux
ln -s "$(pwd)/addon" ~/.config/blender/5.0/scripts/addons/blend_ai

# Windows (run as admin)
mklink /D "%APPDATA%\Blender Foundation\Blender\5.0\scripts\addons\blend_ai" "%cd%\addon"
```
Then enable the addon in Blender preferences.
</details>
3. Start the server in Blender
In Blender's 3D Viewport, open the N-panel (press N), find the blend-ai tab, and click Start Server. The addon listens on 127.0.0.1:9876.
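If the tools later time out, it helps to confirm the addon's server is actually reachable. A quick port probe using only the stdlib (a standalone sketch, not part of blend-ai) looks like this:

```python
import socket

def addon_running(host="127.0.0.1", port=9876, timeout=1.0):
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

`addon_running()` should return `True` only while Blender is open with the blend-ai server started.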
4. Connect your AI assistant
<details> <summary><strong>Claude Code</strong></summary>
```shell
claude mcp add blend-ai -- uv run --directory /path/to/blend-ai blend-ai
```
Replace /path/to/blend-ai with the actual path to your clone. Make sure Blender is running with the addon server started before using the tools.
Usage:
```shell
$ claude
> Create a red metallic sphere on a white plane with three-point lighting
> Add a subdivision surface modifier to the sphere and set it to level 3
> Set up a turntable animation and render it to /tmp/turntable/
```
</details>
<details> <summary><strong>Claude Desktop</strong></summary>
Add blend-ai to your Claude Desktop config (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
```json
{
  "mcpServers": {
    "blend-ai": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/blend-ai", "blend-ai"]
    }
  }
}
```
Replace /path/to/blend-ai with the actual path to your clone. Or copy the contents of the bundled mcp.json into your config file.
Restart Claude Desktop. The Blender tools will appear in the tool list.
</details>
<details> <summary><strong>Other MCP Clients</strong></summary>
blend-ai is a standard MCP server using stdio transport. Any MCP-compatible client can connect using the mcp.json config or by running the server directly:
```shell
uv run --directory /path/to/blend-ai blend-ai
# or: python -m blend_ai.server
```
The server communicates over stdin/stdout using the MCP protocol. It connects to Blender's addon over TCP on 127.0.0.1:9876.
</details>
Tool Domains
<details> <summary><strong>All 161 tools across 24 modules</strong></summary>
| Domain | Tools | Highlights |
|---|---|---|
| Scene | 5 | Get scene info, set frame range, manage scenes |
| Objects | 14 | Create primitives, duplicate, parent, join, visibility, origin, convert, auto-smooth |
| Transforms | 6 | Position, rotation (euler/quat), scale, apply, snap |
| Modeling | 13 | Modifiers, booleans, subdivide, extrude, bevel, loop cut, bridge edge loops |
| Mesh Editing | 16 | Inset, fill, grid fill, mark seam/sharp, normals, dissolve, knife project, spin, crease |
| Bool Tool | 4 | Auto union, difference, intersect, slice (via Blender's Bool Tool addon) |
| Materials | 15 | Principled BSDF, textures, blend modes, shader node graph (add/connect/remove nodes) |
| Lighting | 7 | Point/sun/spot/area lights, HDRIs, light rigs, shadows |
| Camera | 6 | Create, aim, DOF, viewport capture, active camera |
| Animation | 8 | Keyframes, interpolation, frame range, follow path |
| Rendering | 6 | Engine, resolution, samples, output format, render |
| Curves | 10 | Bezier/NURBS/path, 3D text, convert, reverse, handle types, cyclic, subdivide |
| Sculpting | 8 | Brushes, remesh, multires, symmetry, dynamic topology |
| UV Mapping | 4 | Smart project, unwrap, projection, pack islands |
| Physics | 9 | Rigid body, cloth, fluid, particles (velocity, rendering, delete), bake |
| Geometry Nodes | 5 | Create node trees, add/connect nodes, set inputs |
| Armature | 6 | Bones, constraints, auto weights, pose |
| Grease Pencil | 5 | Create GP objects, layers, strokes with pressure/strength |
| Collections | 4 | Create, move objects, visibility, delete |
| File I/O | 5 | Import/export (FBX, OBJ, glTF, USD, STL...), save/open |
| Viewport | 3 | Shading mode, overlays, focus on object |
| Screenshot | 1 | Render viewport to file |
| Code Exec | 1 | Execute Python code in Blender |
</details>
Architecture
AI Assistant <--stdio/MCP--> blend-ai server <--TCP socket--> Blender addon <--bpy--> Blender
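The render-aware behavior (during a render the addon answers immediately with a "busy" status, and the MCP client retries with backoff) can be sketched as a client-side loop. Here `send_command` and the `"status"` field are stand-ins for the actual wire helpers, which this sketch does not reproduce:

```python
import time

def call_with_retry(send_command, command, retries=6, base_delay=0.5):
    """Retry a command while Blender reports 'busy' (e.g., mid-render),
    doubling the delay between attempts."""
    for attempt in range(retries):
        response = send_command(command)
        if response.get("status") != "busy":
            return response
        time.sleep(base_delay * (2 ** attempt))
    raise TimeoutError(f"Blender still busy after {retries} attempts")
```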
<details> <summary><strong>How it works</strong></summary>
- MCP Server (`src/blend_ai/`): Python process using the `mcp` SDK. Exposes tools, resources, and prompts over stdio. Validates all inputs before forwarding to Blender.
- Blender Addon (`addon/`): Runs a TCP socket server inside Blender on a background thread. Commands are queued and executed on the main thread via `bpy.app.timers` to respect Blender's threading model.
- Render Guard: Tracks render state via `bpy.app.handlers`. During renders, the server immediately returns a "busy" status instead of queueing commands that would time out. The MCP client auto-retries with backoff until the render completes.
- Protocol: Length-prefixed JSON messages over TCP. Each message is a 4-byte big-endian length header followed by a UTF-8 JSON payload.
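The framing described above fits in a few lines of stdlib Python. This is a sketch of the wire format only, not the project's actual helpers, and the example payload's field names are made up:

```python
import json
import struct

def encode_message(payload: dict) -> bytes:
    """Frame a dict as a 4-byte big-endian length header + UTF-8 JSON body."""
    body = json.dumps(payload).encode("utf-8")
    return struct.pack(">I", len(body)) + body

def decode_message(frame: bytes) -> dict:
    """Parse one complete frame back into a dict."""
    (length,) = struct.unpack(">I", frame[:4])
    return json.loads(frame[4:4 + length].decode("utf-8"))
```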
</details>
Privacy & Security
<details> <summary><strong>Privacy</strong></summary>
- Zero telemetry — blend-ai collects no usage data, sends no analytics, and makes no network requests beyond the local TCP connection to Blender on `127.0.0.1:9876`.
- Fully local — all communication stays on your machine. No cloud services, no external APIs, no phone-home behavior.
- Open source — the entire codebase is auditable. What you see is what runs.
</details>
<details> <summary><strong>Security</strong></summary>
- Localhost only: The TCP socket binds to `127.0.0.1` — never exposed to the network.
- Input validation: All inputs pass through validators before reaching Blender — name sanitization, path traversal prevention, numeric range checks, enum allowlists.
- File safety: Import operations disable `use_scripts_auto_execute` to prevent script injection from imported files. File extensions are checked against allowlists.
- Command allowlist: The addon dispatcher only processes explicitly registered commands. Unknown commands are rejected.
- Shader node allowlist: Only ~65 known shader node types can be created — prevents arbitrary type injection.
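As an illustration of the kinds of checks involved, here is a standalone sketch (not blend-ai's actual `validators.py`; the names and limits are assumptions):

```python
import math
import os

def validate_path(path: str, allowed_root: str) -> str:
    """Prevent path traversal: the resolved path must stay under allowed_root."""
    resolved = os.path.realpath(path)
    root = os.path.realpath(allowed_root)
    if os.path.commonpath([resolved, root]) != root:
        raise ValueError(f"path escapes allowed root: {path}")
    return resolved

def validate_range(value: float, lo: float, hi: float) -> float:
    """Numeric range check that also rejects NaN and infinity."""
    if not math.isfinite(value) or not (lo <= value <= hi):
        raise ValueError(f"value out of range [{lo}, {hi}]: {value}")
    return value

def validate_enum(value: str, allowed: frozenset) -> str:
    """Enum allowlist check: only explicitly permitted values pass."""
    if value not in allowed:
        raise ValueError(f"not in allowlist: {value}")
    return value
```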
</details>
Limitations
<details> <summary><strong>Known limitations</strong></summary>
- Blender must be running: The MCP server communicates with Blender over TCP. Blender must be open with the addon enabled and server started.
- Single connection: The addon accepts one client connection at a time. Multiple AI assistants cannot control the same Blender instance simultaneously.
- Selection is all-or-nothing: Most mesh editing tools operate on all geometry. Fine-grained vertex/edge/face selection by index is not yet exposed, though `select_linked` is available.
- Sculpt strokes cannot be simulated: You can configure brushes, symmetry, dyntopo, and remeshing, but actual brush strokes (`bpy.ops.sculpt.brush_stroke`) are not yet exposed — sculpting still requires manual interaction.
- Node graphs require sequential calls: Both shader node trees and geometry node trees must be built one node/connection at a time. There's no "create full graph from description" tool.
- No undo integration: Operations appear in Blender's undo history individually but there's no MCP-level undo/redo or transaction grouping.
- Viewport capture: Requires a visible 3D viewport. Headless Blender may not support viewport screenshots.
- No real-time feedback: The MCP protocol is request/response. There's no streaming of viewport updates or render progress.
</details>
Development
```shell
# Install with dev dependencies
uv pip install -e ".[dev]"

# Run tests (882 tests)
uv run --extra dev pytest

# Run tests with coverage
uv run --extra dev pytest --cov=blend_ai

# Lint
ruff check src/ tests/

# Format
ruff format src/ tests/
```
<details> <summary><strong>Project structure</strong></summary>
```
blend-ai/
├── src/blend_ai/          # MCP server
│   ├── server.py          # FastMCP entry point
│   ├── connection.py      # TCP client to Blender (with busy-retry)
│   ├── validators.py      # Input validation
│   ├── tools/             # 24 tool modules (161 tools)
│   ├── resources/         # MCP resources (scene, objects, materials)
│   └── prompts/           # Workflow prompt templates
├── addon/                 # Blender addon (zero external deps)
│   ├── __init__.py        # bl_info + register/unregister
│   ├── server.py          # TCP socket server
│   ├── dispatcher.py      # Command routing + allowlist
│   ├── thread_safety.py   # Main-thread execution queue
│   ├── render_guard.py    # Render state tracking
│   ├── ui_panel.py        # N-panel UI (start/stop)
│   └── handlers/          # 24 handler modules
└── tests/                 # 882 unit tests
```
</details>
License
MIT