<div align="center">
<picture> <source media="(prefers-color-scheme: dark)" srcset="docs/assets/logo-dark.svg"> <source media="(prefers-color-scheme: light)" srcset="docs/assets/logo-light.svg"> <img alt="Bernstein" src="docs/assets/logo-light.svg" width="340"> </picture>
<br>
Orchestrate any AI coding agent. Any model. One command.
<picture> <source media="(prefers-color-scheme: dark)" srcset="docs/assets/tui.svg"> <source media="(prefers-color-scheme: light)" srcset="docs/assets/tui.svg"> <img alt="Bernstein TUI — live task dashboard" src="docs/assets/tui.svg" width="700"> </picture>
Documentation · Getting Started · Glossary · Limitations
Wall of fame
"lol, good luck, keep vibecoding shit that you have no idea about xD" — PeaceFirePL, Reddit
</div>
Bernstein takes a goal, breaks it into tasks, assigns them to AI coding agents running in parallel, verifies the output, and merges the results. You come back to working code, passing tests, and a clean git history.
No framework to learn. No vendor lock-in. Agents are interchangeable workers — swap any agent, any model, any provider. The orchestrator itself is deterministic Python code. Zero LLM tokens on scheduling.
```bash
pip install bernstein
bernstein -g "Add JWT auth with refresh tokens, tests, and API docs"
```
Also available via pipx, uv tool install, brew, dnf copr, and npx bernstein-orchestrator. See install options.
## Supported agents
Bernstein auto-discovers installed CLI agents. Mix them in the same run — cheap local models for boilerplate, heavy cloud models for architecture.
| Agent | Models | Install |
|---|---|---|
| Claude Code | opus 4.6, sonnet 4.6, haiku 4.5 | npm install -g @anthropic-ai/claude-code |
| Codex CLI | gpt-5.4, o3, o4-mini | npm install -g @openai/codex |
| Gemini CLI | gemini-3-pro, gemini-3-flash | npm install -g @google/gemini-cli |
| Cursor | sonnet 4.6, opus 4.6, gpt-5.4 | Cursor app |
| Aider | Any OpenAI/Anthropic-compatible | pip install aider-chat |
| Ollama + Aider | Local models (offline) | brew install ollama |
| Amp, Cody, Continue.dev, Goose, Kilo, Kiro, OpenCode, Qwen, Roo Code, Tabby | Various | See docs |
| Generic | Any CLI with --prompt | Built-in |
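Any tool that accepts a prompt on its command line can be wired in as a generic agent. A hypothetical `bernstein.yaml` fragment (key names here are illustrative, not the documented schema; see the docs):

```yaml
# Hypothetical config sketch, field names are illustrative.
agents:
  my-local-agent:
    command: my-agent --prompt "{prompt}"   # any CLI that accepts --prompt
```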
> [!TIP]
> Run `bernstein --headless` for CI pipelines — no TUI, structured JSON output, non-zero exit on failure.
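A minimal headless run as a CI step, assuming `--headless` composes with the `-g` flag shown above (the goal string is just an example):

```bash
# No TUI; prints structured JSON and exits non-zero if any task fails verification.
bernstein --headless -g "Fix flaky integration tests"
```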
## Quick start

```bash
cd your-project
bernstein init                    # creates .sdd/ workspace + bernstein.yaml
bernstein -g "Add rate limiting"  # agents spawn, work in parallel, verify, exit
bernstein live                    # watch progress in the TUI dashboard
bernstein stop                    # graceful shutdown with drain
```
For multi-stage projects, define a YAML plan:
```bash
bernstein run plan.yaml           # skips LLM planning, goes straight to execution
bernstein run --dry-run plan.yaml # preview tasks and estimated cost
```
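As a hypothetical sketch (field names are illustrative; the real schema is in the docs), a plan pairs each task with the role, owned files, and completion signals described under "How it works" below:

```yaml
# Hypothetical plan.yaml, check the docs for the real schema.
goal: "Add rate limiting"
tasks:
  - id: limiter-core
    role: implementer
    files: [src/ratelimit.py]             # files this task owns
    done_when: [tests-pass, lint-clean]   # signals the janitor verifies
  - id: limiter-docs
    role: documenter
    depends_on: [limiter-core]
    files: [docs/rate-limiting.md]
    done_when: [files-exist]
```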
## How it works

1. Decompose — the manager breaks your goal into tasks with roles, owned files, and completion signals.
2. Spawn — agents start in isolated git worktrees, one per task. The main branch stays clean.
3. Verify — the janitor checks concrete signals: tests pass, files exist, lint is clean, types check.
4. Merge — verified work lands in main. Failed tasks are retried or routed to a different model.
The orchestrator is a Python scheduler, not an LLM. Scheduling decisions are deterministic, auditable, and reproducible.
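For intuition, "deterministic" here means scheduling is ordinary data-structure work. A toy ready-set dispatcher over a task dependency graph (illustrative only, not Bernstein's internals); sorting the ready set is what makes the order reproducible:

```python
from collections import deque

def schedule(tasks: dict[str, set[str]]) -> list[str]:
    """Dispatch order for tasks, where tasks maps id -> set of dependency ids."""
    indegree = {t: len(deps) for t, deps in tasks.items()}
    dependents: dict[str, list[str]] = {t: [] for t in tasks}
    for t, deps in tasks.items():
        for d in deps:
            dependents[d].append(t)
    # Sorting makes the dispatch order a pure function of the plan.
    ready = deque(sorted(t for t, n in indegree.items() if n == 0))
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for nxt in sorted(dependents[t]):
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    return order
```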
## Capabilities
Core orchestration — parallel execution, git worktree isolation, janitor verification, quality gates (lint + types + PII scan), cross-model code review, circuit breaker for misbehaving agents, token growth monitoring with auto-intervention.
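Conceptually, the isolation layer is the standard `git worktree` pattern, shown here with plain git for intuition (paths are illustrative, not Bernstein's layout):

```bash
# One detached working copy per task, so agents never touch the main checkout.
git worktree add .sdd/worktrees/task-17 -b task-17
# ...agent edits and tests inside .sdd/worktrees/task-17...
git worktree remove .sdd/worktrees/task-17   # after the branch merges or is discarded
```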
Intelligence — contextual bandit router learns optimal model/effort pairs over time. Knowledge graph for codebase impact analysis. Semantic caching saves tokens on repeated patterns. Cost anomaly detection with Z-score flagging.
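For a feel of the routing idea, here is a plain (non-contextual) epsilon-greedy sketch of a bandit over model/effort arms; it is not Bernstein's router, and a contextual version would also condition on task features:

```python
import random

class EpsilonGreedyRouter:
    """Toy bandit over (model, effort) arms, a sketch of the technique only."""

    def __init__(self, arms: list[str], epsilon: float = 0.1):
        self.epsilon = epsilon
        self.counts = {a: 0 for a in arms}
        self.means = {a: 0.0 for a in arms}  # running mean reward per arm

    def pick(self) -> str:
        if random.random() < self.epsilon:           # explore occasionally
            return random.choice(list(self.means))
        return max(self.means, key=self.means.get)   # otherwise exploit best-so-far

    def update(self, arm: str, reward: float) -> None:
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]
```

A reward signal could be, for instance, verification success discounted by token cost; arm names like `"opus/high"` or `"haiku/low"` are placeholders.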
Enterprise — HMAC-chained tamper-evident audit logs. Policy limits with fail-open defaults and multi-tenant isolation. PII output gating. OAuth 2.0 PKCE. SSO/SAML/OIDC auth. WAL crash recovery — no silent data loss.
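HMAC chaining is a standard construction; in a minimal sketch (not Bernstein's log format), each record's MAC also covers the previous record's MAC, so any edit, drop, or reorder invalidates every MAC after it:

```python
import hashlib
import hmac
import json

def append_entry(log: list[dict], entry: dict, key: bytes) -> None:
    """Append a record whose MAC covers both the entry and the previous MAC."""
    prev_mac = log[-1]["mac"] if log else ""
    payload = json.dumps(entry, sort_keys=True) + prev_mac
    mac = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    log.append({"entry": entry, "mac": mac})

def verify_chain(log: list[dict], key: bytes) -> bool:
    """Recompute every MAC in order; the first mismatch breaks the chain."""
    prev_mac = ""
    for rec in log:
        payload = json.dumps(rec["entry"], sort_keys=True) + prev_mac
        expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(rec["mac"], expected):
            return False
        prev_mac = rec["mac"]
    return True
```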
Observability — Prometheus /metrics, OTel exporter presets, Grafana dashboards. Per-model cost tracking (bernstein cost). Terminal TUI and web dashboard. Agent process visibility in ps.
Ecosystem — MCP server mode, A2A protocol support, GitHub App integration, pluggy-based plugin system, multi-repo workspaces, cluster mode for distributed execution, self-evolution via --evolve.
Full feature matrix: FEATURE_MATRIX.md
## How it compares

| | Bernstein | CrewAI | AutoGen | LangGraph |
|---|---|---|---|---|
| Orchestrator | Deterministic code | LLM-driven | LLM-driven | Graph + LLM |
| Works with | Any CLI agent (18+) | Python SDK classes | Python agents | LangChain nodes |
| Git isolation | Worktrees per agent | No | No | No |
| Verification | Janitor + quality gates | No | No | Conditional edges |
| Cost tracking | Built-in | No | No | No |
| State model | File-based (.sdd/) | In-memory | In-memory | Checkpointer |
| Self-evolution | Built-in | No | No | No |
Full comparison pages with detailed feature matrices.
## Monitoring

```bash
bernstein live          # TUI dashboard
bernstein dashboard     # web dashboard
bernstein status        # task summary
bernstein ps            # running agents
bernstein cost          # spend by model/task
bernstein doctor        # pre-flight checks
bernstein recap         # post-run summary
bernstein trace <ID>    # agent decision trace
bernstein explain <cmd> # detailed help with examples
bernstein dry-run       # preview tasks without executing
bernstein aliases       # show command shortcuts
bernstein config-path   # show config file locations
bernstein init-wizard   # interactive project setup
```
## Install
| Method | Command |
|---|---|
| pip | pip install bernstein |
| pipx | pipx install bernstein |
| uv | uv tool install bernstein |
| Homebrew | brew tap chernistry/bernstein && brew install bernstein |
| Fedora / RHEL | sudo dnf copr enable alexchernysh/bernstein && sudo dnf install bernstein |
| npm (wrapper) | npx bernstein-orchestrator |
Editor extensions: VS Marketplace · Open VSX
## Contributing
PRs welcome. See CONTRIBUTING.md for setup and code style.
## Support
If Bernstein saves you time: GitHub Sponsors · Open Collective
## License
"To achieve great things, two things are needed: a plan and not quite enough time." — Leonard Bernstein
<!-- mcp-name: io.github.chernistry/bernstein -->