# Bernstein - Multi-agent orchestration
Orchestrate any AI coding agent. Any model. One command.
Documentation · Getting Started · Glossary · Limitations
## Wall of fame

> "lol, good luck, keep vibecoding shit that you have no idea about xD" — PeaceFirePL, Reddit
Bernstein takes a goal, breaks it into tasks, assigns them to AI coding agents running in parallel, verifies the output, and merges the results. You come back to working code, passing tests, and a clean git history.
No framework to learn. No vendor lock-in. Agents are interchangeable workers — swap any agent, any model, any provider. The orchestrator itself is deterministic Python code. Zero LLM tokens on scheduling.
```bash
pip install bernstein
bernstein -g "Add JWT auth with refresh tokens, tests, and API docs"
```

Also available via pipx, `uv tool install`, brew, dnf copr, and `npx bernstein-orchestrator`. See install options.
## Supported agents
Bernstein auto-discovers installed CLI agents. Mix them in the same run — cheap local models for boilerplate, heavy cloud models for architecture (see the sketch after the table).
| Agent | Models | Install |
| --- | --- | --- |
| Claude Code | opus 4.6, sonnet 4.6, haiku 4.5 | |
| Codex CLI | gpt-5.4, o3, o4-mini | |
| Gemini CLI | gemini-3-pro, 3-flash | |
| | sonnet 4.6, opus 4.6, gpt-5.4 | |
| | Any OpenAI/Anthropic-compatible | |
| Ollama + Aider | Local models (offline) | |
| Amp, Cody, Continue.dev, Goose, Kilo, Kiro, OpenCode, Qwen, Roo Code, Tabby | Various | See docs |
| Generic | Any CLI with | Built-in |
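For illustration, routing cheap local work and heavy cloud work in one run might look like the hypothetical `bernstein.yaml` excerpt below (the keys are assumptions, not a documented schema; the file itself is created by `bernstein init`, see Quick start):

```yaml
# Hypothetical bernstein.yaml excerpt: keys are illustrative assumptions,
# not the documented schema.
agents:
  boilerplate:
    agent: ollama            # cheap local model for rote edits
    model: qwen2.5-coder
  architecture:
    agent: claude-code       # heavy cloud model for design-level tasks
    model: opus-4.6
```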
Run `bernstein --headless` for CI pipelines — no TUI, structured JSON output, non-zero exit on failure.
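In CI that might look like this hypothetical GitHub Actions step (only `--headless` and `-g` appear in this README; the rest of the wiring is illustrative):

```yaml
# Hypothetical CI step: only --headless and -g are documented flags.
- name: Orchestrate task
  run: |
    pip install bernstein
    bernstein --headless -g "Fix the flaky rate-limit tests" > bernstein.json
  # structured JSON lands in bernstein.json; a failing run exits non-zero
  # and fails this step
```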
## Quick start
```bash
cd your-project
bernstein init                    # creates .sdd/ workspace + bernstein.yaml
bernstein -g "Add rate limiting"  # agents spawn, work in parallel, verify, exit
bernstein live                    # watch progress in the TUI dashboard
bernstein stop                    # graceful shutdown with drain
```

For multi-stage projects, define a YAML plan:
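A plan could look like the sketch below. The field names are hypothetical (this README does not spell out the schema); the task shape follows the roles, owned files, and completion signals described under "How it works":

```yaml
# Hypothetical plan.yaml sketch: field names are illustrative, not a documented schema.
goal: Add rate limiting
tasks:
  - id: middleware
    role: backend
    files: [src/ratelimit.py]     # files this task owns
    done_when:                    # completion signals the janitor verifies
      - tests pass
      - lint clean
  - id: docs
    role: writer
    files: [docs/rate-limiting.md]
    done_when:
      - file exists
```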
```bash
bernstein run plan.yaml            # skips LLM planning, goes straight to execution
bernstein run --dry-run plan.yaml  # preview tasks and estimated cost
```

## How it works
1. **Decompose** — the manager breaks your goal into tasks with roles, owned files, and completion signals.
2. **Spawn** — agents start in isolated git worktrees, one per task. Main branch stays clean.
3. **Verify** — the janitor checks concrete signals: tests pass, files exist, lint clean, types correct.
4. **Merge** — verified work lands in main. Failed tasks get retried or routed to a different model.
The orchestrator is a Python scheduler, not an LLM. Scheduling decisions are deterministic, auditable, and reproducible.
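Step 2's isolation maps onto standard git worktrees. The commands below are illustrative (Bernstein drives this internally, and the paths are assumptions):

```bash
git worktree add -b task-01 .sdd/worktrees/task-01   # isolated checkout on its own branch
# ...agent edits files and runs tests inside .sdd/worktrees/task-01...
git merge task-01           # run from the main checkout: verified work lands in main (step 4)
git worktree remove .sdd/worktrees/task-01
```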
## Capabilities
- **Core orchestration** — parallel execution, git worktree isolation, janitor verification, quality gates (lint + types + PII scan), cross-model code review, circuit breaker for misbehaving agents, token growth monitoring with auto-intervention.
- **Intelligence** — contextual bandit router learns optimal model/effort pairs over time (sketched after this list). Knowledge graph for codebase impact analysis. Semantic caching saves tokens on repeated patterns. Cost anomaly detection with Z-score flagging.
- **Enterprise** — HMAC-chained tamper-evident audit logs (see the chain sketch below). Policy limits with fail-open defaults and multi-tenant isolation. PII output gating. OAuth 2.0 PKCE. SSO/SAML/OIDC auth. WAL crash recovery — no silent data loss.
- **Observability** — Prometheus `/metrics`, OTel exporter presets, Grafana dashboards. Per-model cost tracking (`bernstein cost`). Terminal TUI and web dashboard. Agent process visibility in `ps`.
- **Ecosystem** — MCP server mode, A2A protocol support, GitHub App integration, pluggy-based plugin system, multi-repo workspaces, cluster mode for distributed execution, self-evolution via `--evolve`.
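The bandit router, conceptually: treat each (model, effort) pair as an arm, pick one per task context, then update with the observed reward (for example, verification success minus normalized cost). A minimal epsilon-greedy sketch, purely illustrative and not Bernstein's actual implementation:

```python
# Illustrative epsilon-greedy contextual bandit; NOT Bernstein's actual code.
import random
from collections import defaultdict

class EpsilonGreedyRouter:
    def __init__(self, arms, epsilon=0.1):
        self.arms = arms                     # (model, effort) pairs
        self.epsilon = epsilon
        self.value = defaultdict(float)      # running mean reward per (context, arm)
        self.count = defaultdict(int)

    def choose(self, context):
        if random.random() < self.epsilon:   # explore occasionally
            return random.choice(self.arms)
        return max(self.arms, key=lambda arm: self.value[(context, arm)])

    def update(self, context, arm, reward):
        key = (context, arm)
        self.count[key] += 1
        self.value[key] += (reward - self.value[key]) / self.count[key]

router = EpsilonGreedyRouter([("haiku-4.5", "low"), ("opus-4.6", "high")])
arm = router.choose("boilerplate")
router.update("boilerplate", arm, reward=0.9)  # task verified cheaply: high reward
```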
Full feature matrix: FEATURE_MATRIX.md
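HMAC chaining, for reference: each audit record's tag covers both the record and the previous record's tag, so editing or deleting any entry invalidates every tag after it. A minimal sketch with an assumed record format (not Bernstein's actual log schema):

```python
# Minimal HMAC-chained log sketch; record format is an assumption.
import hashlib, hmac, json

def append_entry(log: list, key: bytes, entry: dict) -> None:
    prev_tag = log[-1]["tag"] if log else "genesis"
    payload = prev_tag + json.dumps(entry, sort_keys=True)
    tag = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    log.append({"entry": entry, "tag": tag})

def verify_chain(log: list, key: bytes) -> bool:
    prev_tag = "genesis"
    for record in log:
        payload = prev_tag + json.dumps(record["entry"], sort_keys=True)
        expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, record["tag"]):
            return False                     # chain broken: tampering detected here
        prev_tag = record["tag"]
    return True

log: list = []
append_entry(log, b"secret", {"event": "task_merged", "task": "task-01"})
assert verify_chain(log, b"secret")
```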
## How it compares
| | Bernstein | CrewAI | AutoGen | LangGraph |
| --- | --- | --- | --- | --- |
| Orchestrator | Deterministic code | LLM-driven | LLM-driven | Graph + LLM |
| Works with | Any CLI agent (18+) | Python SDK classes | Python agents | LangChain nodes |
| Git isolation | Worktrees per agent | No | No | No |
| Verification | Janitor + quality gates | No | No | Conditional edges |
| Cost tracking | Built-in | No | No | No |
| State model | File-based (.sdd/) | In-memory | In-memory | Checkpointer |
| Self-evolution | Built-in | No | No | No |
See the full comparison pages for detailed feature matrices.
## Monitoring
```bash
bernstein live           # TUI dashboard
bernstein dashboard      # web dashboard
bernstein status         # task summary
bernstein ps             # running agents
bernstein cost           # spend by model/task
bernstein doctor         # pre-flight checks
bernstein recap          # post-run summary
bernstein trace <ID>     # agent decision trace
bernstein explain <cmd>  # detailed help with examples
bernstein dry-run        # preview tasks without executing
bernstein aliases        # show command shortcuts
bernstein config-path    # show config file locations
bernstein init-wizard    # interactive project setup
```

## Install
| Method | Command |
| --- | --- |
| pip | `pip install bernstein` |
| pipx | `pipx install bernstein` |
| uv | `uv tool install bernstein` |
| Homebrew | |
| Fedora / RHEL | |
| npm (wrapper) | `npx bernstein-orchestrator` |
Editor extensions: VS Marketplace · Open VSX
## Contributing
PRs welcome. See CONTRIBUTING.md for setup and code style.
## Support
If Bernstein saves you time: GitHub Sponsors · Open Collective
## License