- Allows AI agents to query and analyze data stored in Amazon Redshift through a governed semantic layer.
- Enables performing analytical queries against ClickHouse databases via defined metrics and semantic models.
- Connects to Databricks to execute governed analytical queries against data models using the semantic layer.
- Supports querying DuckDB and MotherDuck instances, providing agents with access to analytical data processing.
- Provides an interface for AI agents to query Google BigQuery datasets using consistent metric definitions.
- Integrates with PostgreSQL databases to execute governed queries against cubes, views, and metrics.
- Allows AI agents to query Snowflake data warehouses using a semantic layer for consistent results across all consumers.
- Enables connection to Supabase-hosted PostgreSQL databases to query defined semantic data models.
Bonnard is an agent-native semantic layer — one set of metric definitions, every consumer (AI agents, apps, dashboards) gets the same governed answer. This repo is the self-hosted Docker deployment: run Bonnard on your own infrastructure with no cloud account needed.
Quick Start
# 1. Scaffold project
npx @bonnard/cli init --self-hosted
# 2. Configure your data source
# Edit .env with your database credentials
# 3. Start the server
docker compose up -d
# 4. Define your semantic layer
# Add cube/view YAML files to bonnard/cubes/ and bonnard/views/
# 5. Deploy models to the server
bon deploy
# 6. Verify your semantic layer
bon schema
# 7. Connect AI agents
bon mcp

Requires Node.js 20+ and Docker.
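Step 4 above asks for cube definitions in bonnard/cubes/. As a hedged illustration only — the `orders` cube, its table, and its fields are hypothetical, so adapt them to your own schema — a minimal Cube YAML model might look like:

```yaml
# bonnard/cubes/orders.yml — hypothetical example cube
cubes:
  - name: orders
    sql_table: public.orders      # source table in your warehouse
    measures:
      - name: count
        type: count
      - name: total_revenue
        type: sum
        sql: amount
    dimensions:
      - name: status
        type: string
        sql: status
      - name: created_at
        type: time
        sql: created_at
```

After adding a file like this, `bon deploy` pushes it to the server and `bon schema` lets you verify what was deployed.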
What's Included
MCP server — AI agents query your semantic layer over the Model Context Protocol
Cube semantic layer — SQL-based metric definitions with caching, access control, and multi-database support
Cube Store — pre-aggregation cache for fast analytical queries
Admin UI — browse deployed models, views, and measures at http://localhost:3000
Deploy API — push model updates via bon deploy without restarting containers
Health endpoint — GET /health for uptime monitoring
Connecting AI Agents
Run bon mcp to see connection config for your setup. Examples below.
Claude Desktop / Cursor
{
"mcpServers": {
"bonnard": {
"url": "https://bonnard.example.com/mcp",
"headers": {
"Authorization": "Bearer your-secret-token-here"
}
}
}
}

Claude Code
{
"mcpServers": {
"bonnard": {
"type": "url",
"url": "https://bonnard.example.com/mcp",
"headers": {
"Authorization": "Bearer your-secret-token-here"
}
}
}
}

CrewAI (Python)
from crewai_tools import MCPServerAdapter

server_params = {
    "url": "https://bonnard.example.com/mcp",
    "transport": "streamable-http",
    "headers": {"Authorization": "Bearer your-secret-token-here"},
}

with MCPServerAdapter(server_params) as tools:
    ...  # pass `tools` to your Agent or Crew

Production Deployment
Authentication
Protect your endpoints by setting ADMIN_TOKEN in .env:
ADMIN_TOKEN=your-secret-token-here

All API and MCP endpoints will require Authorization: Bearer <token>. The /health endpoint remains open for monitoring.
Restart after changing .env:
docker compose up -d

TLS with Caddy
Caddy provides automatic HTTPS via Let's Encrypt.
Create a Caddyfile next to your docker-compose.yml:
bonnard.example.com {
reverse_proxy localhost:3000
}

Add Caddy to your docker-compose.yml:
caddy:
image: caddy:2
ports:
- "80:80"
- "443:443"
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile:ro
- caddy_data:/data
restart: unless-stopped

Add the volume at the top level:
volumes:
models: {}
caddy_data: {}

Then remove the Bonnard port mapping (ports: - "3000:3000") since Caddy handles external traffic.
Deploy to a VM
# Copy project files to your server
scp -r . user@your-server:~/bonnard/
# SSH in and start
ssh user@your-server
cd ~/bonnard
docker compose up -d

Configuration
| Variable | Description | Default |
| --- | --- | --- |
| `CUBEJS_DB_TYPE` | Database driver (e.g. `postgres`) | — |
| `CUBEJS_DB_*` | Database connection settings (host, port, name, user, pass) | — |
| `CUBEJS_DATASOURCES` | Comma-separated list for multi-datasource setups | `default` |
| `CUBEJS_API_SECRET` | HS256 secret for Cube JWT auth (auto-generated) | — |
| `ADMIN_TOKEN` | Bearer token for API/MCP authentication | — (open) |
| *(see `.env.example`)* | Cube API port | `4000` |
| *(see `.env.example`)* | Bonnard server port | `3000` |
| *(see `.env.example`)* | Allowed CORS origins | |
| `CUBE_VERSION` | Cube Docker image tag | `latest` |
| `BONNARD_VERSION` | Bonnard Docker image tag | `latest` |
See .env.example for a full annotated configuration file.
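The HS256 secret above is used to sign the JWTs that authenticate requests to Cube's API. To make the mechanics concrete — this is an illustrative sketch of how any HS256 JWT is built, not Bonnard's actual client code, and the secret and claims shown are invented — a token can be minted with only the Python standard library:

```python
import base64
import hashlib
import hmac
import json


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as the JWT compact form requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def mint_hs256_jwt(secret: str, claims: dict) -> str:
    """Build a compact HS256-signed JWT (header.payload.signature)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"


# Hypothetical secret and claims, for illustration only
token = mint_hs256_jwt("my-cube-api-secret", {"sub": "bonnard"})
print(token.count("."))  # → 2 (three dot-separated segments)
```

In practice you never mint these by hand — the server signs them for you — but seeing the structure helps when debugging 403s from Cube: a mismatched `CUBEJS_API_SECRET` makes signature verification fail.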
Architecture
| Service | Image | Role |
| --- | --- | --- |
| `cube` | `cubejs/cube` | Semantic layer engine — executes queries against your warehouse |
| `cubestore` | `cubejs/cubestore` | Pre-aggregation cache — stores materialized results for fast reads |
| `bonnard` | *(see `docker-compose.yml`)* | MCP server, admin UI, deploy API — the interface layer for agents and tools |
All three services communicate over an internal Docker network. Only bonnard (port 3000) and optionally cube (port 4000) are exposed externally.
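As a sketch of how that exposure looks in docker-compose.yml — service names follow the text above, but this fragment omits image/env keys and the file generated for you may differ:

```yaml
# Fragment for illustration; not the full compose file
services:
  bonnard:
    ports:
      - "3000:3000"   # MCP server, admin UI, deploy API
  cube:
    ports:
      - "4000:4000"   # optional; omit to keep Cube internal-only
  cubestore: {}       # no published ports — internal network only
```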
Monitoring
# Health check
curl http://localhost:3000/health
# View logs
docker compose logs -f
# View active MCP sessions
curl -H "Authorization: Bearer <token>" http://localhost:3000/api/mcp/sessions

Deploying Schema Updates
From your development machine:
bon deploy

This pushes your cube/view YAML files to the running server. No restart needed — Cube picks up changes automatically.
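Views sit on top of cubes and expose a curated subset of members to consumers. As a hypothetical example — the names are invented, and it assumes a cube called `orders` with these members exists in bonnard/cubes/ — a view file might look like:

```yaml
# bonnard/views/revenue.yml — hypothetical example view
views:
  - name: revenue
    cubes:
      - join_path: orders
        includes:
          - total_revenue
          - status
          - created_at
```

Deploying a view alongside its cube gives agents a governed, narrower surface to query instead of the raw cube.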
Pinning Versions
Control image versions via .env:
CUBE_VERSION=v1.6
BONNARD_VERSION=latest

Supported Data Sources
Warehouses: Snowflake, Google BigQuery, Databricks, PostgreSQL (including Supabase, Neon, RDS), Amazon Redshift, DuckDB (including MotherDuck), ClickHouse
See the full documentation for connection guides.
Ecosystem
@bonnard/cli — scaffold projects, deploy models, connect agents
@bonnard/sdk — query the semantic layer from JavaScript/TypeScript
@bonnard/react — React chart components and dashboard viewer
Community
Discord: ask questions, share feedback, connect with the team
GitHub Issues: bug reports and feature requests
LinkedIn: follow for updates
Website: learn more about Bonnard