databricks-mcp
A read-only MCP (Model Context Protocol) server for Databricks, enabling Claude to query Databricks SQL, browse metadata, and monitor jobs/pipelines.
Features
SQL Queries: Execute SELECT, SHOW, DESCRIBE queries (write operations blocked)
Metadata Browsing: List catalogs, schemas, tables, and search tables
Delta Lake: View table history, details, and grants
Jobs & Pipelines: List and monitor Databricks Jobs and DLT Pipelines
Query History: Browse SQL query history with filters
Cluster Metrics: Monitor CPU, memory, network usage from system tables
Installation
Prerequisites
Python 3.13+
uv package manager
Databricks workspace with SQL Warehouse
Setup
# Clone the repository
git clone https://github.com/ChrisChoTW/databricks-mcp.git
cd databricks-mcp
# Install dependencies
uv sync
# Create .env file
cp .env.example .env
Configuration
Edit .env with your Databricks credentials:
DATABRICKS_SERVER_HOSTNAME=your-workspace.cloud.databricks.com
DATABRICKS_HTTP_PATH=/sql/1.0/warehouses/your-warehouse-id
DATABRICKS_TOKEN=your-personal-access-token
Usage
With Claude Code
Add to your Claude Code MCP configuration (~/.claude.json):
{
  "mcpServers": {
    "databricks-sql": {
      "type": "stdio",
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/databricks-mcp",
        "run",
        "python",
        "server.py"
      ],
      "env": {
        "DATABRICKS_SERVER_HOSTNAME": "your-workspace.cloud.databricks.com",
        "DATABRICKS_HTTP_PATH": "/sql/1.0/warehouses/your-warehouse-id",
        "DATABRICKS_TOKEN": "your-token"
      }
    }
  }
}
Standalone
uv run python server.py
Available Tools
SQL & Metadata
| Tool | Description |
| --- | --- |
| | Execute SQL queries (read-only) |
| | List all catalogs |
| | List schemas in a catalog |
| | List tables in a schema |
| | Get table structure (DESCRIBE EXTENDED) |
| | Search tables by name |
Delta Lake
| Tool | Description |
| --- | --- |
| | View Delta table change history |
| | View Delta table details |
| | View object permissions |
| | List Unity Catalog volumes |
Jobs & Pipelines
| Tool | Description |
| --- | --- |
| | List Databricks Jobs |
| | Get job details |
| | List job run history |
| | Get run details |
| | List DLT Pipelines |
| | Get pipeline status |
Compute & Monitoring
| Tool | Description |
| --- | --- |
| | List SQL query history |
| | List SQL Warehouses |
| | List clusters |
| | Get cluster CPU/memory metrics |
| | Get cluster events |
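Cluster CPU and memory metrics like these are typically read from Databricks system tables. Below is a hedged sketch of the kind of query such a tool might issue; the table and column names follow the publicly documented `system.compute.node_timeline` schema and are an assumption about this server, not taken from its code, so verify them against your workspace:

```python
# Illustrative query for cluster CPU/memory metrics from Databricks
# system tables. Table and column names are assumptions based on the
# documented system.compute.node_timeline schema.
METRICS_QUERY = """
SELECT cluster_id,
       start_time,
       cpu_user_percent + cpu_system_percent AS cpu_busy_percent,
       mem_used_percent
FROM system.compute.node_timeline
WHERE start_time >= current_timestamp() - INTERVAL 1 HOUR
ORDER BY start_time DESC
"""
```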
Project Structure
databricks-mcp/
├── server.py # Entry point
├── core.py # Shared connections and MCP instance
└── tools/
├── query.py # SQL queries and metadata
├── delta.py # Delta Lake and permissions
├── jobs.py # Jobs management
├── pipelines.py # DLT Pipelines
├── compute.py # Clusters and query history
└── metrics.py # Cluster metrics
Security
This server is read-only by design:
❌ INSERT, UPDATE, DELETE, DROP, TRUNCATE, MERGE, COPY blocked
✅ SELECT, SHOW, DESCRIBE, CREATE VIEW allowed
Credentials are passed via environment variables (never hardcoded)
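A read-only policy like this can be enforced with a leading-keyword check. The function below is a minimal sketch of that idea, not the server's actual validation code; a real implementation would also have to reject multi-statement strings such as `SELECT 1; DROP TABLE t`:

```python
import re

# Keyword policy mirroring the rules above. CREATE is handled
# separately because only CREATE VIEW is permitted.
ALLOWED = {"SELECT", "SHOW", "DESCRIBE", "WITH"}

def is_read_only(query: str) -> bool:
    """Return True if the statement's leading keyword is read-only.

    Illustrative sketch only; does not handle multi-statement input.
    """
    # Skip leading whitespace and "--" line comments.
    stripped = re.sub(r"^\s*(?:--[^\n]*\n\s*)*", "", query)
    first = re.match(r"[A-Za-z]+", stripped)
    if first is None:
        return False
    keyword = first.group(0).upper()
    if keyword == "CREATE":
        # CREATE is permitted only for views, per the policy above.
        return re.match(r"(?i)CREATE\s+(?:OR\s+REPLACE\s+)?VIEW\b", stripped) is not None
    return keyword in ALLOWED
```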
License
MIT
Contributing
Issues and pull requests are welcome!