awesome-databricks-mcp
Host Model Context Protocol (MCP) prompts and tools on Databricks Apps, enabling AI assistants like Claude to interact with your Databricks workspace through a secure, authenticated interface.
What is this?
This template lets you create an MCP server that runs on Databricks Apps. You can:
Add prompts as simple markdown files in the prompts/ folder
Create tools as Python functions that leverage the Databricks SDK
Authenticate securely with OAuth through Databricks Apps
Deploy instantly to make your MCP server accessible to Claude
Web interface with a modern React TypeScript frontend for MCP discovery
Comprehensive testing with automated MCP validation tools
CI/CD pipeline with automated testing, security scanning, and deployment
Advanced dashboard tools for building comprehensive Lakeview dashboards
Security features with comprehensive injection-attack prevention and input validation
Think of it as a bridge between Claude and your Databricks workspace - you define what Claude can see and do, and this server handles the rest.
Related MCP server: Databricks MCP Server
How it Works
Architecture Overview
```
┌────────────┐    MCP Protocol     ┌─────────────────┐      OAuth      ┌─────────────────┐
│   Claude   │ ──────────────────► │  dba-mcp-proxy  │ ──────────────► │ Databricks App  │
│    CLI     │  (stdio/JSON-RPC)   │ (local process) │   (HTTPS/SSE)   │  (MCP Server)   │
└────────────┘                     └─────────────────┘                 └─────────────────┘
                                            ▲                                   │
                                            │                                   ▼
                                            └──── Databricks OAuth ───► Workspace APIs
```
```mermaid
graph TB
subgraph "MCP Client Layer"
A[Claude CLI]
B[Claude Code]
C[Other MCP Clients]
end
subgraph "MCP Proxy Layer"
D[dba-mcp-proxy]
E[OAuth Authentication]
F[Token Management]
end
subgraph "Databricks Apps Layer"
G[FastAPI App]
H[MCP Server]
I[React Frontend]
end
subgraph "Tools Layer"
J[Core Tools]
K[SQL Operations]
L[Unity Catalog]
M[Dashboard Tools]
N[Widget Tools]
O[Jobs & Pipelines]
P[Data Management]
Q[Governance]
end
subgraph "Databricks Services"
R[SQL Warehouses]
S[Unity Catalog]
T[Jobs & DLT]
U[DBFS & Volumes]
V[Workspace Files]
W[ML & Models]
end
A --> D
B --> D
C --> D
D --> E
E --> F
F --> G
G --> H
H --> J
H --> K
H --> L
H --> M
H --> N
H --> O
H --> P
H --> Q
J --> R
K --> R
L --> S
M --> S
N --> S
O --> T
P --> U
Q --> V
G --> I
```
Components
MCP Server (server/app.py): A FastAPI app with integrated MCP server that:
Dynamically loads prompts from prompts/*.md files
Exposes Python functions as MCP tools via modular tool system
Handles both HTTP requests and MCP protocol over Server-Sent Events
Uses FastMCP framework for seamless MCP integration
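For orientation, the FastMCP wiring follows this general shape (a minimal sketch; the server name and example tool are illustrative, not the actual app.py code):

```python
from fastmcp import FastMCP

mcp_server = FastMCP("databricks-mcp")  # name illustrative

@mcp_server.tool
def health() -> dict:
    """Toy health-check tool; the real tools live in server/tools/."""
    return {"status": "ok"}
```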
React Frontend (client/): A modern TypeScript React application that:
Provides a web interface for MCP discovery and testing
Shows available prompts, tools, and MCP configuration
Includes copy-paste setup commands for Claude integration
Built with TailwindCSS, Radix UI, and modern React patterns
Uses Vite for fast development and building
Prompts (prompts/): MCP-compliant markdown files with YAML frontmatter:
YAML frontmatter: Required metadata defining prompt name, description, and arguments
Argument validation: Built-in validation for required arguments and data types
Placeholder substitution: Automatic replacement of ${argument} placeholders
Security: Input sanitization and validation to prevent injection attacks
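The ${argument} placeholder substitution can be pictured as a simple regex pass; a hedged sketch (not the actual implementation):

```python
import re

def render_prompt(template: str, args: dict[str, str]) -> str:
    """Replace ${name} placeholders with argument values (illustrative sketch)."""
    return re.sub(
        r"\$\{(\w+)\}",
        lambda m: str(args.get(m.group(1), m.group(0))),  # unknown placeholders pass through
        template,
    )

# render_prompt("Warehouse: ${warehouse_id}", {"warehouse_id": "abc123"})
# -> "Warehouse: abc123"
```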
Modular Tools System (server/tools/): Organized tool modules that:
Break down functionality into logical, manageable components
Provide 88+ tools across nine specialized modules
Enable better maintainability and collaboration
Support easy addition of new tools
Local Proxy (dba_mcp_proxy/): Authenticates and proxies MCP requests:
Handles Databricks OAuth authentication automatically
Translates between Claude's stdio protocol and HTTP/SSE
Works with both local development and deployed apps
Comprehensive Testing Suite (claude_scripts/): Automated validation tools:
Local and remote MCP server testing
MCP protocol compliance validation
OAuth authentication flow testing
Web-based MCP Inspector for interactive testing
Security Features (server/tools/security.py): Comprehensive security validation:
SQL injection prevention with pattern detection
HTML/script injection prevention and sanitization
Dataset and field name validation
Widget configuration security validation
Input sanitization and escaping
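To make the pattern-detection idea concrete, here is a hedged sketch of the kind of check such a module performs (the patterns and function name are illustrative; the real rule set in server/tools/security.py is more thorough):

```python
import re

# Illustrative signatures only
SUSPICIOUS_SQL = [r";\s*(drop|delete|truncate|alter)\b", r"--", r"/\*", r"\bunion\s+select\b"]

def validate_sql_input(value: str) -> None:
    """Reject input matching common SQL-injection signatures (sketch)."""
    for pattern in SUSPICIOUS_SQL:
        if re.search(pattern, value, re.IGNORECASE):
            raise ValueError(f"Potentially unsafe input rejected: {pattern!r}")
```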
Demo
This 5-minute video shows you how to set up and use the MCP server with Claude and/or Cursor: https://www.youtube.com/watch?v=_yPtm2iH04o

Quick Start
Create Your Own MCP Server
Step 1: Install Databricks CLI and Set Up GitHub SSH Access
Before you begin, make sure you have the Databricks CLI and GitHub SSH access configured.
Install Databricks CLI (note: the pip package databricks-cli is the legacy CLI; the databricks auth login flow used below requires the new CLI, available via Homebrew or the install script):
macOS:
# Using Homebrew (recommended)
brew install databricks/tap/databricks
# Using pip
pip install databricks-cli
# Verify installation
databricks --version
Windows:
# Using pip
pip install databricks-cli
# Or download from official releases
# https://github.com/databricks/databricks-cli/releases
# Download the .exe file and add to PATH
# Verify installation
databricks --version
Linux:
# Using pip
pip install databricks-cli
# Using the official install script (Ubuntu/Debian, RHEL/CentOS)
curl -fsSL https://databricks.com/install-cli.sh | bash
# Verify installation
databricks --version
Set Up GitHub SSH Access:
macOS:
# Generate SSH key (if you don't have one)
ssh-keygen -t ed25519 -C "your-email@example.com"
# Start ssh-agent
eval "$(ssh-agent -s)"
# Add SSH key to ssh-agent
ssh-add ~/.ssh/id_ed25519
# Copy public key to clipboard
pbcopy < ~/.ssh/id_ed25519.pub
# Add to GitHub: https://github.com/settings/keys
# Click "New SSH key" and paste the copied key
Windows:
# Generate SSH key (if you don't have one)
ssh-keygen -t ed25519 -C "your-email@example.com"
# Start ssh-agent (PowerShell as Administrator)
Set-Service ssh-agent -StartupType Automatic
Start-Service ssh-agent
# Add SSH key to ssh-agent
ssh-add ~/.ssh/id_ed25519
# Copy public key to clipboard
Get-Content ~/.ssh/id_ed25519.pub | Set-Clipboard
# Add to GitHub: https://github.com/settings/keys
# Click "New SSH key" and paste the copied key
Linux:
# Generate SSH key (if you don't have one)
ssh-keygen -t ed25519 -C "your-email@example.com"
# Start ssh-agent
eval "$(ssh-agent -s)"
# Add SSH key to ssh-agent
ssh-add ~/.ssh/id_ed25519
# Copy public key to clipboard (if xclip is available)
xclip -sel clip < ~/.ssh/id_ed25519.pub
# Or display the key to copy manually
cat ~/.ssh/id_ed25519.pub
# Add to GitHub: https://github.com/settings/keys
# Click "New SSH key" and paste the copied key
Test GitHub SSH Connection:
# Test the connection
ssh -T git@github.com
# You should see: "Hi username! You've successfully authenticated..."
Configure Git with SSH:
# Set your Git username and email
git config --global user.name "Your Name"
git config --global user.email "your-email@example.com"
# Verify SSH is working by cloning a private repo
git clone git@github.com:username/private-repo.git
Step 2: Clone the Repo
# Clone the repository (or your fork of it)
git clone https://github.com/PulkitXChadha/awesome-databricks-mcp.git
cd awesome-databricks-mcp
# Run the interactive setup
./setup.sh
This will:
Configure Databricks authentication
Set your MCP server name
Install all dependencies (Python + Node.js)
Create your .env.local file
Step 3: Deploy with Claude
In Claude Code, run the deployment command. This will:
Deploy your MCP server to Databricks Apps
Configure the MCP integration
Show you available prompts and tools
Then restart Claude Code to use your new MCP server.
Add to Claude CLI
After deployment, add your MCP server to Claude:
# Set your Databricks configuration
export DATABRICKS_HOST="https://your-workspace.cloud.databricks.com"
export DATABRICKS_APP_URL="https://your-app.databricksapps.com" # Get this from ./app_status.sh
export SERVER_NAME="your-server-name" # This comes from config.yaml (set during ./setup.sh)
# Add your MCP server to Claude (user-scoped)
claude mcp add $SERVER_NAME --scope user -- \
uvx --from git+https://git@github.com/YOUR-USERNAME/your-repo.git dba-mcp-proxy \
--databricks-host $DATABRICKS_HOST \
--databricks-app-url $DATABRICKS_APP_URL
Local Development
# Clone and setup
git clone <your-repo>
cd <your-repo>
./setup.sh
# Start dev server (both backend and frontend)
./watch.sh
# Set your configuration for local testing
export DATABRICKS_HOST="https://your-workspace.cloud.databricks.com"
export DATABRICKS_APP_URL="http://localhost:8000" # Local dev server
# Add to Claude for local testing
claude mcp add databricks-mcp-local --scope local -- \
uvx --from git+https://git@github.com/YOUR-ORG/YOUR-REPO.git dba-mcp-proxy \
--databricks-host $DATABRICKS_HOST \
--databricks-app-url $DATABRICKS_APP_URL
Running Locally
Prerequisites
Before running the MCP server locally, ensure you have:
Python 3.11+ and Node.js 18+ installed
Databricks CLI configured with databricks auth login
Git for cloning the repository
uv package manager (recommended) or pip for Python dependencies
bun (recommended) or npm for Node.js dependencies
Step-by-Step Local Setup
1. Clone and Configure
# Clone your repository
git clone https://github.com/PulkitXChadha/awesome-databricks-mcp.git
cd awesome-databricks-mcp
# Run the interactive setup script
./setup.sh
The setup script will:
Install Python dependencies using uv or pip
Install Node.js dependencies using bun or npm
Configure your Databricks workspace settings
Create a .env.local file with your configuration
2. Start the Development Server
# Start both backend (FastAPI) and frontend (React) servers
./watch.sh
This command starts:
Backend: FastAPI server on http://localhost:8000
Frontend: React development server on http://localhost:3000
File watching: Automatic reloading when files change
3. Verify Local Setup
Open your browser and navigate to the frontend at http://localhost:3000; the backend API (including the MCP endpoint at /mcp/) is served from http://localhost:8000.
4. Test with Claude CLI
# Set environment variables for local testing
export DATABRICKS_HOST="https://your-workspace.cloud.databricks.com"
export DATABRICKS_APP_URL="http://localhost:8000"
# Add the local MCP server to Claude
claude mcp add databricks-mcp-local --scope local -- \
uvx --from git+https://git@github.com/PulkitXChadha/awesome-databricks-mcp.git dba-mcp-proxy \
--databricks-host $DATABRICKS_HOST \
--databricks-app-url $DATABRICKS_APP_URL
# Test the connection
echo "What MCP prompts are available from databricks-mcp-local?" | claude
Development Workflow
Making Changes
Edit prompts: Modify files in prompts/ directory
Edit tools: Update functions in appropriate modules under server/tools/
Edit frontend: Modify React components in client/src/
Edit backend: Update FastAPI routes in server/
All changes automatically reload thanks to the file watchers in ./watch.sh.
Creating New MCP Prompts
All prompts require YAML frontmatter for MCP compliance. Create a new markdown file in prompts/:
```markdown
---
name: your_prompt_name
description: Brief description of what the prompt does
arguments:
  - name: warehouse_id
    description: SQL Warehouse ID for query execution
    required: true
    schema:
      type: string
      pattern: "^[a-f0-9]{16}$"
  - name: catalog
    description: Unity Catalog name
    required: false
    schema:
      type: string
mutually_exclusive:
  - [option1, option2]  # Optional: define mutually exclusive arguments
---

# Your Prompt Title

## Configuration
- **Warehouse ID**: ${warehouse_id}
- **Catalog**: ${catalog}

Your prompt content here. Use ${argument_name} for placeholder substitution.
```
The YAML frontmatter provides:
Structured documentation: Clear definition of expected arguments
Future MCP compliance: Prepared for when FastMCP adds full argument support
Schema definitions: JSON Schema ready for validation
Argument metadata: Required/optional flags and descriptions
Note: FastMCP's current version doesn't support runtime argument validation in prompts,
but the YAML metadata documents the expected interface for future compatibility.
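Conceptually, loading one of these prompt files means splitting off the frontmatter and parsing it as YAML; a minimal sketch under that assumption (not the repo's actual loader):

```python
from pathlib import Path
import yaml  # PyYAML assumed

def load_prompt(path: Path) -> tuple[dict, str]:
    """Split a prompts/*.md file into (frontmatter metadata, prompt body)."""
    _, frontmatter, body = path.read_text().split("---", 2)
    return yaml.safe_load(frontmatter), body.strip()

# meta, body = load_prompt(Path("prompts/build_lakeview_dashboard.md"))
# meta["arguments"] carries the declared argument schemas.
```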
Testing Changes
# Test local MCP server directly
./claude_scripts/test_local_mcp_curl.sh
# Test with MCP proxy
./claude_scripts/test_local_mcp_proxy.sh
# Use the web-based MCP Inspector
./claude_scripts/inspect_local_mcp.sh
Debugging
Backend logs: Check terminal output from ./watch.sh
Frontend logs: Check browser console and terminal output
MCP logs: Monitor the /mcp/ endpoint responses
Database queries: Check Databricks workspace logs
Local vs Production Differences
| Feature | Local Development | Production |
|---------|-------------------|------------|
| Authentication | Databricks CLI token | OAuth via Databricks Apps |
| URL | http://localhost:8000 | https://your-app.databricksapps.com |
| HTTPS | No (HTTP only) | Yes (HTTPS required) |
| File watching | Yes (auto-reload) | No |
| Debug mode | Yes | No |
| Logs | Terminal output | Databricks Apps logs |
Troubleshooting Local Issues
Common Problems
Port conflicts:
# Check what's using port 8000
lsof -i :8000
# Kill process if needed
kill -9 <PID>
Dependencies not found:
# Reinstall Python dependencies
uv sync
# Reinstall Node.js dependencies
cd client && bun install
Databricks authentication:
# Refresh Databricks CLI credentials
databricks auth login
# Verify configuration
databricks auth describe
MCP connection issues:
# Test MCP endpoint directly
curl http://localhost:8000/mcp/
# Check Claude MCP configuration
claude mcp list
Performance Tips
Use uv instead of pip for faster Python dependency management
Use bun instead of npm for faster Node.js dependency management
The ./watch.sh script uses uvicorn --reload for fast backend development
Frontend uses Vite for fast hot module replacement
Deployment
# Deploy to Databricks Apps
./deploy.sh
# Check status and get your app URL
./app_status.sh
Your MCP server will be available at https://your-app.databricksapps.com/mcp/
The app_status.sh script will show your deployed app URL, which you'll need for the DATABRICKS_APP_URL environment variable when adding the MCP server to Claude.
Authentication
Examples
Using with Claude
Once added, you can interact with your MCP server in Claude:
Human: What prompts are available?
Claude: I can see the following prompts from your Databricks MCP server:
- build_lakeview_dashboard: Build comprehensive Lakeview dashboards with data validation
- build_ldp_pipeline: Build Lakehouse Data Pipelines for data processing
- performance_optimization: Performance optimization guidance for Databricks workloads
Sample Tool Usage
Human: Can you execute a SQL query to show databases?
Claude: I'll execute that SQL query for you using the execute_dbsql tool.
[Executes SQL and returns results]
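For a sense of what happens under the hood, here is a hedged sketch of an execute_dbsql-style tool built on the SDK's Statement Execution API (the tool name comes from this README; the body and the get_workspace_client helper are assumptions):

```python
@mcp_server.tool
def execute_dbsql(query: str, warehouse_id: str) -> dict:
    """Run a SQL statement on a warehouse and return the rows (sketch)."""
    w = get_workspace_client()  # assumed helper returning a WorkspaceClient
    resp = w.statement_execution.execute_statement(
        statement=query,
        warehouse_id=warehouse_id,
        wait_timeout="30s",  # block up to 30 seconds for small queries
    )
    rows = resp.result.data_array if resp.result else []
    return {"state": str(resp.status.state), "rows": rows}
```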
Project Structure
```
├── server/                         # FastAPI backend with MCP server
│   ├── app.py                      # Main application + MCP server setup
│   ├── tools/                      # Modular MCP tools implementation
│   │   ├── __init__.py             # Tool registration and loading
│   │   ├── core.py                 # Core and health tools (✅ Active)
│   │   ├── sql_operations.py       # SQL and warehouse tools (✅ Active)
│   │   ├── unity_catalog.py        # Unity Catalog tools (✅ Active)
│   │   ├── jobs_pipelines.py       # Jobs and DLT pipeline tools (✅ Active)
│   │   ├── lakeview_dashboard.py   # Lakeview dashboard tools (✅ Active)
│   │   ├── data_management.py      # Data and DBFS tools (🚧 Available)
│   │   ├── governance.py           # Governance and compliance tools (🚧 Available)
│   │   ├── widget_specs.py         # Widget specifications and utilities
│   │   ├── layout_optimization.py  # Dashboard layout optimization
│   │   └── utils.py                # Shared utility functions
│   └── routers/                    # API endpoints
├── client/                         # React TypeScript frontend
│   ├── src/                        # Source code
│   │   ├── components/             # Reusable UI components
│   │   ├── pages/                  # Page components
│   │   └── fastapi_client/         # Auto-generated API client
│   ├── package.json                # Node.js dependencies
│   └── tailwind.config.js          # TailwindCSS configuration
├── prompts/                        # MCP prompts (markdown files)
│   ├── build_lakeview_dashboard.md # Comprehensive dashboard building prompt
│   ├── build_ldp_pipeline.md       # Lakehouse Data Pipelines building prompt
│   └── performance_optimization.md # Performance optimization guidance
├── dba_mcp_proxy/                  # MCP proxy for Claude CLI
│   └── mcp_client.py               # OAuth + proxy implementation
├── claude_scripts/                 # Comprehensive testing tools
│   ├── test_local_mcp_*.sh         # Local MCP testing scripts (curl & proxy)
│   ├── test_remote_mcp_*.sh        # Remote MCP testing scripts (curl & proxy)
│   ├── test_mcp_tools.py           # MCP tools testing
│   ├── test_uc_tools.py            # Unity Catalog tools testing
│   ├── inspect_local_mcp.sh        # Local web-based MCP Inspector
│   ├── inspect_remote_mcp.sh       # Remote web-based MCP Inspector
│   └── README.md                   # Testing documentation
├── docs/                           # Documentation
│   ├── databricks_apis/            # Databricks API documentation
│   ├── architecture.md             # System architecture overview
│   ├── core-tools.md               # Core tools documentation
│   ├── unity_catalog_tools.md      # Unity Catalog tools documentation
│   └── api-reference.md            # Complete API reference
├── scripts/                        # Development tools
├── .github/workflows/              # CI/CD automation
│   ├── ci.yml                      # Continuous integration
│   ├── deploy.yml                  # Automated deployment
│   ├── security.yml                # Security scanning
│   └── test.yml                    # Automated testing
└── pyproject.toml                  # Python package configuration
```
Modular Tools System
The MCP server features a modular tools architecture that organizes tools into logical, maintainable modules. Five modules are currently active, exposing 59+ of the 88+ available tools:
Active Tool Modules
```mermaid
pie title Active MCP Tools by Module
"Core Tools" : 1
"SQL Operations" : 15
"Unity Catalog" : 21
"Jobs & Pipelines" : 19
"Lakeview Dashboard" : 3
Module Status Overview
| Module | Status | Tools | Description |
|--------|--------|-------|-------------|
| core.py | ✅ Active | 1 | Basic health checks and core functionality |
| sql_operations.py | ✅ Active | ~15 | SQL warehouse management, query execution, and monitoring |
| unity_catalog.py | ✅ Active | ~21 | Catalog, schema, table, and metadata operations |
| jobs_pipelines.py | ✅ Active | ~19 | Job and DLT pipeline management |
| lakeview_dashboard.py | ✅ Active | ~3 | Comprehensive Lakeview dashboard creation and management |
| data_management.py | 🚧 Available | ~10 | DBFS operations, external locations, storage credentials |
| governance.py | 🚧 Available | ~15 | Audit logs, governance rules, and data lineage |
Total: 88+ tools with 59+ currently active across 5 modules
Benefits of Modularization
Maintainability: Each module focuses on a specific domain
Readability: Smaller files are easier to navigate and debug
Collaboration: Multiple developers can work on different modules simultaneously
Testing: Individual modules can be tested in isolation
Scalability: New tools can be added to appropriate modules without cluttering
Documentation: Each module has clear purpose and can be documented independently
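The registration side of this layout can be pictured as each module attaching its own tools to the shared server; a hypothetical sketch (the module loader names are assumed, not taken from the repo):

```python
def load_tools(mcp_server) -> None:
    """Attach every active module's tools to the FastMCP server (sketch)."""
    from server.tools import core, sql_operations, unity_catalog, jobs_pipelines, lakeview_dashboard

    for module in (core, sql_operations, unity_catalog, jobs_pipelines, lakeview_dashboard):
        module.register(mcp_server)  # hypothetical: each module defines register()
```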
Building Dashboards with MCP
Dashboard Tool Architecture
The dashboard system provides comprehensive Lakeview dashboard creation through focused tools:
| Tool | Purpose | Key Features |
|------|---------|--------------|
| create_dashboard_file | Complete dashboard creation | Creates .lvdash.json files with full widget support |
| validate_widget_fields | Widget validation | Ensures widget configurations match dataset schemas |
| get_aggregation_expression | SQL optimization | Generates optimized aggregation expressions |
Key Capabilities:
16+ Widget Types: Full support for charts, displays, filters, and interactive elements
SQL Validation: Pre-deployment query testing and schema validation
Grid Layout System: 12-column responsive design with auto-positioning
Dataset Optimization: Smart query design for multiple widget support
Production Ready: Generates deployment-ready .lvdash.json files
Dashboard Building Quickstart
The MCP server provides comprehensive tools for building Lakeview and legacy dashboards programmatically. You can create, manage, and share dashboards using simple commands in Claude.
Tool Usage Examples
Creating a Complete Dashboard:
```python
# Using the create_dashboard_file tool
create_dashboard_file(
    name="Sales Dashboard",
    warehouse_id="abc123",
    datasets=[{
        "name": "Sales Data",
        "query": "SELECT product_category, revenue, order_date FROM sales_transactions WHERE revenue > 100"
    }],
    widgets=[
        {
            "type": "bar",
            "dataset": "Sales Data",
            "config": {
                "x_field": "product_category",
                "y_field": "revenue",
                "title": "Revenue by Category"
            },
            "position": {"x": 0, "y": 0, "width": 6, "height": 4}
        },
        {
            "type": "counter",
            "dataset": "Sales Data",
            "config": {
                "value_field": "revenue",
                "title": "Total Revenue"
            },
            "position": {"x": 6, "y": 0, "width": 3, "height": 2}
        }
    ],
    file_path="sales_dashboard.lvdash.json",
    validate_sql=True
)
```
Dashboard Creation Result:
```json
{
  "success": true,
  "file_path": "sales_dashboard.lvdash.json",
  "file_size": 2048,
  "message": "Dashboard file successfully created",
  "validation_results": {
    "queries_validated": [...],
    "widget_validations": [...]
  }
}
```
Basic Dashboard Creation
Human: Create a sales dashboard with revenue metrics and customer analysis
Claude: I'll create a Lakeview dashboard with revenue and customer metrics using the simplified tool architecture.
[Creates the dashboard with create_dashboard_file, adding bar-chart and counter widgets with validated configurations]
Widget Types Available
The MCP server supports all major widget types for dashboard creation:
| Widget Type | Description | Use Case |
|-------------|-------------|----------|
| counter | Single metric display | KPIs, totals, counts |
| table | Tabular data display | Detailed records, lists |
| bar | Bar charts | Comparisons, categories |
| line | Line charts | Trends over time |
| pie | Pie charts | Proportions, distributions |
| area | Area charts | Cumulative trends |
| scatter | Scatter plots | Correlations, clusters |
| pivot | Pivot tables | Multi-dimensional analysis |
| funnel | Funnel charts | Conversion analysis |
| box | Box plots | Statistical distributions |
| heatmap | Heat maps | Density visualization |
| markdown | Text/documentation | Explanations, headers |
SQL Query Guidelines
When building dashboards, follow these SQL best practices:
```sql
-- Use parameterized queries for flexibility
SELECT
  date_trunc('month', order_date) as month,
  sum(revenue) as total_revenue,
  count(distinct customer_id) as unique_customers
FROM sales
WHERE order_date >= :start_date
  AND order_date <= :end_date
GROUP BY 1
ORDER BY 1;

-- Include proper aliasing for widget display
SELECT
  product_name as "Product",
  sum(quantity) as "Units Sold",
  sum(revenue) as "Revenue"
FROM sales_detail
GROUP BY 1;
```
Layout and Positioning
Dashboards use a 12-column grid system for responsive layouts:
```python
# Widget positioning example
{
    "x": 0,       # Start at left edge (0-11)
    "y": 0,       # Start at top
    "width": 6,   # Half width (1-12)
    "height": 4   # Standard height (typically 2-8)
}

# Common layout patterns:
#   Full width:    width=12
#   Half width:    width=6
#   Third width:   width=4
#   Quarter width: width=3
```
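A small helper makes these grid constraints concrete (illustrative only; the server's own validation may differ):

```python
GRID_COLUMNS = 12

def position_fits_grid(pos: dict) -> bool:
    """True if a widget position stays inside the 12-column grid (sketch)."""
    return (
        0 <= pos["x"] < GRID_COLUMNS
        and 1 <= pos["width"] <= GRID_COLUMNS
        and pos["x"] + pos["width"] <= GRID_COLUMNS
        and pos["y"] >= 0
        and pos["height"] >= 1
    )

# position_fits_grid({"x": 6, "y": 0, "width": 6, "height": 4})  # True
# position_fits_grid({"x": 8, "y": 0, "width": 6, "height": 4})  # False: overflows the grid
```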
Common Dashboard Patterns
Executive Dashboard
Human: Create an executive dashboard with KPIs, trends, and department breakdowns
Claude: I'll create a comprehensive executive dashboard with:
- Top KPI counters (revenue, growth, customers)
- Revenue trend line chart
- Department performance bar chart
- Regional heat map
- Key metrics table
Analytics Dashboard
Human: Build a customer analytics dashboard with segmentation and behavior analysis
Claude: I'll create a customer analytics dashboard featuring:
- Customer segmentation pie chart
- Cohort retention heatmap
- Purchase behavior funnel
- Customer lifetime value distribution
- Product affinity analysis
Operational Dashboard
Human: Create an operations dashboard for monitoring system performance
Claude: I'll build an operational dashboard with:
- Real-time metrics counters
- Performance trend lines
- Alert status table
- Resource utilization gauges
- Error log analysis
Dashboard Management
Listing and Discovery
# Find existing dashboards
echo "List all Lakeview dashboards" | claude
# Get dashboard details
echo "Show me the configuration of dashboard-id-123" | claude
Updating Dashboards
# Add new widgets
echo "Add a revenue forecast widget to dashboard-id-123" | claude
# Modify existing widgets
echo "Update the sales chart to show quarterly data in dashboard-id-123" | claude
Sharing and Permissions
# Share with users
echo "Share dashboard-id-123 with user@company.com with READ access" | claude
# Share with groups
echo "Grant WRITE access to dashboard-id-123 for the analytics team" | claude
Best Practices
Start Simple: Begin with core metrics and expand iteratively
Use Consistent Styling: Maintain uniform colors and formatting
Optimize Queries: Use aggregations and filters to improve performance
Add Context: Include markdown widgets for explanations
Test Interactivity: Verify filters and parameters work correctly
Document Data Sources: Note which tables and schemas are used
Example: Complete Sales Dashboard
Human: Create a comprehensive sales dashboard with revenue trends, product performance, and regional analysis
Claude: I'll create a complete sales dashboard with multiple sections:
1. **KPI Section** (top row):
- Total Revenue counter
- Growth Rate counter
- Active Customers counter
- Average Order Value counter
2. **Trends Section** (second row):
- Monthly Revenue line chart
- Product Category performance bar chart
3. **Analysis Section** (third row):
- Regional sales heatmap
- Top Products table
- Customer Segment pie chart
4. **Details Section** (bottom):
- Detailed transactions table with filters
- Markdown widget with data definitions
The dashboard will be interactive with date range filters and drill-down capabilities.
Comprehensive Testing Suite
This template includes comprehensive testing tools for validating MCP functionality at multiple levels.
Testing Architecture
```mermaid
graph LR
subgraph "Test Types"
A[Command Line Tests]
B[Web UI Tests]
C[Integration Tests]
D[Performance Tests]
end
subgraph "Test Coverage"
E[Local MCP Server]
F[Remote MCP Server]
G[OAuth Authentication]
H[MCP Protocol]
I[Tool Discovery]
J[Tool Execution]
end
subgraph "Test Tools"
K[test_local_mcp_curl.sh]
L[test_local_mcp_proxy.sh]
M[test_remote_mcp_curl.sh]
N[test_remote_mcp_proxy.sh]
O[inspect_local_mcp.sh]
P[inspect_remote_mcp.sh]
end
A --> E
A --> F
B --> E
B --> F
C --> G
C --> H
D --> I
D --> J
K --> E
L --> E
M --> F
N --> F
O --> E
P --> F
```
Quick Verification
After adding the MCP server to Claude, verify it's working:
# List available prompts and tools
echo "What MCP prompts are available from databricks-mcp?" | claude
# Test a specific prompt
echo "Use the build_lakeview_dashboard prompt from databricks-mcp" | claude
Testing Scripts
The claude_scripts/ directory contains 9 comprehensive testing tools:
Command Line Tests
# Test local MCP server (requires ./watch.sh to be running)
./claude_scripts/test_local_mcp_curl.sh # Direct HTTP/curl tests
./claude_scripts/test_local_mcp_proxy.sh # MCP proxy client tests
# Test remote MCP server (requires Databricks auth and deployment)
./claude_scripts/test_remote_mcp_curl.sh # OAuth + HTTP tests
./claude_scripts/test_remote_mcp_proxy.sh # Full end-to-end MCP proxy tests
# Specialized tool testing
./claude_scripts/test_mcp_tools.py # MCP tools validation
./claude_scripts/test_uc_tools.py # Unity Catalog tools testing
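Under the hood these scripts exchange plain JSON-RPC over HTTP; a minimal Python equivalent of the curl tests (endpoint path from this README; the headers are an assumption):

```python
import requests  # assumes ./watch.sh is serving http://localhost:8000

resp = requests.post(
    "http://localhost:8000/mcp/",
    json={"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}},
    headers={"Accept": "application/json, text/event-stream"},
)
print(resp.status_code)
print(resp.text[:500])  # tool listing (may arrive as an SSE-framed payload)
```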
Interactive Web UI Tests
# Launch MCP Inspector for visual testing
./claude_scripts/inspect_local_mcp.sh # Local server web interface
./claude_scripts/inspect_remote_mcp.sh # Remote server web interface
MCP Inspector Features:
Web-based interface for interactive MCP server testing
Visual tool execution with parameter input forms
Real-time request/response monitoring
Protocol-level debugging and error inspection
Complete tool and resource discovery
Session management and connection status
Test Results Summary
| Test | Status | Notes |
|------|--------|-------|
| Local curl | ✅ Pass | Authentication & headers validated |
| Local proxy | ✅ Pass | Full MCP protocol compliance |
| Remote curl | ✅ Pass | OAuth authentication & headers validated |
| Remote proxy | ✅ Pass | End-to-end OAuth + MCP working |
CI/CD Pipeline
The project now includes automated CI/CD workflows for quality assurance and deployment:
Continuous Integration
```mermaid
graph LR
A[Code Push] --> B[Automated Testing]
B --> C[Code Quality Checks]
C --> D[Security Scanning]
D --> E[Build Validation]
E --> F[Test Results]
subgraph "Quality Gates"
G[Python Linting]
H[Type Checking]
I[Unit Tests]
J[Integration Tests]
K[Frontend Build]
L[Security Scan]
end
B --> G
B --> H
B --> I
B --> J
B --> K
D --> L
```
Automated Workflows
CI Pipeline (.github/workflows/ci.yml):
Runs on every push and pull request
Executes comprehensive test suite
Performs code quality checks (ruff, type checking)
Validates frontend build process
Ensures all dependencies are properly configured
Deployment Pipeline (.github/workflows/deploy.yml):
Security Pipeline (.github/workflows/security.yml):
Quality Assurance
The CI pipeline ensures:
Code Quality: Consistent formatting and linting
Type Safety: TypeScript and Python type checking
Test Coverage: Comprehensive test execution
Security: Automated vulnerability scanning
Performance: Build time and resource optimization
Advanced Usage
Environment Variables
Configure in .env.local:
DATABRICKS_HOST=https://your-workspace.cloud.databricks.com
DATABRICKS_TOKEN=your-token # For local development
DATABRICKS_SQL_WAREHOUSE_ID=your-warehouse-id # For SQL tools
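Server-side code can pick these values up with standard dotenv handling; a sketch assuming python-dotenv:

```python
import os
from dotenv import load_dotenv  # assumption: python-dotenv is installed

load_dotenv(".env.local")

DATABRICKS_HOST = os.environ["DATABRICKS_HOST"]            # required
DATABRICKS_TOKEN = os.getenv("DATABRICKS_TOKEN")           # local development only
WAREHOUSE_ID = os.getenv("DATABRICKS_SQL_WAREHOUSE_ID")    # used by the SQL tools
```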
Creating Complex Tools
Tools can access the full Databricks SDK:
```python
from databricks.sdk.service import jobs

@mcp_server.tool
def create_job(name: str, notebook_path: str, cluster_id: str) -> dict:
    """Create a Databricks job."""
    w = get_workspace_client()
    job = w.jobs.create(
        name=name,
        tasks=[
            jobs.Task(
                task_key="main",
                notebook_task=jobs.NotebookTask(notebook_path=notebook_path),
                existing_cluster_id=cluster_id,
            )
        ],
    )
    return {"job_id": job.job_id, "run_now_url": f"{DATABRICKS_HOST}/#job/{job.job_id}"}
```
Troubleshooting
Authentication errors: Run databricks auth login to refresh credentials
MCP not found: Ensure the app is deployed and accessible
Tool errors: Check logs at https://your-app.databricksapps.com/logz
MCP connection issues:
Check Claude logs: tail -f ~/Library/Logs/Claude/*.log
Verify the proxy works: uvx --from git+ssh://... dba-mcp-proxy --help
Test with echo pipe: echo "list your mcp commands" | claude
Cached version issues: If you get errors about missing arguments after an update:
# Clear uvx cache for this package
rm -rf ~/.cache/uv/git-v0/checkouts/*/
# Or clear entire uv cache
uv cache clean
Frontend build issues: Ensure Node.js dependencies are installed:
cd client && bun install
Contributing
Fork the repository
Add your prompts and tools
Test locally with ./watch.sh
Submit a pull request
License
See LICENSE.md