# databricks-mcp

## Server Configuration

Describes the environment variables required to run the server.
| Name | Required | Description | Default |
|---|---|---|---|
| No arguments | | | |
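Since the server takes no arguments, a client entry only needs a way to launch it. Below is a minimal sketch of an MCP client configuration (a Claude Desktop-style `mcpServers` block); the `command` and `args` values are assumptions for illustration and are not taken from this listing:

```json
{
  "mcpServers": {
    "databricks-mcp": {
      "command": "uvx",
      "args": ["databricks-mcp"]
    }
  }
}
```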
## Capabilities

Features and capabilities supported by this server.
| Capability | Details |
|---|---|
| tools | `{"listChanged": true}` |
| prompts | `{"listChanged": false}` |
| resources | `{"subscribe": false, "listChanged": false}` |
| experimental | `{}` |
## Tools

Functions exposed to the LLM to take actions.
| Name | Description |
|---|---|
| `databricks_query` | Execute a Databricks SQL query (supports SELECT, SHOW, DESCRIBE, CREATE, ALTER; INSERT, UPDATE, DELETE, DROP, and other destructive operations are blocked). Args: `sql_query`: SQL statement (preferred parameter); `sql`: SQL statement (fallback for backward compatibility). Returns: query results as a list of dicts. |
| `list_catalogs` | List all catalogs. |
| `list_schemas` | List schemas in the specified catalog. |
| `list_tables` | List tables in the specified schema. |
| `get_table_schema` | Get table structure (DESCRIBE EXTENDED). |
| `search_tables` | Search tables by name (using information_schema). |
| `get_table_history` | View Delta table change history (DESCRIBE HISTORY). |
| `get_table_detail` | View Delta table details (DESCRIBE DETAIL). |
| `get_grants` | View object permissions (SHOW GRANTS). Args: `securable_type`: object type (TABLE, SCHEMA, CATALOG, VOLUME, etc.); `full_name`: full object name (`catalog.schema.table` format). |
| `list_volumes` | List Unity Catalog Volumes. |
| `get_table_lineage` | Get table lineage (upstream/downstream tables and related notebooks/jobs). Args: `catalog`: catalog name; `schema`: schema name; `table`: table name; `include_notebooks`: include notebook/job associations (slower); `limit`: max rows to return (default 50). Returns: dict with upstream and downstream tables and, optionally, notebook/job info. |
| `list_jobs` | List Jobs. |
| `get_job` | Get job details. |
| `list_job_runs` | List job run history. |
| `get_job_run` | Get run details. |
| `list_pipelines` | List Delta Live Tables pipelines. |
| `get_pipeline` | Get pipeline status. |
| `list_pipeline_updates` | List pipeline update history. |
| `list_query_history` | List SQL query history. Args: `warehouse_id` (optional): filter by a specific warehouse; `user_id` (optional): filter by a specific user; `start_time` (optional): start time in local format `"YYYY-MM-DD HH:MM:SS"`; `end_time` (optional): end time in the same format; `limit`: number of results to return. |
| `list_warehouses` | List SQL Warehouses. |
| `list_clusters` | List clusters. |
| `list_workspace` | List workspace directory contents. |
| `get_cluster_metrics` | Get cluster CPU/memory/network/disk metrics. Data source: `system.compute.node_timeline` (one record per minute). Args: `cluster_id`: cluster ID; `start_time`: start time (ISO format), defaults to 1 hour ago; `end_time`: end time (ISO format), defaults to now; `limit`: max number of records to return, default 60 (1 hour). Returns: metrics time series and summary statistics. |
| `get_cluster_events` | Get cluster event history (start, terminate, resize, errors, etc.). Args: `cluster_id`: cluster ID; `limit`: max number of records to return. Returns: event list (times in local timezone). |
| `get_run_task_metrics` | Get task execution time details for a job run. Args: `run_id`: job run ID. Returns: task setup/execute/cleanup times (times in local timezone). |
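Tools like `databricks_query` are invoked through MCP's JSON-RPC `tools/call` method. The sketch below builds such a request as a plain payload (field names follow the MCP specification; the SQL text and helper name are illustrative):

```python
import json


def build_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP ``tools/call`` JSON-RPC 2.0 request as a JSON string."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })


# Example: run a read-only query through databricks_query.
request = build_tool_call(
    1,
    "databricks_query",
    {"sql_query": "SELECT current_catalog(), current_schema()"},
)
print(request)
```

In practice an MCP client library (or the host application, such as an IDE or chat client) constructs and sends this message for you; the payload is shown only to make the tool-call shape concrete.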
## Prompts

Interactive templates invoked by user choice.

| Name | Description |
|---|---|
| No prompts | |
## Resources

Contextual data attached and managed by the client.

| Name | Description |
|---|---|
| No resources | |
## MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/ChrisChoTW/databricks-mcp'
```
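The same endpoint can be reached from Python. This sketch only constructs the request URL (the fetch itself is shown commented out, since it needs network access); the helper name is illustrative:

```python
import urllib.request

GLAMA_API = "https://glama.ai/api/mcp/v1"


def server_url(owner: str, name: str) -> str:
    """Return the MCP directory API URL for a given server."""
    return f"{GLAMA_API}/servers/{owner}/{name}"


url = server_url("ChrisChoTW", "databricks-mcp")
# Equivalent to the curl call above:
# with urllib.request.urlopen(urllib.request.Request(url)) as resp:
#     payload = resp.read()
print(url)
```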
If you have feedback or need assistance with the MCP directory API, please join our Discord server.