databricks-mcp

by ChrisChoTW

Server Configuration

Describes the environment variables required to run the server.

No environment variables are required.

Capabilities

Features and capabilities supported by this server

Capability | Details
tools | {"listChanged": true}
prompts | {"listChanged": false}
resources | {"subscribe": false, "listChanged": false}
experimental | {}

Tools

Functions exposed to the LLM to take actions

databricks_query

Executes a Databricks SQL query (supports SELECT, SHOW, DESCRIBE, CREATE, ALTER). INSERT, UPDATE, DELETE, DROP, and other destructive operations are blocked.

Args:
- sql_query: SQL query statement (preferred parameter)
- sql: SQL query statement (fallback for backward compatibility)

Returns: Query results as a list of dicts.
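
The destructive-operation check could be sketched as below. This is a hypothetical illustration, not the server's actual implementation; the keyword lists simply mirror the tool description above.

```python
import re

# Keywords mirrored from the tool description; the real filter may
# be more sophisticated (e.g. parsing multi-statement input).
ALLOWED = {"SELECT", "SHOW", "DESCRIBE", "CREATE", "ALTER"}

def is_query_allowed(sql: str) -> bool:
    """Return True only if the statement starts with an allowed keyword."""
    # Drop leading whitespace and '--' line comments before inspecting.
    stripped = re.sub(r"^(\s*--[^\n]*\n|\s+)+", "", sql)
    words = stripped.split(None, 1)
    if not words:
        return False
    return words[0].upper() in ALLOWED
```

With this sketch, `is_query_allowed("DROP TABLE t")` is rejected while `is_query_allowed("SELECT * FROM t")` passes.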

list_catalogs

List all catalogs

list_schemas

List schemas in the specified catalog

list_tables

List tables in the specified schema

get_table_schema

Get table structure (DESCRIBE EXTENDED)

search_tables

Search tables by name (using information_schema)

get_table_history

View Delta table change history (DESCRIBE HISTORY)

get_table_detail

View Delta table details (DESCRIBE DETAIL)

get_grants

View object permissions (SHOW GRANTS)

Args:
- securable_type: Object type (TABLE, SCHEMA, CATALOG, VOLUME, etc.)
- full_name: Fully qualified object name (catalog.schema.table format)
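
These two arguments presumably combine into a single SHOW GRANTS statement; a minimal sketch of that construction (the server's actual query building is not shown in this listing):

```python
def show_grants_sql(securable_type: str, full_name: str) -> str:
    """Build a SHOW GRANTS statement from the tool's two arguments.

    Illustrative only -- the server may validate or quote differently.
    """
    return f"SHOW GRANTS ON {securable_type.upper()} {full_name}"
```

For example, `show_grants_sql("table", "main.sales.orders")` yields `SHOW GRANTS ON TABLE main.sales.orders`.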

list_volumes

List Unity Catalog Volumes

get_table_lineage

Get table lineage (upstream/downstream tables and related notebooks/jobs)

Args:
- catalog: Catalog name
- schema: Schema name
- table: Table name
- include_notebooks: Include notebook/job associations (slower)
- limit: Max rows to return (default 50)

Returns: Dict with upstream and downstream tables, and optionally notebook/job info.
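
A plausible shape for the returned dict, and a small helper for flattening it. The field names here are assumptions for illustration, not the server's documented response schema:

```python
# Illustrative get_table_lineage result (field names assumed).
lineage = {
    "upstream": [
        {"catalog": "raw", "schema": "sales", "table": "orders_bronze"},
    ],
    "downstream": [
        {"catalog": "gold", "schema": "sales", "table": "orders_daily"},
    ],
    # Present only when include_notebooks=True.
    "notebooks": [
        {"path": "/Repos/etl/orders"},
    ],
}

def upstream_names(result: dict) -> list[str]:
    """Flatten upstream entries into catalog.schema.table strings."""
    return [f"{t['catalog']}.{t['schema']}.{t['table']}" for t in result["upstream"]]
```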

list_jobs

List Jobs

get_job

Get job details

list_job_runs

List job run history

get_job_run

Get run details

list_pipelines

List Delta Live Tables Pipelines

get_pipeline

Get pipeline status

list_pipeline_updates

List pipeline update history

list_query_history

List SQL query history

Args:
- warehouse_id: (Optional) Filter by specific warehouse
- user_id: (Optional) Filter by specific user
- start_time: (Optional) Start time in local format "YYYY-MM-DD HH:MM:SS"
- end_time: (Optional) End time in local format "YYYY-MM-DD HH:MM:SS"
- limit: Number of results to return
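
The Databricks Query History REST API filters by millisecond epoch timestamps, so the local-format strings above presumably get converted along these lines (a sketch; whether this server converts exactly this way is an assumption):

```python
from datetime import datetime

def local_to_epoch_ms(ts: str) -> int:
    """Parse a local 'YYYY-MM-DD HH:MM:SS' timestamp into epoch milliseconds.

    astimezone() on a naive datetime attaches the machine's local timezone.
    """
    dt = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").astimezone()
    return int(dt.timestamp() * 1000)
```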

list_warehouses

List SQL Warehouses

list_clusters

List Clusters

list_workspace

List Workspace directory contents

get_cluster_metrics

Get cluster CPU/Memory/Network/Disk metrics

Data source: system.compute.node_timeline (one record per minute)

Args:
- cluster_id: Cluster ID
- start_time: Start time (ISO format); defaults to one hour ago
- end_time: End time (ISO format); defaults to now
- limit: Max number of records to return; default 60 (1 hour)

Returns: Metrics time series and summary statistics
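
Since node_timeline yields one record per minute, the summary statistics can be computed directly over the returned rows. A minimal sketch, with column names assumed for illustration:

```python
# Hypothetical per-minute rows from system.compute.node_timeline
# (column names assumed, not the table's documented schema).
rows = [
    {"start_time": "2024-05-01 10:00:00", "cpu_user_percent": 35.0},
    {"start_time": "2024-05-01 10:01:00", "cpu_user_percent": 55.0},
    {"start_time": "2024-05-01 10:02:00", "cpu_user_percent": 45.0},
]

def summarize(rows: list[dict], key: str) -> dict:
    """Reduce a metric time series to avg/max/min summary statistics."""
    vals = [r[key] for r in rows]
    return {"avg": sum(vals) / len(vals), "max": max(vals), "min": min(vals)}
```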

get_cluster_events

Get cluster event history (start, terminate, resize, errors, etc.)

Args:
- cluster_id: Cluster ID
- limit: Max number of records to return

Returns: Event list (time in local timezone)

get_run_task_metrics

Get job run task execution time details

Args:
- run_id: Job Run ID

Returns: Task setup/execute/cleanup times (time in local timezone)

Prompts

Interactive templates invoked by user choice

No prompts.

Resources

Contextual data attached and managed by the client

No resources.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ChrisChoTW/databricks-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.