Databricks MCP Server

Server Configuration

Describes the environment variables required to run the server.

No arguments

Schema

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources

Tools

Functions exposed to the LLM to take actions

list_clusters

List all Databricks clusters

create_cluster

Create a new Databricks cluster with parameters: cluster_name (required), spark_version (required), node_type_id (required), num_workers (optional), autotermination_minutes (optional)
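
As a rough sketch, the arguments for a create_cluster call could look like the dictionary below. The spark_version and node_type_id values are illustrative placeholders, not defaults documented by this server.

create_cluster_args = {
    "cluster_name": "mcp-demo-cluster",   # required
    "spark_version": "14.3.x-scala2.12",  # required; placeholder runtime version
    "node_type_id": "i3.xlarge",          # required; placeholder node type
    "num_workers": 2,                     # optional
    "autotermination_minutes": 60,        # optional
}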

terminate_cluster

Terminate a Databricks cluster with parameter: cluster_id (required)

get_cluster

Get information about a specific Databricks cluster with parameter: cluster_id (required)

start_cluster

Start a terminated Databricks cluster with parameter: cluster_id (required)

list_jobs

List all Databricks jobs

run_job

Run a Databricks job with parameters: job_id (required), notebook_params (optional)
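
A sketch of run_job arguments, assuming the target job runs a notebook; the keys inside notebook_params are hypothetical and depend on the widgets that notebook defines.

run_job_args = {
    "job_id": 123456789,        # required; a job ID returned by list_jobs
    "notebook_params": {        # optional; forwarded to the notebook's widgets
        "input_date": "2024-01-01",
        "env": "dev",
    },
}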

list_notebooks

List notebooks in a workspace directory with parameter: path (required)

export_notebook

Export a notebook from the workspace with parameters: path (required), format (optional, one of: SOURCE, HTML, JUPYTER, DBC)
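
For example, exporting a notebook in Jupyter format might use arguments like these; the workspace path is a placeholder.

export_notebook_args = {
    "path": "/Users/someone@example.com/my-notebook",  # required; placeholder path
    "format": "JUPYTER",  # optional; one of SOURCE, HTML, JUPYTER, DBC
}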

list_files

List files and directories in a DBFS path with parameter: dbfs_path (required)

execute_sql

Execute a SQL statement with parameters: statement (required), warehouse_id (required), catalog (optional), schema (optional)
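
Putting it together, below is a minimal sketch of calling these tools from the official MCP Python SDK over stdio. The launch command ("databricks-mcp-server") and the warehouse ID are placeholders that depend on how the server is installed and configured.

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Placeholder launch command; adjust to however the server is started locally.
    server = StdioServerParameters(command="databricks-mcp-server")
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover the tools listed above.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            # Run a SQL statement; the warehouse ID is a placeholder.
            result = await session.call_tool(
                "execute_sql",
                {"statement": "SELECT 1", "warehouse_id": "<your-warehouse-id>"},
            )
            print(result.content)

asyncio.run(main())

The other tools follow the same call_tool pattern, differing only in the argument dictionaries sketched above.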

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/JustTryAI/databricks-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.