# Databricks MCP Server
## Server Configuration
Describes the environment variables required to run the server.
| Name | Required | Description | Default |
|---|---|---|---|
| No arguments | | | |
## Capabilities
Server capabilities have not been inspected yet.
### Tools
Functions exposed to the LLM to take actions.
| Name | Description |
|---|---|
| list_clusters | List all Databricks clusters |
| create_cluster | Create a new Databricks cluster with parameters: cluster_name (required), spark_version (required), node_type_id (required), num_workers, autotermination_minutes |
| terminate_cluster | Terminate a Databricks cluster with parameter: cluster_id (required) |
| get_cluster | Get information about a specific Databricks cluster with parameter: cluster_id (required) |
| start_cluster | Start a terminated Databricks cluster with parameter: cluster_id (required) |
| list_jobs | List all Databricks jobs |
| run_job | Run a Databricks job with parameters: job_id (required), notebook_params (optional) |
| list_notebooks | List notebooks in a workspace directory with parameter: path (required) |
| export_notebook | Export a notebook from the workspace with parameters: path (required), format (optional, one of: SOURCE, HTML, JUPYTER, DBC) |
| list_files | List files and directories in a DBFS path with parameter: dbfs_path (required) |
| execute_sql | Execute a SQL statement with parameters: statement (required), warehouse_id (required), catalog (optional), schema (optional) |
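As a sketch of how an MCP client might invoke these tools, the snippet below builds JSON-RPC `tools/call` payloads and checks the required parameters listed in the table. The `tool_call` helper and all literal values (cluster name, node type, warehouse ID) are illustrative assumptions, not part of this server.

```python
import json

# Required parameters per tool, taken from the table above
# (only a subset of tools is shown here).
REQUIRED = {
    "create_cluster": {"cluster_name", "spark_version", "node_type_id"},
    "execute_sql": {"statement", "warehouse_id"},
    "run_job": {"job_id"},
}

def tool_call(name, arguments, request_id=1):
    """Build an MCP tools/call request, rejecting calls that omit a
    required parameter. Helper name and payload shape are illustrative."""
    missing = REQUIRED.get(name, set()) - arguments.keys()
    if missing:
        raise ValueError(f"{name} missing required parameters: {sorted(missing)}")
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Example values are placeholders, not defaults provided by the server.
req = tool_call("create_cluster", {
    "cluster_name": "etl-cluster",
    "spark_version": "13.3.x-scala2.12",
    "node_type_id": "i3.xlarge",
    "num_workers": 2,               # optional
    "autotermination_minutes": 30,  # optional
})
print(json.dumps(req, indent=2))
```

Sending the resulting payload over stdio or another MCP transport is up to the client; the point here is only the parameter shape each tool expects.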
### Prompts
Interactive templates invoked by user choice.
| Name | Description |
|---|---|
| No prompts | |
### Resources
Contextual data attached to and managed by the client.
| Name | Description |
|---|---|
| No resources | |
## MCP directory API
We provide all the information about MCP servers via our MCP API.
```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/JustTryAI/databricks-mcp-server'
```
If you have feedback or need assistance with the MCP directory API, please join our Discord server.