create_cluster
Creates a new Databricks cluster with the given name, Spark version, node type, and worker count, providing compute for data processing workloads.
Instructions
Create a new Databricks cluster
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| cluster_name | Yes | Name of the cluster to create | |
| node_type_id | Yes | Node type for the cluster's nodes | |
| num_workers | No | Number of worker nodes | 1 |
| spark_version | Yes | Spark runtime version for the cluster | |
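To illustrate the schema, a hedged sketch of a tool invocation: the argument values below are made up, and the expansion into the request body mirrors what the handler does (it adds `enable_elastic_disk` itself).

```python
# Example arguments for the create_cluster tool (values are illustrative,
# not tied to any real workspace).
args = {
    "cluster_name": "analytics-cluster",   # required
    "spark_version": "13.3.x-scala2.12",   # required
    "node_type_id": "i3.xlarge",           # required
    "num_workers": 2,                      # optional, defaults to 1
}

# The handler expands these into the request body sent to the Databricks API:
cluster_config = {**args, "enable_elastic_disk": True}
print(cluster_config["num_workers"])  # → 2
```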
Implementation Reference
- MCP tool handler for `create_cluster`. Decorated with `@mcp.tool()`, which registers the tool under this name. Handles parameters, builds the cluster configuration, calls the core API helper, and returns JSON.

```python
@mcp.tool()
async def create_cluster(
    cluster_name: str,
    spark_version: str,
    node_type_id: str,
    num_workers: int = 1
) -> str:
    """Create a new Databricks cluster"""
    logger.info(f"Creating cluster: {cluster_name}")
    try:
        cluster_config = {
            "cluster_name": cluster_name,
            "spark_version": spark_version,
            "node_type_id": node_type_id,
            "num_workers": num_workers,
            "enable_elastic_disk": True
        }
        result = await clusters.create_cluster(cluster_config)
        return json.dumps(result)
    except Exception as e:
        logger.error(f"Error creating cluster: {str(e)}")
        return json.dumps({"error": str(e)})
```
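Because the handler reports failures by returning a JSON object with an `error` key rather than raising, a caller should check for that key. A minimal sketch (`parse_tool_result` is a hypothetical helper, not part of the source):

```python
import json

def parse_tool_result(result: str) -> dict:
    """Parse the JSON string returned by the create_cluster tool.

    Raises RuntimeError if the tool reported an error in its JSON
    envelope (hypothetical helper for illustration).
    """
    payload = json.loads(result)
    if "error" in payload:
        raise RuntimeError(payload["error"])
    return payload

# A successful call returns the Databricks response serialized as JSON:
ok = parse_tool_result('{"cluster_id": "1234-567890-abc123"}')
print(ok["cluster_id"])
```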
- src/server/simple_databricks_mcp_server.py:40-40 (registration) — the `@mcp.tool()` decorator registers this function as the `create_cluster` tool in the FastMCP server.

```python
@mcp.tool()
```
- src/api/clusters.py:14-29 (helper) — core helper that performs the actual Databricks API call to create a cluster; called by the MCP tool handler.

```python
async def create_cluster(cluster_config: Dict[str, Any]) -> Dict[str, Any]:
    """
    Create a new Databricks cluster.

    Args:
        cluster_config: Cluster configuration

    Returns:
        Response containing the cluster ID

    Raises:
        DatabricksAPIError: If the API request fails
    """
    logger.info("Creating new cluster")
    return make_api_request("POST", "/api/2.0/clusters/create", data=cluster_config)
```
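The source does not show `make_api_request` itself. As an assumption about its typical shape, Databricks REST helpers usually assemble the target URL and a bearer-token header from environment variables before issuing the request; a sketch of that assembly step only (names and env vars are assumed, not from the source):

```python
import os

def build_request(method: str, endpoint: str) -> tuple:
    """Illustrative sketch of what a make_api_request-style helper might
    assemble for the Databricks REST API (assumed, not from source).

    Host and token are read from environment variables; the fallbacks
    here are placeholders for demonstration.
    """
    host = os.environ.get("DATABRICKS_HOST", "https://example.cloud.databricks.com")
    token = os.environ.get("DATABRICKS_TOKEN", "dapi-example")
    url = host.rstrip("/") + endpoint
    headers = {"Authorization": f"Bearer {token}"}
    return method, url, headers

method, url, headers = build_request("POST", "/api/2.0/clusters/create")
```

The actual helper presumably also performs the HTTP call and raises `DatabricksAPIError` on failure, per the docstring above.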