
Databricks MCP Server

by samhavens

create_job

Set up a new Databricks job to execute a notebook, using serverless compute by default for running scheduled or one-time data processing tasks.

Instructions

Create a new Databricks job to run a notebook (uses serverless by default)

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| job_name | Yes | Name for the new job | |
| notebook_path | Yes | Workspace path of the notebook to run | |
| timeout_seconds | No | Task timeout in seconds | 3600 |
| parameters | No | Base parameters passed to the notebook | |
| cluster_id | No | Existing cluster to run on (when not using serverless) | |
| use_serverless | No | Use serverless compute; no cluster configuration is sent | true |
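A typical invocation might pass arguments like the following (all values here are illustrative, not from the source):

```python
# Example arguments for the create_job tool; path and values are hypothetical.
args = {
    "job_name": "nightly-etl",
    "notebook_path": "/Workspace/Users/me@example.com/etl",
    "timeout_seconds": 3600,
    "parameters": {"run_date": "2024-01-01"},
    "use_serverless": True,
}
```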

Implementation Reference

  • MCP tool handler for create_job: constructs job configuration for a notebook task (serverless or existing cluster) and delegates to the jobs API wrapper. This is the primary entrypoint for the MCP 'create_job' tool.
    @mcp.tool()
    async def create_job(
        job_name: str,
        notebook_path: str,
        timeout_seconds: int = 3600,
        parameters: Optional[dict] = None,
        cluster_id: Optional[str] = None,
        use_serverless: bool = True
    ) -> str:
        """Create a new Databricks job to run a notebook (uses serverless by default)"""
        logger.info(f"Creating job: {job_name}")
        try:
            task_config = {
                "task_key": "main_task",
                "notebook_task": {
                    "notebook_path": notebook_path,
                    "base_parameters": parameters or {}
                },
                "timeout_seconds": timeout_seconds
            }
            
            # Configure compute: serverless vs cluster  
            if use_serverless:
                # For serverless compute, simply don't specify any cluster configuration
                # Databricks will automatically use serverless compute
                pass
            elif cluster_id:
                task_config["existing_cluster_id"] = cluster_id
            else:
                raise ValueError("Must specify either use_serverless=True or provide cluster_id")
                
            job_config = {
                "name": job_name,
                "tasks": [task_config],
                "format": "MULTI_TASK"
            }
            
            result = await jobs.create_job(job_config)
            return json.dumps(result)
        except Exception as e:
            logger.error(f"Error creating job: {str(e)}")
            return json.dumps({"error": str(e)})
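    The compute branch above produces one of two payload shapes: a serverless task that omits cluster fields entirely, or a task pinned to an existing cluster. This standalone sketch (hypothetical helper name, optional parameters trimmed) mirrors that logic:

    ```python
    from typing import Any, Dict, Optional

    def build_job_config(job_name: str, notebook_path: str,
                         cluster_id: Optional[str] = None,
                         use_serverless: bool = True) -> Dict[str, Any]:
        # Mirrors the handler: serverless jobs simply omit any cluster fields,
        # and Databricks falls back to serverless compute automatically.
        task: Dict[str, Any] = {
            "task_key": "main_task",
            "notebook_task": {"notebook_path": notebook_path, "base_parameters": {}},
            "timeout_seconds": 3600,
        }
        if not use_serverless:
            if cluster_id is None:
                raise ValueError("Must specify either use_serverless=True or provide cluster_id")
            task["existing_cluster_id"] = cluster_id
        return {"name": job_name, "tasks": [task], "format": "MULTI_TASK"}
    ```

    Keeping the serverless branch as "add nothing" means the same payload builder serves both modes; only the presence of `existing_cluster_id` distinguishes them.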
  • Underlying helper function that performs the actual Databricks API call to create a job via POST /api/2.0/jobs/create. Called by the MCP handler.
    async def create_job(job_config: Dict[str, Any]) -> Dict[str, Any]:
        """
        Create a new Databricks job.
        
        Args:
            job_config: Job configuration
            
        Returns:
            Response containing the job ID
            
        Raises:
            DatabricksAPIError: If the API request fails
        """
        logger.info("Creating new job")
        return make_api_request("POST", "/api/2.0/jobs/create", data=job_config)
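    The `make_api_request` helper itself is not shown in the excerpt. A minimal sketch of what such a helper might look like, assuming personal-access-token auth supplied via `DATABRICKS_HOST` and `DATABRICKS_TOKEN` environment variables (the real helper may differ in name and behavior):

    ```python
    # Hypothetical sketch only; the actual make_api_request is not in the excerpt.
    import json
    import os
    import urllib.request
    from typing import Any, Dict, Optional

    def _build_request(method: str, endpoint: str,
                       data: Optional[Dict[str, Any]] = None) -> urllib.request.Request:
        """Construct an authenticated request against the workspace URL."""
        host = os.environ["DATABRICKS_HOST"].rstrip("/")
        token = os.environ["DATABRICKS_TOKEN"]
        body = json.dumps(data).encode() if data is not None else None
        return urllib.request.Request(
            host + endpoint,
            data=body,
            method=method,
            headers={
                "Authorization": f"Bearer {token}",
                "Content-Type": "application/json",
            },
        )

    def make_api_request(method: str, endpoint: str,
                         data: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
        """Send the request and decode the JSON response; raises on HTTP errors."""
        with urllib.request.urlopen(_build_request(method, endpoint, data)) as resp:
            return json.loads(resp.read())
    ```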

MCP directory API

We provide all the information about MCP servers via our MCP API.

    curl -X GET 'https://glama.ai/api/mcp/v1/servers/samhavens/databricks-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.