Databricks MCP Server

by samhavens

create_job

Set up a new Databricks job that executes a notebook, using serverless compute by default, for scheduled or one-time data processing tasks.

Instructions

Create a new Databricks job to run a notebook (uses serverless by default)

Input Schema

Name             Required  Description  Default
job_name         Yes       (none)       -
notebook_path    Yes       (none)       -
timeout_seconds  No        (none)       3600
parameters       No        (none)       -
cluster_id       No        (none)       -
use_serverless   No        (none)       True

(Defaults are taken from the handler signature below; the schema itself ships no parameter descriptions.)
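
For reference, a minimal call supplies only the two required fields and everything else falls back to the defaults above. A sketch of the arguments an MCP client might send (the job name and notebook path are made-up examples):

    # Illustrative arguments for a create_job tool call. Only job_name and
    # notebook_path are required; the path shown here is hypothetical.
    arguments = {
        "job_name": "nightly-etl",
        "notebook_path": "/Workspace/Users/someone@example.com/nightly_etl",
        "timeout_seconds": 3600,          # default from the handler signature
        "parameters": {"env": "prod"},    # forwarded as notebook base_parameters
        "use_serverless": True,           # default; cluster_id may be omitted
    }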

Implementation Reference

  • MCP tool handler for create_job: constructs job configuration for a notebook task (serverless or existing cluster) and delegates to the jobs API wrapper. This is the primary entrypoint for the MCP 'create_job' tool.
    # Excerpt: the module also imports json, logging, and Optional, configures
    # `logger`, creates the FastMCP instance `mcp`, and imports the `jobs` wrapper.
    @mcp.tool()
    async def create_job(
        job_name: str,
        notebook_path: str,
        timeout_seconds: int = 3600,
        parameters: Optional[dict] = None,
        cluster_id: Optional[str] = None,
        use_serverless: bool = True
    ) -> str:
        """Create a new Databricks job to run a notebook (uses serverless by default)"""
        logger.info(f"Creating job: {job_name}")
        try:
            task_config = {
                "task_key": "main_task",
                "notebook_task": {
                    "notebook_path": notebook_path,
                    "base_parameters": parameters or {}
                },
                "timeout_seconds": timeout_seconds
            }
            
            # Configure compute: serverless vs cluster  
            if use_serverless:
                # For serverless compute, simply don't specify any cluster configuration
                # Databricks will automatically use serverless compute
                pass
            elif cluster_id:
                task_config["existing_cluster_id"] = cluster_id
            else:
                raise ValueError("Must specify either use_serverless=True or provide cluster_id")
                
            job_config = {
                "name": job_name,
                "tasks": [task_config],
                "format": "MULTI_TASK"
            }
            
            result = await jobs.create_job(job_config)
            return json.dumps(result)
        except Exception as e:
            logger.error(f"Error creating job: {str(e)}")
            return json.dumps({"error": str(e)})
  • Underlying helper function that performs the actual Databricks API call to create a job via POST /api/2.0/jobs/create. Called by the MCP handler.
    # Excerpt: Dict and Any come from typing; make_api_request is the module's
    # HTTP helper (sketched below).
    async def create_job(job_config: Dict[str, Any]) -> Dict[str, Any]:
        """
        Create a new Databricks job.
        
        Args:
            job_config: Job configuration
            
        Returns:
            Response containing the job ID
            
        Raises:
            DatabricksAPIError: If the API request fails
        """
        logger.info("Creating new job")
        return make_api_request("POST", "/api/2.0/jobs/create", data=job_config)
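  • make_api_request itself is not shown on this page. Since the async create_job returns its result without awaiting it, the wrapper is presumably synchronous. A minimal sketch of what it could look like, assuming httpx for HTTP and DATABRICKS_HOST/DATABRICKS_TOKEN environment variables for auth (the error class and this style are assumptions, not the repository's actual code):

    import os
    from typing import Any, Dict, Optional

    import httpx


    class DatabricksAPIError(Exception):
        """Raised when the Databricks REST API returns a non-2xx response."""


    def make_api_request(
        method: str, endpoint: str, data: Optional[Dict[str, Any]] = None
    ) -> Dict[str, Any]:
        # Hypothetical wrapper: workspace URL and PAT come from the environment.
        host = os.environ["DATABRICKS_HOST"].rstrip("/")
        token = os.environ["DATABRICKS_TOKEN"]
        response = httpx.request(
            method,
            f"{host}{endpoint}",
            json=data,
            headers={"Authorization": f"Bearer {token}"},
            timeout=30.0,
        )
        if response.status_code >= 400:
            raise DatabricksAPIError(f"{response.status_code}: {response.text}")
        return response.json()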
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions the default serverless behavior, which adds some context, but it doesn't disclose critical behavioral traits such as whether this is a mutating operation, what permissions are required, if there are rate limits, or what the output looks like. For a creation tool with zero annotation coverage, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
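
For comparison, the MCP Python SDK can attach behavioral hints to a tool alongside its description. A sketch of how this handler could surface them, assuming an SDK version whose @mcp.tool() decorator accepts an annotations argument (the hint values reflect the behavior shown in the code above):

    from mcp.server.fastmcp import FastMCP
    from mcp.types import ToolAnnotations

    mcp = FastMCP("databricks")

    @mcp.tool(
        annotations=ToolAnnotations(
            readOnlyHint=False,     # mutating: registers a job in the workspace
            destructiveHint=False,  # additive; deletes or overwrites nothing
            idempotentHint=False,   # calling twice creates two distinct jobs
            openWorldHint=True,     # reaches an external Databricks API
        )
    )
    async def create_job(job_name: str, notebook_path: str) -> str:
        """Create a new Databricks job to run a notebook (uses serverless by default)"""
        ...  # body unchanged from the handler above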

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that is front-loaded with the core purpose and includes an important behavioral detail (serverless default). There is no wasted text, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a job creation tool with 6 parameters, no annotations, and no output schema, the description is incomplete. It lacks details on parameter meanings, behavioral implications (e.g., mutation effects, error handling), and expected outputs, leaving significant gaps for an AI agent to understand and use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
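
A fuller description could close most of these gaps in a few sentences. One possible rewrite, grounded in the handler's actual behavior (suggested wording, not from the repository):

    # One possible fuller description for the tool:
    DESCRIPTION = (
        "Create (but do not run) a Databricks job that executes a notebook. "
        "Mutating: registers a new job in the workspace on every call. "
        "Uses serverless compute unless use_serverless=False and cluster_id is given. "
        "Returns the Jobs API response as a JSON string containing the new job_id, "
        'or {"error": ...} on failure. To execute an existing job, use run_job.'
    )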

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate for undocumented parameters. It only mentions 'serverless by default', which relates to the 'use_serverless' parameter, but it doesn't explain the semantics of other parameters like 'job_name', 'notebook_path', 'timeout_seconds', 'parameters', or 'cluster_id'. This fails to add sufficient meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
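
One common way to raise schema coverage without bloating the top-level description is to annotate each parameter with pydantic Field metadata, which FastMCP folds into the generated input schema. A sketch against the same signature (the descriptions are suggestions):

    from typing import Annotated, Optional

    from mcp.server.fastmcp import FastMCP
    from pydantic import Field

    mcp = FastMCP("databricks")

    # Hypothetical per-parameter docs for the handler shown above.
    @mcp.tool()
    async def create_job(
        job_name: Annotated[str, Field(description="Display name for the new job")],
        notebook_path: Annotated[str, Field(
            description="Workspace path of the notebook to run")],
        timeout_seconds: Annotated[int, Field(
            description="Per-run timeout in seconds; 0 disables the timeout")] = 3600,
        parameters: Annotated[Optional[dict], Field(
            description="base_parameters made available to the notebook")] = None,
        cluster_id: Annotated[Optional[str], Field(
            description="Existing cluster to run on; required if use_serverless=False")] = None,
        use_serverless: Annotated[bool, Field(
            description="Run on serverless compute; when True, cluster_id is ignored")] = True,
    ) -> str:
        ...  # body unchanged from the handler above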

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Create a new Databricks job') and the resource ('to run a notebook'), which is specific and actionable. It distinguishes from siblings like 'run_job' by focusing on creation rather than execution, though it doesn't explicitly contrast with all siblings like 'create_cluster' or 'create_notebook'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for creating jobs to run notebooks, with a default behavior ('uses serverless by default'), but it doesn't provide explicit guidance on when to use this tool versus alternatives like 'run_job' or 'create_cluster'. It offers some context but lacks clear exclusions or named alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
