run_job
Execute a Databricks job by its job ID, optionally passing notebook parameters to the run.
Instructions
Run a Databricks job with parameters: job_id (required), notebook_params (optional)
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
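For illustration, a call to this tool might supply a `params` object like the following (the job ID and notebook parameter names here are made up):

```json
{
  "job_id": 123,
  "notebook_params": {
    "run_date": "2025-01-01"
  }
}
```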
Implementation Reference
- MCP tool handler function that executes the run_job tool by calling the underlying jobs API with parsed parameters.

  ```python
  async def run_job(params: Dict[str, Any]) -> List[TextContent]:
      logger.info(f"Running job with params: {params}")
      try:
          notebook_params = params.get("notebook_params", {})
          result = await jobs.run_job(params.get("job_id"), notebook_params)
          return [{"text": json.dumps(result)}]
      except Exception as e:
          logger.error(f"Error running job: {str(e)}")
          return [{"text": json.dumps({"error": str(e)})}]
  ```
- src/server/databricks_mcp_server.py:129-132 (registration)

  Registration of the run_job tool using the `@self.tool` decorator, with the name and description defining the input parameters.

  ```python
  @self.tool(
      name="run_job",
      description="Run a Databricks job with parameters: job_id (required), notebook_params (optional)",
  )
  ```
- src/api/jobs.py:31-51 (helper)

  Core helper function implementing the Databricks Jobs API call to run a job immediately.

  ```python
  async def run_job(job_id: int, notebook_params: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
      """
      Run a job now.

      Args:
          job_id: ID of the job to run
          notebook_params: Optional parameters for the notebook

      Returns:
          Response containing the run ID

      Raises:
          DatabricksAPIError: If the API request fails
      """
      logger.info(f"Running job: {job_id}")

      run_params = {"job_id": job_id}
      if notebook_params:
          run_params["notebook_params"] = notebook_params

      return make_api_request("POST", "/api/2.0/jobs/run-now", data=run_params)
  ```
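The helper above only attaches `notebook_params` to the request body when it is non-empty. A minimal, self-contained sketch of that payload construction (`build_run_now_payload` is a hypothetical name, not part of the repository):

```python
from typing import Any, Dict, Optional


def build_run_now_payload(
    job_id: int, notebook_params: Optional[Dict[str, Any]] = None
) -> Dict[str, Any]:
    """Assemble the JSON body for POST /api/2.0/jobs/run-now."""
    payload: Dict[str, Any] = {"job_id": job_id}
    if notebook_params:
        # An empty dict is omitted, matching the helper's truthiness check.
        payload["notebook_params"] = notebook_params
    return payload
```

Note that passing `notebook_params={}` produces the same body as omitting the argument, which is why the MCP handler can safely default missing parameters to an empty dict.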