
run_job

Execute Databricks jobs by specifying job parameters, enabling automated workflow runs with configurable notebook inputs.

Instructions

Run a Databricks job with parameters: job_id (required), notebook_params (optional)

Input Schema

| Name   | Required | Description | Default |
|--------|----------|-------------|---------|
| params | Yes      |             |         |
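For illustration, the single `params` argument wraps the job ID and any notebook parameters. A sketch of what a caller might pass (the job ID and notebook parameter names below are hypothetical placeholders, not values from this page):

```python
# Hypothetical tool arguments; job_id and the notebook parameter
# names are placeholders for illustration only.
params = {
    "job_id": 1234,                 # required: ID of an existing Databricks job
    "notebook_params": {            # optional: passed through to the notebook task
        "input_date": "2024-01-01",
    },
}
```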

Implementation Reference

  • MCP tool handler for 'run_job', registered via the @self.tool decorator. It parses the input parameters and delegates to the core jobs.run_job function, returning JSON-formatted results or errors.

    ```python
    @self.tool(
        name="run_job",
        description="Run a Databricks job with parameters: job_id (required), notebook_params (optional)",
    )
    async def run_job(params: Dict[str, Any]) -> List[TextContent]:
        logger.info(f"Running job with params: {params}")
        try:
            notebook_params = params.get("notebook_params", {})
            result = await jobs.run_job(params.get("job_id"), notebook_params)
            return [{"text": json.dumps(result)}]
        except Exception as e:
            logger.error(f"Error running job: {str(e)}")
            return [{"text": json.dumps({"error": str(e)})}]
    ```
  • Core helper function implementing the Databricks job execution via an API call to /api/2.0/jobs/run-now with job_id and optional notebook_params.

    ```python
    async def run_job(job_id: int, notebook_params: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
        """
        Run a job now.

        Args:
            job_id: ID of the job to run
            notebook_params: Optional parameters for the notebook

        Returns:
            Response containing the run ID

        Raises:
            DatabricksAPIError: If the API request fails
        """
        logger.info(f"Running job: {job_id}")

        run_params = {"job_id": job_id}
        if notebook_params:
            run_params["notebook_params"] = notebook_params

        return make_api_request("POST", "/api/2.0/jobs/run-now", data=run_params)
    ```
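The `make_api_request` helper is referenced above but not shown on this page. A minimal stdlib-only sketch of what such a helper might look like; the environment variable names, auth scheme, and error handling here are assumptions, not details from the source (the real helper reportedly raises `DatabricksAPIError` on failure):

```python
import asyncio
import json
import os
import urllib.request

# Assumed configuration; the actual server may read credentials differently.
DATABRICKS_HOST = os.environ.get("DATABRICKS_HOST", "https://example.cloud.databricks.com")
DATABRICKS_TOKEN = os.environ.get("DATABRICKS_TOKEN", "")

def build_request(method, endpoint, data=None):
    """Join host + endpoint and attach a bearer-token header and JSON body."""
    url = DATABRICKS_HOST.rstrip("/") + endpoint
    body = json.dumps(data).encode() if data is not None else None
    return urllib.request.Request(
        url,
        data=body,
        method=method,
        headers={
            "Authorization": f"Bearer {DATABRICKS_TOKEN}",
            "Content-Type": "application/json",
        },
    )

async def make_api_request(method, endpoint, data=None):
    """Send the request off the event loop; a real helper would wrap
    HTTP errors in DatabricksAPIError instead of letting urllib raise."""
    req = build_request(method, endpoint, data)

    def _send():
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())

    return await asyncio.to_thread(_send)
```

On success, the Databricks /api/2.0/jobs/run-now endpoint responds with the ID of the triggered run, which the handler above serializes back to the caller.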

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/JustTryAI/databricks-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.