
get_notebook_execution_details

Retrieve execution metadata for notebook runs, including timing, resource usage, and execution state, to monitor performance and verify resource allocation.

Instructions

Get detailed execution information for a notebook run by job instance ID.

Retrieves execution metadata from the Fabric Notebook Livy Sessions API, which provides detailed timing, resource usage, and execution state information.

Use this tool when:

  • You want to check the status and timing of a completed notebook run

  • You need to verify resource allocation for a notebook execution

  • You want to analyze execution performance (queue time, run time)

Note: This method returns execution metadata (timing, state, resource usage). Cell-level outputs are only available for active sessions. Once a notebook job completes, individual cell outputs cannot be retrieved via the REST API. To capture cell outputs, use mssparkutils.notebook.exit() in your notebook and access the exitValue through Data Pipeline activities.
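The exit-value pattern described above can be sketched as follows. This is a hypothetical illustration: it assumes the standard Fabric notebook runtime where `mssparkutils` is preloaded, and the field names in the payload are invented for the example:

```python
import json

# Inside the notebook: collect the cell outputs you want to surface
# and serialize them into a single string (the exitValue is a string).
results = {"rows_processed": 1250, "output_table": "sales_clean"}
exit_value = json.dumps(results)

# In a Fabric notebook runtime you would then call:
# mssparkutils.notebook.exit(exit_value)

# The caller (e.g. a Data Pipeline Notebook activity) reads the
# exitValue back and parses it:
parsed = json.loads(exit_value)
print(parsed["rows_processed"])  # 1250
```

Serializing to JSON keeps the exit value a single string while still letting the caller recover structured results.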

Parameters:

  • workspace_name: The display name of the workspace containing the notebook.

  • notebook_name: Name of the notebook.

  • job_instance_id: The job instance ID from the execute_notebook or run_on_demand_job result.

Returns: Dictionary with:

  • status: "success" or "error"

  • message: Description of the result

  • session: Full Livy session details (state, timing, resources)

  • execution_summary: Summarized execution information, including:

      • state: Execution state (Success, Failed, Cancelled, etc.)

      • spark_application_id: Spark application identifier

      • queued_duration_seconds: Time spent in queue

      • running_duration_seconds: Actual execution time

      • total_duration_seconds: Total end-to-end time

      • driver_memory, driver_cores, executor_memory, etc.
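The summary fields make it straightforward to separate queue time from compute time. A minimal sketch, assuming the dictionary shape described above (field names are taken from this doc, not verified against a live API), that flags runs which spent most of their time queued:

```python
def summarize_run(details: dict) -> str:
    """Build a one-line report from a get_notebook_execution_details result.

    Assumes the result shape documented above; `details` is the
    dictionary returned by the tool.
    """
    if details.get("status") != "success":
        return f"error: {details.get('message', 'unknown')}"
    s = details["execution_summary"]
    queued = s["queued_duration_seconds"]
    total = s["total_duration_seconds"]
    # Flag runs where more than half the end-to-end time was queueing.
    note = " (mostly queued!)" if total and queued / total > 0.5 else ""
    return f"{s['state']}: {total}s total, {queued}s queued{note}"

# Example with a mocked response:
mock = {
    "status": "success",
    "execution_summary": {
        "state": "Success",
        "queued_duration_seconds": 40,
        "running_duration_seconds": 20,
        "total_duration_seconds": 60,
    },
}
print(summarize_run(mock))  # Success: 60s total, 40s queued (mostly queued!)
```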

Example:

```python
# After executing a notebook
exec_result = run_on_demand_job(
    workspace_name="Analytics",
    item_name="ETL_Pipeline",
    item_type="Notebook",
    job_type="RunNotebook",
)

# Get detailed execution information
details = get_notebook_execution_details(
    workspace_name="Analytics",
    notebook_name="ETL_Pipeline",
    job_instance_id=exec_result["job_instance_id"],
)

if details["status"] == "success":
    summary = details["execution_summary"]
    print(f"State: {summary['state']}")
    print(f"Duration: {summary['total_duration_seconds']}s")
    print(f"Spark App ID: {summary['spark_application_id']}")
```

Input Schema

| Name            | Required | Description | Default |
| --------------- | -------- | ----------- | ------- |
| workspace_name  | Yes      |             |         |
| notebook_name   | Yes      |             |         |
| job_instance_id | Yes      |             |         |
