livy_run_statement
Execute PySpark, Scala, or SparkR code in an existing Livy session to process data in Microsoft Fabric. Supports immediate or asynchronous execution with result tracking.
Instructions
Execute code in a Livy session.
Executes PySpark, Scala, or SparkR code in an existing Livy session. The session must be in 'idle' state to accept new statements.
Important Notes:
- Use `df.show()` or `df.printSchema()` to inspect DataFrames before accessing columns
- In Fabric, `SHOW TABLES` returns a `namespace` column, not `database`
- Avoid direct `Row` attribute access without verifying the schema first
- When `with_wait=False`, the call returns immediately with a statement ID; check the status separately
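The inspection-first pattern from the notes above can be sketched as the PySpark snippet you would pass in the `code` parameter. This is an illustrative sketch, not server output; the list-comprehension column names follow standard Spark `SHOW TABLES` output (`namespace`, `tableName`).

```python
# Hypothetical PySpark snippet to submit as the `code` argument:
# inspect the schema first, then access columns by verified name.
inspect_code = "\n".join([
    "df = spark.sql('SHOW TABLES')",
    "df.printSchema()  # confirm column names before accessing them",
    "df.show()",
    "# Fabric exposes a 'namespace' column rather than 'database':",
    "tables = [(row['namespace'], row['tableName']) for row in df.collect()]",
])
```

Submitting `inspect_code` with `kind="pyspark"` surfaces the schema in the statement output before any column access can fail.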
Parameters:
- `workspace_id`: Fabric workspace ID.
- `lakehouse_id`: Fabric lakehouse ID.
- `session_id`: Livy session ID (the session must be in 'idle' state).
- `code`: Code to execute (PySpark, Scala, or SparkR).
- `kind`: Statement kind: 'pyspark' (default), 'scala', or 'sparkr'.
- `with_wait`: If True (default), wait for statement completion before returning.
- `timeout_seconds`: Maximum time to wait for statement completion (default: from config).
Returns: Dictionary with statement details including id, state, output, and execution details.
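A hedged sketch of unpacking that dictionary: the field names below follow the standard Livy statement-output shape (`output.status`, `output.data['text/plain']`, and `ename`/`evalue` on error), which is an assumption about how this tool surfaces the response; verify against an actual result.

```python
def statement_text(result: dict) -> str:
    """Extract plain-text output from a statement result dict.

    Assumes the standard Livy output layout; adjust if the
    tool reshapes the response.
    """
    output = result.get("output") or {}
    if output.get("status") == "error":
        # ename/evalue are the Livy error-detail fields
        raise RuntimeError(f"{output.get('ename')}: {output.get('evalue')}")
    return (output.get("data") or {}).get("text/plain", "")
```

Checking `output["status"]` before reading the data avoids silently treating a failed statement as an empty result.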
Example:

```python
# Execute PySpark code
result = livy_run_statement(
    workspace_id="12345678-1234-1234-1234-123456789abc",
    lakehouse_id="87654321-4321-4321-4321-210987654321",
    session_id="0",
    code="df = spark.range(10)\ndf.count()",
    kind="pyspark",
    with_wait=True,
)
```
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| workspace_id | Yes | Fabric workspace ID | |
| lakehouse_id | Yes | Fabric lakehouse ID | |
| session_id | Yes | Livy session ID (must be in 'idle' state) | |
| code | Yes | Code to execute (PySpark, Scala, or SparkR) | |
| kind | No | Statement kind: 'pyspark', 'scala', or 'sparkr' | pyspark |
| with_wait | No | Wait for statement completion before returning | True |
| timeout_seconds | No | Maximum time to wait for completion | from config |
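For asynchronous use (`with_wait=False`), a minimal polling sketch: since this document does not name the companion status tool, the status fetch is a caller-supplied function rather than a specific API call.

```python
import time

def wait_for_statement(get_status, timeout_seconds=300, poll_seconds=5):
    """Poll until a statement leaves the 'waiting'/'running' states.

    `get_status` is a caller-supplied zero-argument callable that fetches
    the statement dict (via whatever status tool your server exposes;
    none is assumed here) and returns it with a 'state' key.
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        result = get_status()
        if result["state"] not in ("waiting", "running"):
            return result
        time.sleep(poll_seconds)
    raise TimeoutError(f"statement still running after {timeout_seconds}s")
```

Pair this with the statement ID returned by the immediate call; the terminal states you should expect from Livy are `available`, `error`, and `cancelled`.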