
Run Livy Statement

livy_run_statement

Execute PySpark, Scala, or SparkR code in an existing Livy session to process data in Microsoft Fabric. Supports synchronous or asynchronous execution with result tracking.

Instructions

Execute code in a Livy session.

Executes PySpark, Scala, or SparkR code in an existing Livy session. The session must be in 'idle' state to accept new statements.

Important Notes:

  • Use df.show() or df.printSchema() to inspect DataFrames before accessing columns

  • SHOW TABLES returns a 'namespace' column, not 'database', in Fabric (see the sketch after this list)

  • Avoid direct Row attribute access without schema verification

  • When with_wait=False, the call returns immediately with a statement ID; check the status separately (see the asynchronous sketch after the example below)
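
A minimal PySpark sketch of the first three notes, assuming the session is attached to a lakehouse with at least one table (column names follow Spark 3's SHOW TABLES output):

```python
# Inspect the result of SHOW TABLES before touching its columns.
tables = spark.sql("SHOW TABLES")
tables.printSchema()  # in Fabric the first column is 'namespace', not 'database'
tables.show()

# Access Row fields only by column names verified above.
for row in tables.collect():
    print(row["namespace"], row["tableName"])
```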

Parameters:

  • workspace_id: Fabric workspace ID.

  • lakehouse_id: Fabric lakehouse ID.

  • session_id: Livy session ID (must be in 'idle' state).

  • code: Code to execute (PySpark, Scala, or SparkR).

  • kind: Statement kind - 'pyspark' (default), 'scala', or 'sparkr'.

  • with_wait: If True (default), wait for statement completion before returning.

  • timeout_seconds: Maximum time to wait for statement completion (default: from config).

Returns: Dictionary with statement details including id, state, output, and execution details.
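
For orientation, a representative return value is sketched below; the field names are modeled on the standard Livy statement object and may differ slightly in this server:

```python
# Illustrative shape only -- field names modeled on the Livy statements API.
{
    "id": 0,
    "state": "available",  # waiting | running | available | error | cancelled
    "output": {
        "status": "ok",                # or "error"
        "execution_count": 0,
        "data": {"text/plain": "10"},
    },
}
```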

Example:

```python
# Execute PySpark code
result = livy_run_statement(
    workspace_id="12345678-1234-1234-1234-123456789abc",
    lakehouse_id="87654321-4321-4321-4321-210987654321",
    session_id="0",
    code="df = spark.range(10)\ndf.count()",
    kind="pyspark",
    with_wait=True
)

if result.get("state") == "available":
    output = result.get("output", {})
    if output.get("status") == "ok":
        print(f"Result: {output.get('data', {}).get('text/plain')}")
```
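
When with_wait=False, the call returns before the statement finishes, so the result must be polled. A hypothetical polling loop is sketched below; livy_get_statement is an assumed name for the companion status-check tool, so substitute whatever status tool this server actually exposes:

```python
import time

# Submit without waiting; only the statement's ID and initial state come back.
submitted = livy_run_statement(
    workspace_id="12345678-1234-1234-1234-123456789abc",
    lakehouse_id="87654321-4321-4321-4321-210987654321",
    session_id="0",
    code="df = spark.range(1000000)\ndf.count()",
    with_wait=False,
)

# Poll until the statement reaches a terminal state.
# NOTE: livy_get_statement is a hypothetical name for the status-check tool.
while True:
    status = livy_get_statement(
        workspace_id="12345678-1234-1234-1234-123456789abc",
        lakehouse_id="87654321-4321-4321-4321-210987654321",
        session_id="0",
        statement_id=submitted["id"],
    )
    if status.get("state") in ("available", "error", "cancelled"):
        break
    time.sleep(2)
```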

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| workspace_id | Yes | | |
| lakehouse_id | Yes | | |
| session_id | Yes | | |
| code | Yes | | |
| kind | No | | pyspark |
| with_wait | No | | |
| timeout_seconds | No | | |

Output Schema


No arguments

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the tool executes code asynchronously (with_wait parameter controls waiting), requires session state validation, returns a dictionary with statement details, and includes practical warnings about DataFrame inspection and Fabric-specific column names. This covers execution patterns, output structure, and domain-specific pitfalls.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by important notes, parameter explanations, return details, and a comprehensive example. Every section earns its place by providing essential guidance without redundancy. The use of bullet points and code blocks enhances readability while maintaining brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tool with 7 parameters, 0% schema description coverage, and no annotations (though an output schema is present), the description is exceptionally complete. It covers prerequisites (session state), parameter semantics, execution behavior (synchronous vs. asynchronous), return value interpretation, and practical examples with error handling. The output schema likely defines the dictionary structure, so the description appropriately focuses on usage context rather than repeating schema details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Given 0% schema description coverage, the description fully compensates by explaining all 7 parameters in detail. It clarifies each parameter's purpose (e.g., workspace_id as 'Fabric workspace ID', session_id 'must be in idle state'), default values (kind defaults to 'pyspark', with_wait defaults to true), and behavioral implications (timeout_seconds 'from config' if null). This adds significant meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Execute code in a Livy session') and resource ('existing Livy session'), distinguishing it from siblings like livy_create_session or livy_cancel_statement. It specifies the supported languages (PySpark, Scala, SparkR) and the required session state ('idle'), making the purpose highly specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool (e.g., 'session must be in idle state'), when not to use it (e.g., 'Avoid direct Row attribute access without schema verification'), and alternatives (e.g., 'check status separately' when with_wait=False). It also includes important notes for correct usage, such as handling SHOW TABLES in Fabric and DataFrame inspection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
