Fabric Data Engineering MCP Server

A Model Context Protocol (MCP) server for Microsoft Fabric Data Engineering that provides full execution access to Fabric Notebooks, Pipelines, Lakehouses, and Spark jobs.

Why This Exists

Currently available MCP servers don't cover Fabric Data Engineering execution:

  • @microsoft/fabric-mcp → API specs only, no execution

  • @azure/mcp → Azure management, not Fabric-specific

  • powerbi-modeling-mcp → Semantic models only

  • @bytebase/dbhub → SQL queries only

This MCP server fills the gap by providing execution capabilities for Fabric Data Engineering workloads.

Features

Notebook Operations

  • notebook_list - List all notebooks in a workspace

  • notebook_run - Execute a notebook (with optional parameters)

  • notebook_run_status - Check the status of a running notebook

  • notebook_run_cancel - Cancel a running notebook

Pipeline Operations

  • pipeline_list - List all data pipelines in a workspace

  • pipeline_run - Execute a pipeline (with optional parameters)

  • pipeline_run_status - Check the status of a running pipeline

  • pipeline_run_cancel - Cancel a running pipeline

Lakehouse Operations

  • lakehouse_list - List all Lakehouses in a workspace

  • lakehouse_get - Get Lakehouse details (including SQL endpoint info)

  • lakehouse_create - Create a new Lakehouse

  • lakehouse_delete - Delete a Lakehouse

  • lakehouse_tables_list - List all tables in a Lakehouse

  • lakehouse_table_load - Load data from OneLake into a table

Spark Job Operations

  • spark_job_list - List all Spark job definitions

  • spark_job_run - Execute a Spark job definition

  • spark_job_status - Check the status of a Spark job run

  • spark_job_cancel - Cancel a running Spark job

Workspace Operations

  • workspace_list - List all accessible workspaces

  • workspace_get - Get workspace details

  • workspace_items_list - List all items in a workspace (with optional type filter)

Scheduler Operations

  • schedule_list - List all schedules for an item

  • schedule_create - Create a schedule (Daily, Weekly, or Cron)

  • schedule_delete - Delete a schedule

  • schedule_enable - Enable a schedule

  • schedule_disable - Disable a schedule

Installation

npm install fabric-data-engineering-mcp

Or run directly with npx:

npx fabric-data-engineering-mcp

Authentication

The server supports multiple authentication methods via Azure Identity:

1. Azure CLI

No configuration needed! Just run:

az login

Then use the MCP server; it will automatically use your Azure CLI credentials.

2. Environment Variables (Service Principal)

Set these environment variables:

export AZURE_TENANT_ID="your-tenant-id"
export AZURE_CLIENT_ID="your-client-id"
export AZURE_CLIENT_SECRET="your-client-secret"

3. Managed Identity (Azure Hosted)

When running in Azure (App Service, Functions, VMs, AKS), the server automatically uses Managed Identity.

4. VS Code Azure Extension

If you have the Azure extension installed and signed in, the server can use those credentials.

MCP Configuration

For Claude Desktop / VS Code

Add to your MCP settings:

Using Azure CLI auth (no credentials needed):

{
  "mcpServers": {
    "fabric-data-engineering": {
      "command": "npx",
      "args": ["-y", "fabric-data-engineering-mcp"]
    }
  }
}

Using Service Principal:

{
  "mcpServers": {
    "fabric-data-engineering": {
      "command": "npx",
      "args": ["-y", "fabric-data-engineering-mcp"],
      "env": {
        "AZURE_TENANT_ID": "your-tenant-id",
        "AZURE_CLIENT_ID": "your-client-id",
        "AZURE_CLIENT_SECRET": "your-secret"
      }
    }
  }
}

Required Permissions

Your Azure identity needs the following permissions in Microsoft Fabric:

  • Workspace: At least Contributor role on target workspaces

  • Items: Execute permissions on notebooks, pipelines, and Spark jobs

  • Lakehouses: Read/Write permissions for Lakehouse operations

Usage Examples

List Workspaces

User: List all my Fabric workspaces
Assistant: [calls workspace_list]

Run a Notebook

User: Run the "Daily ETL" notebook in my "Analytics" workspace
Assistant: [calls workspace_list to find the workspace ID]
           [calls notebook_list to find the notebook ID]
           [calls notebook_run with the workspace and notebook IDs]
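The name-to-ID resolution in this example can be sketched in TypeScript. The tool names (workspace_list, notebook_list) come from this server, but the callTool helper, its mocked responses, and the response shapes are illustrative assumptions, not the server's documented schema:

```typescript
// Sketch of resolving display names to IDs before calling notebook_run.
// callTool is a stand-in for a real MCP client; its responses are mocked
// here, and the { id, displayName } shape is an assumption.
type Item = { id: string; displayName: string };

async function callTool(name: string, args: Record<string, unknown>): Promise<Item[]> {
  // Mocked responses so the example is self-contained
  if (name === "workspace_list") return [{ id: "ws-1", displayName: "Analytics" }];
  if (name === "notebook_list") return [{ id: "nb-7", displayName: "Daily ETL" }];
  return [];
}

async function resolveNotebook(workspaceName: string, notebookName: string) {
  const workspaces = await callTool("workspace_list", {});
  const ws = workspaces.find((w) => w.displayName === workspaceName);
  if (!ws) throw new Error(`Workspace not found: ${workspaceName}`);

  const notebooks = await callTool("notebook_list", { workspaceId: ws.id });
  const nb = notebooks.find((n) => n.displayName === notebookName);
  if (!nb) throw new Error(`Notebook not found: ${notebookName}`);

  return { workspaceId: ws.id, notebookId: nb.id };
}
```

The same two-step lookup applies to pipelines and Spark job definitions.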

Check Job Status

User: What's the status of my notebook run?
Assistant: [calls notebook_run_status with run ID]

Create a Lakehouse

User: Create a new Lakehouse called "SalesData" in my workspace
Assistant: [calls lakehouse_create with displayName "SalesData"]

Schedule a Pipeline

User: Schedule my "Nightly Refresh" pipeline to run every day at 2am
Assistant: [calls schedule_create with Daily schedule type]

Complementary MCP Servers

This server is designed to work alongside:

  • @bytebase/dbhub → SQL queries against Fabric Warehouse/Lakehouse SQL endpoints

  • powerbi-modeling-mcp → Semantic model operations via XMLA

  • @azure/mcp → General Azure resource management

  • @microsoft/fabric-mcp → API documentation and OneLake file operations

Development

Build from Source

git clone https://github.com/your-repo/fabric-data-engineering-mcp
cd fabric-data-engineering-mcp
npm install
npm run build

Run in Development Mode

npm run dev

Type Check

npm run typecheck

API Reference

Long-Running Operations

Notebook runs, pipeline runs, and Spark job runs are asynchronous operations. The *_run tools return immediately with a runId that you can use with *_run_status to poll for completion.

Status values:

  • NotStarted - Job is queued but hasn't started

  • InProgress - Job is currently running

  • Completed - Job finished successfully

  • Failed - Job failed (check failureReason)

  • Cancelled - Job was cancelled
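A caller can poll a *_run_status tool until one of the terminal states above is reached. The sketch below is a generic polling loop; the getStatus callback stands in for an actual status tool call, and the default interval is an illustrative choice, not a documented value:

```typescript
// Generic polling loop for asynchronous runs. getStatus is a stand-in
// for a *_run_status tool call; the interval is illustrative.
type RunStatus = "NotStarted" | "InProgress" | "Completed" | "Failed" | "Cancelled";

const TERMINAL: RunStatus[] = ["Completed", "Failed", "Cancelled"];

async function waitForRun(
  getStatus: () => Promise<RunStatus>,
  intervalMs = 5000,
): Promise<RunStatus> {
  for (;;) {
    const status = await getStatus();
    if (TERMINAL.includes(status)) return status; // stop on any terminal state
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

On a Failed result, fetch the failureReason from the status response before retrying.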

Error Handling

The server provides detailed error messages:

{
  "errorCode": "ItemNotFound",
  "message": "The specified item was not found"
}

Common error codes:

  • ItemNotFound - Workspace, notebook, pipeline, or Lakehouse doesn't exist

  • Unauthorized - Missing permissions

  • InvalidRequest - Invalid parameters

  • TooManyRequests - Rate limited (server auto-retries)

Environment Variables

Variable            | Description                                              | Required
AZURE_TENANT_ID     | Azure AD tenant ID                                       | For service principal auth
AZURE_CLIENT_ID     | Application (client) ID                                  | For service principal auth
AZURE_CLIENT_SECRET | Client secret                                            | For service principal auth
FABRIC_AUTH_METHOD  | Auth method: default, client_credentials, or interactive | No (default: default)
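The FABRIC_AUTH_METHOD values plausibly select among @azure/identity credential types. The mapping below is an assumption about the server's internals; it returns class names rather than constructing credentials so the sketch stays self-contained:

```typescript
// Hypothetical mapping from FABRIC_AUTH_METHOD to an @azure/identity
// credential class; an assumption, not the server's actual code.
function credentialFor(method: string = "default"): string {
  switch (method) {
    case "client_credentials":
      return "ClientSecretCredential"; // reads AZURE_TENANT_ID / AZURE_CLIENT_ID / AZURE_CLIENT_SECRET
    case "interactive":
      return "InteractiveBrowserCredential"; // opens a browser sign-in
    default:
      return "DefaultAzureCredential"; // chains env vars, managed identity, Azure CLI, VS Code
  }
}
```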

Troubleshooting

"No valid Azure credentials found"

Run az login to authenticate with Azure CLI, or set the service principal environment variables.

"Application not found in tenant"

Verify your AZURE_CLIENT_ID and AZURE_TENANT_ID are correct.

"Multi-factor authentication required"

Use Azure CLI auth (az login) which handles MFA, or configure your app registration for MFA.

Rate Limiting

The server automatically retries on HTTP 429 responses with exponential backoff. If you're still seeing rate limit errors, reduce the frequency of your requests.
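The retry behavior described above can be sketched as exponential backoff. The base delay, cap, and attempt count below are assumptions for illustration, not the server's actual settings:

```typescript
// Exponential backoff sketch for HTTP 429 handling. The delay schedule
// and retry limit are illustrative, not the server's documented values.
function backoffDelays(attempts: number, baseMs = 500, capMs = 30_000): number[] {
  const delays: number[] = [];
  for (let i = 0; i < attempts; i++) {
    delays.push(Math.min(baseMs * 2 ** i, capMs)); // double each attempt, capped
  }
  return delays;
}

async function withRetry<T>(fn: () => Promise<T>, attempts = 5, baseMs = 500): Promise<T> {
  const delays = backoffDelays(attempts, baseMs);
  for (let i = 0; ; i++) {
    try {
      return await fn();
    } catch (err) {
      if (i >= attempts - 1) throw err; // out of retries
      await new Promise((resolve) => setTimeout(resolve, delays[i]));
    }
  }
}
```

Adding random jitter to each delay further reduces contention when many clients back off at once.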

License

MIT

Contributing

Contributions welcome! Please read our contributing guidelines first.
