Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type `@` followed by the MCP server name and your instructions, e.g., "@Fabric Data Engineering MCP Server Run the Daily ETL notebook in the Analytics workspace"
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
Fabric Data Engineering MCP Server
A Model Context Protocol (MCP) server for Microsoft Fabric Data Engineering that provides full execution access to Fabric Notebooks, Pipelines, Lakehouses, and Spark jobs.
Why This Exists
Currently available MCP servers don't cover Fabric Data Engineering execution:
@microsoft/fabric-mcp → API specs only, no execution
@azure/mcp → Azure management, not Fabric-specific
powerbi-modeling-mcp → Semantic models only
@bytebase/dbhub → SQL queries only
This MCP server fills the gap by providing execution capabilities for Fabric Data Engineering workloads.
Features
Notebook Operations
notebook_list - List all notebooks in a workspace
notebook_run - Execute a notebook (with optional parameters)
notebook_run_status - Check the status of a running notebook
notebook_run_cancel - Cancel a running notebook
Pipeline Operations
pipeline_list - List all data pipelines in a workspace
pipeline_run - Execute a pipeline (with optional parameters)
pipeline_run_status - Check the status of a running pipeline
pipeline_run_cancel - Cancel a running pipeline
Lakehouse Operations
lakehouse_list - List all Lakehouses in a workspace
lakehouse_get - Get Lakehouse details (including SQL endpoint info)
lakehouse_create - Create a new Lakehouse
lakehouse_delete - Delete a Lakehouse
lakehouse_tables_list - List all tables in a Lakehouse
lakehouse_table_load - Load data from OneLake into a table
Spark Job Operations
spark_job_list - List all Spark job definitions
spark_job_run - Execute a Spark job definition
spark_job_status - Check the status of a Spark job run
spark_job_cancel - Cancel a running Spark job
Workspace Operations
workspace_list - List all accessible workspaces
workspace_get - Get workspace details
workspace_items_list - List all items in a workspace (with optional type filter)
Scheduler Operations
schedule_list - List all schedules for an item
schedule_create - Create a schedule (Daily, Weekly, or Cron)
schedule_delete - Delete a schedule
schedule_enable - Enable a schedule
schedule_disable - Disable a schedule
Installation
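Assuming the server is published to npm (the package name below is illustrative, not confirmed), install it globally:

```
npm install -g fabric-dataeng-mcp
```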
Or run directly with npx:
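(The package name below is illustrative.)

```
npx -y fabric-dataeng-mcp
```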
Authentication
The server supports multiple authentication methods via Azure Identity:
1. Azure CLI (Recommended for Development)
No configuration needed! Just run:
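With the Azure CLI installed, that is:

```
az login
```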
Then use the MCP server - it will automatically use your Azure CLI credentials.
2. Environment Variables (Service Principal)
Set these environment variables:
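These are the standard Azure Identity service principal variables (values below are placeholders):

```shell
export AZURE_TENANT_ID="<your-tenant-id>"
export AZURE_CLIENT_ID="<your-client-id>"
export AZURE_CLIENT_SECRET="<your-client-secret>"
```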
3. Managed Identity (Azure Hosted)
When running in Azure (App Service, Functions, VMs, AKS), the server automatically uses Managed Identity.
4. VS Code Azure Extension
If you have the Azure extension installed and signed in, the server can use those credentials.
MCP Configuration
For Claude Desktop / VS Code
Add to your MCP settings:
Using Azure CLI auth (no credentials needed):
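A minimal config sketch, using Claude Desktop's `mcpServers` format (the server key and npm package name are illustrative):

```json
{
  "mcpServers": {
    "fabric-dataeng": {
      "command": "npx",
      "args": ["-y", "fabric-dataeng-mcp"]
    }
  }
}
```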
Using Service Principal:
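The same config sketch with service principal credentials passed through the `env` block (server key, package name, and values are illustrative):

```json
{
  "mcpServers": {
    "fabric-dataeng": {
      "command": "npx",
      "args": ["-y", "fabric-dataeng-mcp"],
      "env": {
        "AZURE_TENANT_ID": "<your-tenant-id>",
        "AZURE_CLIENT_ID": "<your-client-id>",
        "AZURE_CLIENT_SECRET": "<your-client-secret>"
      }
    }
  }
}
```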
Required Permissions
Your Azure identity needs the following permissions in Microsoft Fabric:
Workspace: At least Contributor role on target workspaces
Items: Execute permissions on notebooks, pipelines, and Spark jobs
Lakehouses: Read/Write permissions for Lakehouse operations
Usage Examples
List Workspaces
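An example prompt (invokes the workspace_list tool):

```
List all Fabric workspaces I have access to
```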
Run a Notebook
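An example prompt (workspace and notebook names are illustrative; invokes notebook_run):

```
Run the "Daily ETL" notebook in the Analytics workspace
```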
Check Job Status
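An example prompt (uses the runId returned by a previous run; invokes the matching *_run_status tool):

```
Check the status of the notebook run you just started
```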
Create a Lakehouse
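An example prompt (names are illustrative; invokes lakehouse_create):

```
Create a new Lakehouse called "SalesData" in the Analytics workspace
```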
Schedule a Pipeline
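An example prompt (pipeline name and time are illustrative; invokes schedule_create):

```
Schedule the "Nightly Ingest" pipeline to run daily at 2 AM
```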
Complementary MCP Servers
This server is designed to work alongside:
@bytebase/dbhub → SQL queries against Fabric Warehouse/Lakehouse SQL endpoints
powerbi-modeling-mcp → Semantic model operations via XMLA
@azure/mcp → General Azure resource management
@microsoft/fabric-mcp → API documentation and OneLake file operations
Development
Build from Source
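A typical flow for a Node/TypeScript project; the repository URL is omitted here and the directory and script names are assumptions:

```
git clone <repository-url>
cd fabric-dataeng-mcp
npm install
npm run build
```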
Run in Development Mode
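Assuming a conventional `dev` script is defined in package.json:

```
npm run dev
```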
Type Check
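Assuming a standard TypeScript setup, run the compiler without emitting output:

```
npx tsc --noEmit
```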
API Reference
Long-Running Operations
Notebook runs, pipeline runs, and Spark job runs are asynchronous operations. The `*_run` tools return immediately with a `runId` that you can use with the corresponding `*_run_status` tool to poll for completion.
Status values:
NotStarted - Job is queued but hasn't started
InProgress - Job is currently running
Completed - Job finished successfully
Failed - Job failed (check failureReason)
Cancelled - Job was cancelled
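In chat, the run-then-poll pattern looks like this two-step exchange (names are illustrative; the first prompt returns a runId that the second one uses):

```
Run the "Daily ETL" notebook in the Analytics workspace
Check the status of notebook run <runId>
```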
Error Handling
The server provides detailed error messages:
Common error codes:
ItemNotFound - Workspace, notebook, pipeline, or Lakehouse doesn't exist
Unauthorized - Missing permissions
InvalidRequest - Invalid parameters
TooManyRequests - Rate limited (server auto-retries)
Environment Variables
| Variable | Description | Required |
|----------|-------------|----------|
| `AZURE_TENANT_ID` | Azure AD tenant ID | For service principal auth |
| `AZURE_CLIENT_ID` | Application (client) ID | For service principal auth |
| `AZURE_CLIENT_SECRET` | Client secret | For service principal auth |
| | Auth method | No |
Troubleshooting
"No valid Azure credentials found"
Run `az login` to authenticate with Azure CLI, or set the service principal environment variables.
"Application not found in tenant"
Verify your `AZURE_CLIENT_ID` and `AZURE_TENANT_ID` are correct.
"Multi-factor authentication required"
Use Azure CLI auth (`az login`), which handles MFA, or configure your app registration for MFA.
Rate Limiting
The server automatically retries on HTTP 429 responses with exponential backoff. If you're still seeing rate limit errors, reduce the frequency of your requests.
License
MIT
Contributing
Contributions welcome! Please read our contributing guidelines first.