1. Click **Install Server**.
2. Wait a few minutes for the server to deploy. Once ready, it will show a **Started** state.
3. In the chat, type `@` followed by the MCP server name and your instructions, e.g., `@ms-fabric-mcp-server List all the notebooks in my 'Sales Analytics' workspace`.

That's it! The server will respond to your query, and you can continue using it as needed.
# ms-fabric-mcp-server
A Model Context Protocol (MCP) server for Microsoft Fabric. Exposes Fabric operations (workspaces, notebooks, SQL, Livy, pipelines, jobs) as MCP tools that AI agents can invoke.
> ⚠️ **Warning:** This package is intended for development environments only and should not be used in production. It includes tools that can perform destructive operations (e.g., `delete_notebook`, `delete_item`) and execute arbitrary code via Livy Spark sessions. Always review AI-generated tool calls before execution.
## Quick Start

The fastest way to use this MCP server is with `uvx`:
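A minimal sketch, assuming the package is published on PyPI under the same name as this repository:

```bash
uvx ms-fabric-mcp-server
```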
## Installation
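For a persistent install, a `pip` sketch (again assuming the PyPI name matches the repository; the `[sql]` extra is covered under Available Tools):

```bash
pip install ms-fabric-mcp-server          # core tools only
pip install "ms-fabric-mcp-server[sql]"   # with the optional SQL tools
```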
## Authentication

Uses `DefaultAzureCredential` from `azure-identity`; no explicit credential configuration is needed. It automatically tries multiple authentication methods, in order:
1. Environment credentials (`AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, `AZURE_CLIENT_SECRET`)
2. Managed Identity (when running on Azure)
3. Azure CLI credentials (`az login`)
4. VS Code credentials
5. Azure PowerShell credentials
No Fabric-specific auth environment variables are needed; the server just works if you're already authenticated via any of the methods above.
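For local development, the Azure CLI is usually the simplest of these:

```bash
# Sign in once; DefaultAzureCredential picks up the cached CLI credentials.
az login
```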
## Usage

### VS Code Integration

Add to your VS Code MCP settings (`.vscode/mcp.json` or your user settings):
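A minimal sketch, assuming the `uvx` launch from Quick Start (adjust the command and args to your install):

```json
{
  "servers": {
    "ms-fabric-mcp-server": {
      "type": "stdio",
      "command": "uvx",
      "args": ["ms-fabric-mcp-server"]
    }
  }
}
```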
### Claude Desktop Integration

Add to your `claude_desktop_config.json`:
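A sketch with the same assumed `uvx` launch:

```json
{
  "mcpServers": {
    "ms-fabric-mcp-server": {
      "command": "uvx",
      "args": ["ms-fabric-mcp-server"]
    }
  }
}
```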
### Codex Integration

Add to your Codex `config.toml`:
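A sketch assuming Codex's `[mcp_servers.<name>]` convention and the same `uvx` launch:

```toml
[mcp_servers.ms-fabric-mcp-server]
command = "uvx"
args = ["ms-fabric-mcp-server"]
```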
### Running Standalone
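The server speaks MCP over stdio, so running it directly is mostly useful as a smoke test; a sketch assuming the `uvx` entry point:

```bash
uvx ms-fabric-mcp-server
```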
### Logging & Debugging (optional)

MCP stdio servers must keep protocol traffic on stdout, so redirect stderr to capture logs. Giving the agent read access to the log file is a powerful way to debug failures. You can also set `AZURE_LOG_LEVEL` (Azure SDK) and `MCP_LOG_LEVEL` (server) to control verbosity.
VS Code (Bash):
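A sketch that wraps the launch in `bash` to redirect stderr to a file (the log path and level are illustrative):

```json
{
  "servers": {
    "ms-fabric-mcp-server": {
      "command": "bash",
      "args": [
        "-c",
        "MCP_LOG_LEVEL=DEBUG uvx ms-fabric-mcp-server 2>> /tmp/ms-fabric-mcp-server.log"
      ]
    }
  }
}
```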
VS Code (PowerShell):
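The same idea via PowerShell (again, path and level are illustrative):

```json
{
  "servers": {
    "ms-fabric-mcp-server": {
      "command": "powershell",
      "args": [
        "-Command",
        "$env:MCP_LOG_LEVEL='DEBUG'; uvx ms-fabric-mcp-server 2>> $env:TEMP\\ms-fabric-mcp-server.log"
      ]
    }
  }
}
```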
### Programmatic Usage (Library Mode)
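The library API is not documented here, so the names below are hypothetical; the sketch only illustrates the general shape of embedding the client in your own code:

```python
# Hypothetical sketch: the module path, class, and method names are
# assumptions, not this package's documented API.
from ms_fabric_mcp_server import FabricClient  # assumed import

# Credentials resolve via DefaultAzureCredential, as in server mode.
client = FabricClient()

# Assumed helper mirroring the Workspace tool group.
for workspace in client.list_workspaces():
    print(workspace["displayName"])
```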
## Configuration

Environment variables (all optional, with sensible defaults):
| Variable | Default | Description |
| --- | --- | --- |
| | | Fabric API base URL |
| | | OAuth scopes |
| | | API timeout (seconds) |
| | | Max retry attempts |
| | | Backoff factor |
| | | Livy timeout (seconds) |
| | | Livy polling interval |
| | | Livy statement wait timeout |
| | | Livy session wait timeout |
| | | Server name for MCP |
| `MCP_LOG_LEVEL` | | Logging level |
| `AZURE_LOG_LEVEL` | | Azure SDK logging level |
Copy `.env.example` to `.env` and customize as needed.
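For example:

```bash
cp .env.example .env
```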
## Available Tools

The server provides 35 core tools, plus 3 additional SQL tools when installed with the `[sql]` extra (38 total).
| Tool Group | Count | Tools |
| --- | --- | --- |
| Workspace | 1 | |
| Item | 2 | |
| Notebook | 6 | |
| Job | 4 | |
| Livy | 8 | |
| Pipeline | 5 | |
| Semantic Model | 7 | |
| Power BI | 2 | |
| SQL (optional) | 3 | |
### SQL Tools (Optional)

SQL tools require `pyodbc` and the Microsoft ODBC Driver for SQL Server:
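A sketch for Debian/Ubuntu, assuming Microsoft's apt repository is already registered:

```bash
# Install the package with the optional SQL extra.
pip install "ms-fabric-mcp-server[sql]"

# Install the Microsoft ODBC Driver 18 for SQL Server.
sudo ACCEPT_EULA=Y apt-get install -y msodbcsql18
```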
If `pyodbc` is not available, the server starts with 35 tools (SQL tools disabled).
## Development

### Integration tests
Integration tests run against live Fabric resources and are opt-in.
To get started locally, copy the example env file:
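Assuming the same `.env.example` mentioned under Configuration:

```bash
cp .env.example .env
```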
Required environment variables:
- `FABRIC_INTEGRATION_TESTS=1`
- `FABRIC_TEST_WORKSPACE_NAME`
- `FABRIC_TEST_LAKEHOUSE_NAME`
- `FABRIC_TEST_SQL_DATABASE`
Optional pipeline copy inputs:
- `FABRIC_TEST_SOURCE_CONNECTION_ID`
- `FABRIC_TEST_SOURCE_TYPE`
- `FABRIC_TEST_SOURCE_SCHEMA`
- `FABRIC_TEST_SOURCE_TABLE`
- `FABRIC_TEST_DEST_CONNECTION_ID`
- `FABRIC_TEST_DEST_TABLE_NAME` (optional override; defaults to the source table name)
Run integration tests:
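A sketch; the test path is an assumption, and the variables above must be exported (or loaded from `.env`) first:

```bash
FABRIC_INTEGRATION_TESTS=1 pytest tests/integration -v
```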
Notes:
- SQL tests require `pyodbc` and a SQL Server ODBC driver.
- Tests may skip when optional dependencies or environment variables are missing.
- These tests use live Fabric resources and may incur costs or side effects.
## License
MIT