
Force Fabric MCP Server

by tmdaidevs

✨ Key Features

πŸ” Detect β€” 120 Rules Across 4 Fabric Items

Item

Rules

What's Scanned

🏠 Lakehouse

29

SQL Endpoint + OneLake Delta Log (VACUUM history, file sizes, partitioning, retention)

πŸ—οΈ Warehouse

39

Schema, query performance, security (PII, RLS), database config

πŸ“Š Eventhouse

20/db

Extent fragmentation, caching/retention/merge/encoding/partitioning policies, ingestion, query performance, materialized views, stored functions

πŸ“ Semantic Model

32

DAX expression anti-patterns, model structure, COLUMNSTATISTICS BPA

120 total

🔧 Fix — 45 Auto-Fixable Issues

| Item | Auto-Fixes | Method |
|------|------------|--------|
| 🏗️ Warehouse | 12 | SQL DDL executed directly |
| 🏠 Lakehouse | 14 | REST API (3) + Notebook Spark SQL (11) |
| 📐 Semantic Model | 12 | model.bim REST API (6) + Notebook sempy (6) |
| 📊 Eventhouse | 7 | KQL management commands (with dry-run preview) |
| **Total** | **45** | |

📊 Unified Output

Every scan returns a clean results table — only issues are shown; passed rules are counted in the summary:

```
29 rules — ✅ 18 passed | 🔴 1 failed | 🟡 10 warning
```

| Rule | Status | Finding | Recommendation |
|------|--------|---------|----------------|
| LH-007 Key Columns Are NOT NULL | 🔴 | 16 key column(s) allow NULL: table.finding_id, ... | Add NOT NULL constraints |
| LH-017 Regular VACUUM Executed | 🟡 | 4 table(s) need VACUUM: table1, table2, ... | Run VACUUM weekly |

🚀 Quick Start

Prerequisites

  • Node.js 18+

  • Azure CLI with az login completed

  • Fabric capacity with items to scan

Install

```shell
git clone https://github.com/tmdaidevs/Force-Fabric-MCP-Server.git
cd Force-Fabric-MCP-Server
npm install
npm run build
```

Configure VS Code

Add to .vscode/mcp.json in your project:

```json
{
  "servers": {
    "fabric-optimization": {
      "type": "stdio",
      "command": "node",
      "args": ["dist/index.js"],
      "cwd": "/path/to/Force-Fabric-MCP-Server"
    }
  }
}
```

Use

1. "Login to Fabric with azure_cli"
2. "List all lakehouses in workspace <id>"
3. "Scan lakehouse <id> in workspace <id>"
4. "Fix warehouse <id> in workspace <id>"

πŸ” Detect & Scan

Available Scan Tools

Tool

What It Does

lakehouse_optimization_recommendations

Scans SQL Endpoint + reads Delta Log files from OneLake

warehouse_optimization_recommendations

Connects via SQL and runs 39 diagnostic queries

warehouse_analyze_query_patterns

Focused analysis of slow/frequent/failed queries

eventhouse_optimization_recommendations

Runs KQL diagnostics on each KQL database

semantic_model_optimization_recommendations

Executes DAX + MDSCHEMA DMVs for BPA analysis

Data Sources Used

```
                          ┌──────────────────────────────────┐
                          │         Fabric REST API          │
                          │  Workspaces, Items, Metadata     │
                          └────────────────┬─────────────────┘
                                           │
          ┌──────────────┬────────────────┼────────────────┬──────────────┐
          ▼              ▼                ▼                ▼              ▼
   ┌─────────────┐ ┌──────────┐ ┌──────────────┐ ┌──────────┐ ┌──────────────┐
   │  SQL Client │ │ KQL REST │ │ OneLake ADLS │ │ DAX API  │ │ MDSCHEMA DMV │
   │  (tedious)  │ │   API    │ │  Gen2 API    │ │executeQry│ │  via REST    │
   └──────┬──────┘ └────┬─────┘ └──────┬───────┘ └────┬─────┘ └──────┬───────┘
          │             │              │              │              │
    Lakehouse SQL   Eventhouse   Delta Log JSON   Semantic       Semantic
    Warehouse SQL   KQL DBs      File Metadata    Model DAX      Model Meta
```

🔧 Auto-Fix

Warehouse Fixes (warehouse_fix)

Run all safe fixes or specify individual rule IDs:

| Rule ID | What It Fixes | SQL Command |
|---------|---------------|-------------|
| WH-001 | Missing primary keys | `ALTER TABLE ADD CONSTRAINT PK NOT ENFORCED` |
| WH-008 | Stale statistics (>30 days) | `UPDATE STATISTICS [table]` |
| WH-009 | Disabled constraints | `ALTER TABLE WITH CHECK CHECK CONSTRAINT ALL` |
| WH-016 | Missing audit columns | `ALTER TABLE ADD created_at DATETIME2 DEFAULT GETDATE()` |
| WH-018 | Unmasked sensitive data | `ALTER COLUMN ADD MASKED WITH (FUNCTION='...')` |
| WH-026 | Auto-update statistics off | `ALTER DATABASE SET AUTO_UPDATE_STATISTICS ON` |
| WH-027 | Result set caching off | `ALTER DATABASE SET RESULT_SET_CACHING ON` |
| WH-028 | Snapshot isolation off | `ALTER DATABASE SET ALLOW_SNAPSHOT_ISOLATION ON` |
| WH-029 | Page verify not CHECKSUM | `ALTER DATABASE SET PAGE_VERIFY CHECKSUM` |
| WH-030 | ANSI settings off | `ALTER DATABASE SET ANSI_NULLS ON; ...` |
| WH-032 | Missing statistics | `UPDATE STATISTICS [table]` |
| WH-036 | NOT NULL without defaults | `ALTER TABLE ADD DEFAULT ... FOR column` |

Eventhouse Fixes (eventhouse_fix)

Supports dry-run mode (dryRun: true) to preview commands without executing them.

| Rule ID | What It Fixes | KQL Command |
|---------|---------------|-------------|
| EH-002 | Fragmented extents | `.merge table ['name']` |
| EH-004 | Missing caching policy | `.alter table/database policy caching hot = 30d` |
| EH-005 | Missing retention policy | `.alter table/database policy retention softdelete = 365d` |
| EH-006 | Unhealthy materialized views | `.enable materialized-view ['name']` |
| EH-014 | Missing ingestion batching | `.alter table/database policy ingestionbatching ...` |
| EH-016 | Large tables without partitioning | `.alter table policy partitioning ...` |
| EH-017 | Suboptimal merge policy | `.alter table policy merge ...` |
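A dry run returns the management commands a fix would issue without executing them. The sketch below illustrates the idea; the `EventhouseFixArgs` shape and the `previewCachingFix` helper are illustrative assumptions based on the table above, not the server's actual schema:

```typescript
// Hypothetical argument shape for eventhouse_fix; field names are
// assumptions, not the server's real input schema.
interface EventhouseFixArgs {
  workspaceId: string;
  eventhouseId: string;
  ruleIds?: string[]; // e.g. ["EH-004"]; omit to run all safe fixes
  dryRun?: boolean;   // true => preview KQL commands without executing
}

// Build the KQL a dry run would preview for a missing caching policy (EH-004).
function previewCachingFix(table: string, hotDays: number): string {
  return `.alter table ${table} policy caching hot = ${hotDays}d`;
}

const args: EventhouseFixArgs = {
  workspaceId: "<workspace-id>",
  eventhouseId: "<eventhouse-id>",
  ruleIds: ["EH-004"],
  dryRun: true,
};

console.log(previewCachingFix("events", 30)); // .alter table events policy caching hot = 30d
```

Reviewing the previewed commands first, then re-running with `dryRun` off, keeps policy changes auditable.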

Lakehouse Fixes (lakehouse_run_table_maintenance)

| Fix | Parameters |
|-----|------------|
| OPTIMIZE with V-Order | `optimizeSettings: { vOrder: true }` |
| Z-Order by columns | `optimizeSettings: { zOrderColumns: ["col1", "col2"] }` |
| VACUUM stale files | `vacuumSettings: { retentionPeriod: "7.00:00:00" }` |
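The parameter groups above can be combined in one maintenance call. This payload is only a sketch: the wrapper field names (`workspaceId`, `lakehouseId`, `tableName`) are assumptions, while the settings objects mirror the table above:

```typescript
// Hypothetical payload for lakehouse_run_table_maintenance; the exact
// top-level argument names are assumptions.
const maintenanceArgs = {
  workspaceId: "<workspace-id>",
  lakehouseId: "<lakehouse-id>",
  tableName: "sales",
  // OPTIMIZE with V-Order and Z-Order in a single pass
  optimizeSettings: { vOrder: true, zOrderColumns: ["region", "order_date"] },
  // VACUUM retention uses d.hh:mm:ss format (here: 7 days)
  vacuumSettings: { retentionPeriod: "7.00:00:00" },
};

console.log(JSON.stringify(maintenanceArgs.optimizeSettings));
```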

Semantic Model Fixes (semantic_model_fix)

Downloads model.bim, applies modifications, uploads back:

| Fix ID | What It Fixes | Method |
|--------|---------------|--------|
| SM-FIX-FORMAT | Add format strings to measures without one | model.bim |
| SM-FIX-DESC | Add descriptions to visible tables | model.bim |
| SM-FIX-HIDDEN | Set IsAvailableInMDX=false on hidden columns | model.bim |
| SM-FIX-DATE | Mark date/calendar tables as Date table | model.bim |
| SM-FIX-KEY | Set IsKey=true on PK columns in relationships | model.bim |
| SM-FIX-AUTODATE | Remove auto-date tables | model.bim |

📓 Notebook-Based Fixes

For fixes that require Spark SQL, the MCP server creates a temporary Notebook, runs it, and deletes it:

```
1. POST   /notebooks              → Create temp notebook with fix code
2. POST   /items/{id}/jobs        → Execute notebook
3. GET    /items/{id}/jobs/{job}  → Poll until complete
4. DELETE /notebooks/{id}         → Clean up
```
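The create → run → poll → delete lifecycle above can be sketched as a simple async loop. `FabricApi` here is a hypothetical thin HTTP wrapper, not the server's real client; only the endpoint paths come from the steps listed:

```typescript
// Hypothetical minimal API surface; the real client in the repo differs.
interface FabricApi {
  post(path: string, body?: unknown): Promise<{ id: string }>;
  get(path: string): Promise<{ status: string }>;
  delete(path: string): Promise<void>;
}

async function runTempNotebook(api: FabricApi, fixCode: string): Promise<string> {
  const nb = await api.post("/notebooks", { code: fixCode });       // 1. create
  const job = await api.post(`/items/${nb.id}/jobs`);               // 2. execute
  let status = "InProgress";
  while (status === "InProgress") {                                 // 3. poll
    await new Promise((resolve) => setTimeout(resolve, 10));        // short delay for the sketch
    status = (await api.get(`/items/${nb.id}/jobs/${job.id}`)).status;
  }
  await api.delete(`/notebooks/${nb.id}`);                          // 4. clean up
  return status;
}
```

Deleting the notebook in step 4 regardless of job outcome keeps the workspace free of temporary items.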

Lakehouse Notebook Fixes

| Rule | Spark SQL Command |
|------|-------------------|
| LH-003 | `CONVERT TO DELTA spark_catalog.lakehouse.table` |
| LH-005 | `DROP TABLE lakehouse.table` |
| LH-009 | `ALTER TABLE lakehouse.table RENAME COLUMN old TO new` |
| LH-014 | `ALTER TABLE t ADD COLUMN created_at TIMESTAMP DEFAULT current_timestamp()` |
| LH-020 | `ALTER TABLE t SET TBLPROPERTIES ('delta.autoOptimize.optimizeWrite'='true')` |
| LH-021 | `ALTER TABLE t SET TBLPROPERTIES ('delta.logRetentionDuration'='interval 30 days')` |
| LH-024 | `ALTER TABLE t SET TBLPROPERTIES ('delta.dataSkippingNumIndexedCols'='32')` |
| LH-S04 | `ALTER TABLE t ADD COLUMN id BIGINT` |

Semantic Model Notebook Fixes (via sempy_labs)

| Fix | sempy Code |
|-----|------------|
| Remove Calculated Columns | `tom.remove_column(table, column)` |
| Remove Calculated Tables | `tom.remove_table(table)` |
| Fix Bi-directional Relationships | `rel.CrossFilteringBehavior = OneDirection` |
| Fix RLS Expressions | `table_permission.FilterExpression = ...` |
| Sync DirectLake Schema | `labs.update_direct_lake_model_lakehouse_schema()` |
| Refresh Model | `fabric.refresh_dataset(dataset, workspace)` |


📋 Rule Reference

Summary

| Category | HIGH | MEDIUM | LOW | INFO | Total | Auto-Fix |
|----------|------|--------|-----|------|-------|----------|
| 🏠 Lakehouse | 5 | 14 | 9 | 1 | 29 | 14 (3 REST + 11 Notebook) |
| 🏗️ Warehouse | 8 | 17 | 12 | 0 | 39 | 12 (SQL DDL) |
| 📊 Eventhouse | 4 | 7 | 3 | 3 | 20 | 7 (KQL + dry-run) |
| 📐 Semantic Model | 7 | 14 | 9 | 0 | 32 | 12 (6 model.bim + 6 Notebook) |
| **Total** | **24** | **52** | **33** | **4** | **120** | **45** |

🏠 Lakehouse Rules

| Rule ID | Rule | Category | Severity | Auto-Fix |
|---------|------|----------|----------|----------|
| LH-001 | SQL Endpoint Active | Availability | HIGH | — |
| LH-002 | Medallion Architecture Naming | Maintainability | LOW | — |
| LH-003 | All Tables Use Delta Format | Performance | HIGH | 📓 Notebook |
| LH-004 | Table Maintenance Recommended | Performance | MEDIUM | 🔧 REST API |
| LH-005 | No Empty Tables | Data Quality | MEDIUM | 📓 Notebook |
| LH-006 | No Over-Provisioned String Columns | Performance | MEDIUM | — |
| LH-007 | Key Columns Are NOT NULL | Data Quality | HIGH | — |
| LH-008 | No Float/Real Precision Issues | Data Quality | MEDIUM | — |
| LH-009 | Column Naming Convention | Maintainability | LOW | 📓 Notebook |
| LH-010 | Date Columns Use Proper Types | Data Quality | MEDIUM | — |
| LH-011 | Numeric Columns Use Proper Types | Data Quality | MEDIUM | — |
| LH-012 | No Excessively Wide Tables | Maintainability | LOW | — |
| LH-013 | Schema Has NOT NULL Constraints | Data Quality | MEDIUM | — |
| LH-014 | Tables Have Audit Columns | Maintainability | LOW | 📓 Notebook |
| LH-015 | Consistent Date Types Per Table | Data Quality | LOW | — |
| LH-S01 | No Unprotected Sensitive Data | Security | HIGH | — |
| LH-S02 | Large Tables Identified | Performance | INFO | — |
| LH-S03 | No Deprecated Data Types | Maintainability | HIGH | — |
| LH-S04 | All Tables Have Key Columns | Data Quality | MEDIUM | 📓 Notebook |
| LH-016 | Large Tables Are Partitioned | Performance | MEDIUM | — |
| LH-017 | Regular VACUUM Executed | Maintenance | MEDIUM | 🔧 REST API |
| LH-018 | Regular OPTIMIZE Executed | Performance | MEDIUM | 🔧 REST API |
| LH-019 | No Small File Problem | Performance | HIGH | 🔧 REST API |
| LH-020 | Auto-Optimize Enabled | Performance | MEDIUM | 📓 Notebook |
| LH-021 | Retention Policy Configured | Maintenance | LOW | 📓 Notebook |
| LH-022 | Delta Log Version Count Reasonable | Performance | LOW | 🔧 REST API |
| LH-023 | Low Write Amplification | Performance | MEDIUM | — |
| LH-024 | Data Skipping Configured | Performance | LOW | 📓 Notebook |
| LH-025 | Z-Order on Large Tables | Performance | MEDIUM | 🔧 REST API |

🏗️ Warehouse Rules

| Rule ID | Rule | Category | Severity | Auto-Fix |
|---------|------|----------|----------|----------|
| WH-001 | Primary Keys Defined | Data Quality | HIGH | 🔧 SQL |
| WH-002 | No Deprecated Data Types | Maintainability | HIGH | — |
| WH-003 | No Float/Real Precision Issues | Data Quality | MEDIUM | — |
| WH-004 | No Over-Provisioned Columns | Performance | MEDIUM | — |
| WH-005 | Column Naming Convention | Maintainability | LOW | — |
| WH-006 | Table Naming Convention | Maintainability | LOW | — |
| WH-007 | No SELECT * in Views | Maintainability | LOW | — |
| WH-008 | Statistics Are Fresh | Performance | MEDIUM | 🔧 SQL |
| WH-009 | No Disabled Constraints | Data Quality | MEDIUM | 🔧 SQL |
| WH-010 | Key Columns Are NOT NULL | Data Quality | HIGH | — |
| WH-011 | No Empty Tables | Maintainability | MEDIUM | — |
| WH-012 | No Excessively Wide Tables | Maintainability | MEDIUM | — |
| WH-013 | Consistent Date Types | Data Quality | LOW | — |
| WH-014 | Foreign Keys Defined | Maintainability | MEDIUM | — |
| WH-015 | No Large BLOB Columns | Performance | MEDIUM | — |
| WH-016 | Tables Have Audit Columns | Maintainability | LOW | 🔧 SQL |
| WH-017 | No Circular Foreign Keys | Data Quality | HIGH | — |
| WH-018 | Sensitive Data Protected | Security | HIGH | 🔧 SQL |
| WH-019 | Row-Level Security | Security | MEDIUM | — |
| WH-020 | Minimal db_owner Privileges | Security | MEDIUM | — |
| WH-021 | No Over-Complex Views | Maintainability | LOW | — |
| WH-022 | Minimal Cross-Schema Dependencies | Maintainability | LOW | — |
| WH-023 | No Very Slow Queries | Performance | HIGH | — |
| WH-024 | No Frequently Slow Queries | Performance | HIGH | — |
| WH-025 | No Recent Query Failures | Reliability | MEDIUM | — |
| WH-026 | AUTO_UPDATE_STATISTICS Enabled | Performance | HIGH | 🔧 SQL |
| WH-027 | Result Set Caching Enabled | Performance | MEDIUM | 🔧 SQL |
| WH-028 | Snapshot Isolation Enabled | Concurrency | MEDIUM | 🔧 SQL |
| WH-029 | Page Verify CHECKSUM | Reliability | MEDIUM | 🔧 SQL |
| WH-030 | ANSI Settings Correct | Standards | LOW | 🔧 SQL |
| WH-031 | Database ONLINE | Availability | HIGH | — |
| WH-032 | All Tables Have Statistics | Performance | MEDIUM | 🔧 SQL |
| WH-033 | Optimal Data Types | Performance | MEDIUM | — |
| WH-034 | No Near-Empty Tables | Maintainability | LOW | — |
| WH-035 | Stored Procedures Documented | Maintainability | LOW | — |
| WH-036 | NOT NULL Columns Have Defaults | Data Quality | MEDIUM | 🔧 SQL |
| WH-037 | Consistent String Types | Maintainability | LOW | — |
| WH-038 | Schemas Are Documented | Maintainability | LOW | — |
| WH-039 | Query Performance Healthy | Performance | MEDIUM | — |

📊 Eventhouse Rules

| Rule ID | Rule | Category | Severity | Auto-Fix |
|---------|------|----------|----------|----------|
| EH-001 | Query Endpoint Available | Availability | HIGH | — |
| EH-002 | No Extent Fragmentation | Performance | HIGH | 🔧 KQL |
| EH-003 | Good Compression Ratio | Performance | MEDIUM | — |
| EH-004 | Caching Policy Configured | Performance | MEDIUM | 🔧 KQL |
| EH-005 | Retention Policy Configured | Data Management | MEDIUM | 🔧 KQL |
| EH-006 | Materialized Views Healthy | Reliability | HIGH | 🔧 KQL |
| EH-007 | Data Is Fresh | Data Quality | MEDIUM | — |
| EH-008 | No Slow Query Patterns | Performance | HIGH | — |
| EH-009 | No Recent Failed Commands | Reliability | MEDIUM | — |
| EH-010 | No Ingestion Failures | Reliability | HIGH | — |
| EH-011 | Streaming Ingestion Config | Performance | INFO | — |
| EH-012 | Continuous Exports Healthy | Reliability | MEDIUM | — |
| EH-013 | Hot Cache Coverage | Performance | MEDIUM | — |
| EH-014 | Ingestion Batching Configured | Performance | LOW | 🔧 KQL |
| EH-015 | Update Policies Configured | Data Management | INFO | — |
| EH-016 | Partitioning on Large Tables | Performance | MEDIUM | 🔧 KQL |
| EH-017 | Merge Policy Configured | Performance | LOW | 🔧 KQL |
| EH-018 | Encoding Policy for Poorly Compressed | Performance | MEDIUM | — |
| EH-019 | Row Order Policy | Performance | LOW | — |
| EH-020 | Stored Functions Inventory | Data Management | INFO | — |

📐 Semantic Model Rules

| Rule ID | Rule | Category | Severity | Auto-Fix |
|---------|------|----------|----------|----------|
| SM-001 | Avoid IFERROR Function | DAX | MEDIUM | 📓 Notebook |
| SM-002 | Use DIVIDE Function | DAX | MEDIUM | 📓 Notebook |
| SM-003 | No EVALUATEANDLOG in Production | DAX | HIGH | 📓 Notebook |
| SM-004 | Use TREATAS not INTERSECT | DAX | MEDIUM | — |
| SM-005 | No Duplicate Measure Definitions | DAX | LOW | — |
| SM-006 | Filter by Columns Not Tables | DAX | MEDIUM | 📓 Notebook |
| SM-007 | Avoid Adding 0 to Measures | DAX | LOW | — |
| SM-008 | Measures Have Documentation | Maintenance | LOW | 🔧 model.bim + 📓 |
| SM-009 | Model Has Tables | Maintenance | HIGH | — |
| SM-010 | Model Has Date Table | Performance | MEDIUM | 🔧 model.bim |
| SM-011 | Avoid 1-(x/y) Syntax | DAX | MEDIUM | — |
| SM-012 | No Direct Measure References | DAX | LOW | — |
| SM-013 | Avoid Nested CALCULATE | DAX | MEDIUM | — |
| SM-014 | Use SUM Instead of SUMX | DAX | LOW | — |
| SM-015 | Measures Have Format String | Formatting | LOW | 🔧 model.bim |
| SM-016 | Avoid FILTER(ALL(...)) | DAX | MEDIUM | — |
| SM-017 | Measure Naming Convention | Formatting | LOW | — |
| SM-018 | Reasonable Table Count | Performance | LOW | — |
| SM-B01 | No High Cardinality Text Columns | Data Types | HIGH | — |
| SM-B02 | No Description/Comment Columns | Data Types | HIGH | — |
| SM-B03 | No GUID/UUID Columns | Data Types | HIGH | — |
| SM-B04 | No Constant Columns | Data Types | MEDIUM | — |
| SM-B05 | No Booleans Stored as Text | Data Types | MEDIUM | — |
| SM-B06 | No Dates Stored as Text | Data Types | MEDIUM | — |
| SM-B07 | No Numbers Stored as Text | Data Types | MEDIUM | — |
| SM-B08 | Integer Keys Not String Keys | Data Types | MEDIUM | — |
| SM-B09 | No Excessively Wide Tables | Data Types | MEDIUM | — |
| SM-B10 | No Extremely Wide Tables | Data Types | HIGH | — |
| SM-B11 | No Multiple High-Cardinality Columns | Data Types | HIGH | — |
| SM-B12 | No Single Column Tables | Data Types | LOW | — |
| SM-B13 | No High-Precision Timestamps | Data Types | MEDIUM | — |
| SM-B14 | No Low Cardinality in Fact Tables | Data Types | LOW | — |


πŸ—οΈ Architecture

src/
β”œβ”€β”€ index.ts                    MCP server entry point (stdio transport)
β”œβ”€β”€ auth/
β”‚   └── fabricAuth.ts           Azure auth (CLI, browser, device code, SP)
β”œβ”€β”€ clients/
β”‚   β”œβ”€β”€ fabricClient.ts         Fabric REST API + DAX + model.bim CRUD
β”‚   β”œβ”€β”€ sqlClient.ts            SQL via tedious (Lakehouse + Warehouse)
β”‚   β”œβ”€β”€ kqlClient.ts            KQL/Kusto REST API (Eventhouse)
β”‚   β”œβ”€β”€ onelakeClient.ts        OneLake ADLS Gen2 + Delta Log parser
β”‚   └── xmlaClient.ts           XMLA SOAP client (experimental)
└── tools/
    β”œβ”€β”€ ruleEngine.ts           Shared RuleResult type + unified renderer
    β”œβ”€β”€ auth.ts                 auth_login, auth_status, auth_logout
    β”œβ”€β”€ workspace.ts            workspace_list
    β”œβ”€β”€ lakehouse.ts            29 rules + table maintenance
    β”œβ”€β”€ warehouse.ts            39 rules + 12 auto-fixes
    β”œβ”€β”€ eventhouse.ts           20 rules + 7 auto-fixes (with dry-run)
    └── semanticModel.ts        32 rules + 6 auto-fixes (model.bim)
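The shared rule plumbing in `ruleEngine.ts` can be pictured roughly as follows. This is a sketch only: the field and function names are assumptions that mirror the output columns shown earlier (Rule, Status, Finding, Recommendation), not the actual source:

```typescript
// Hypothetical shape of the shared rule result; names are assumptions.
type Status = "passed" | "failed" | "warning";

interface RuleResult {
  id: string;              // e.g. "LH-007"
  name: string;            // e.g. "Key Columns Are NOT NULL"
  status: Status;
  finding?: string;        // populated only when the rule did not pass
  recommendation?: string;
}

// Render a summary line like: "29 rules — ✅ 18 passed | 🔴 1 failed | 🟡 10 warning"
function summarize(results: RuleResult[]): string {
  const count = (s: Status) => results.filter((r) => r.status === s).length;
  return `${results.length} rules — ✅ ${count("passed")} passed | 🔴 ${count("failed")} failed | 🟡 ${count("warning")} warning`;
}
```

A single result type lets all four item scanners (lakehouse, warehouse, eventhouse, semantic model) share one renderer for the unified output table.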

πŸ” Authentication

Method

Use Case

azure_cli

Recommended β€” uses your az login session

interactive_browser

Opens browser for interactive login

device_code

Headless/remote environments

vscode

Uses VS Code Azure account

service_principal

CI/CD (requires tenantId, clientId, clientSecret)

default

Auto-detect best available method

📄 License

MIT
