Microsoft Fabric MCP Server
Server Quality Checklist
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v1.0.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
- This server provides 6 tools.
No known security issues or vulnerabilities reported.
Tool Scores
- create_notebook
Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Create' implies a write/mutation operation, the description doesn't disclose important behavioral traits: whether this requires specific permissions, what happens on failure, whether notebooks can be overwritten, or any rate limits. For a creation tool with zero annotation coverage, this leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that states the tool's purpose without unnecessary words. It's appropriately sized and front-loaded with the essential information. Every word earns its place, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a creation/mutation tool with no annotations and no output schema, the description is incomplete. It doesn't explain what happens after creation (e.g., returns notebook ID, success/failure indicators), doesn't mention error conditions, and provides minimal behavioral context. For a tool that creates resources, more information about the operation's behavior and outcomes would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with all three parameters clearly documented in the schema itself. The description adds no additional parameter semantics beyond what the schema already provides. According to scoring rules, when schema coverage is high (>80%), the baseline is 3 even with no parameter information in the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create') and resource ('new notebook in Fabric workspace'), making the purpose immediately understandable. It doesn't differentiate from sibling tools, but since none of the listed siblings appear to be notebook creation tools, this isn't a significant gap. The description avoids tautology by specifying what's being created and where.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (like needing workspace access), when not to use it, or what alternatives might exist for similar functionality. The agent must infer usage context solely from the tool name and description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- execute_dax_query
Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the action but lacks details on permissions required, rate limits, whether the query is read-only or modifies data, error handling, or expected response format. This is inadequate for a tool that executes queries.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero wasted words. It's front-loaded with the core action and resource, making it easy to parse quickly.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of executing queries and lack of annotations or output schema, the description is incomplete. It doesn't cover behavioral aspects like safety, performance, or return values, leaving significant gaps for an AI agent to understand how to use this tool effectively.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the input schema fully documents both parameters ('datasetId' and 'query'). The description adds no additional meaning beyond what's in the schema, such as query syntax examples or dataset ID sourcing. Baseline 3 is appropriate as the schema handles parameter documentation.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Execute a DAX query') and target resource ('on a Power BI dataset'), making the purpose immediately understandable. However, it doesn't differentiate this tool from sibling tools like 'refresh_dataset' or 'get_powerbi_datasets' in terms of specific use cases or scope.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites (e.g., needing a dataset ID from 'get_powerbi_datasets'), appropriate contexts, or limitations compared to siblings like 'create_notebook' for data analysis.
- refresh_dataset
Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden for behavioral disclosure. 'Refresh' implies a mutation/write operation, but the description doesn't disclose whether this requires specific permissions, whether it's asynchronous/synchronous, what happens to dependent reports, or potential rate limits. For a mutation tool with zero annotation coverage, this is inadequate.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero wasted words. It's appropriately sized for a simple tool with one parameter and gets straight to the point without unnecessary elaboration.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description is insufficient. It doesn't explain what 'refresh' entails (full/incremental, triggers recalculation), what the response looks like (success/failure indicators), or error conditions. Given the complexity of dataset refresh operations in Power BI, more context is needed.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the single 'datasetId' parameter completely. The description adds no additional parameter context beyond what's in the schema (like format examples or where to find dataset IDs). Baseline 3 is appropriate when the schema does all the parameter documentation work.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('refresh') and resource ('Power BI dataset'), making the tool's purpose immediately understandable. However, it doesn't differentiate this tool from potential sibling tools like 'execute_dax_query' or 'upload_to_datawarehouse' that might also interact with datasets in different ways.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (like needing an existing dataset), when refresh is appropriate versus other dataset operations, or what happens after refresh. With siblings like 'get_powerbi_datasets' and 'execute_dax_query', this gap is significant.
- upload_to_datawarehouse
Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden for behavioral disclosure. It states the action ('Upload') which implies a write/mutation operation, but doesn't disclose critical traits like required permissions, whether data is appended/replaced, rate limits, error handling, or what happens on success/failure. This leaves significant gaps for a tool that modifies data.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero wasted words. It's appropriately sized for a tool with clear purpose and good schema documentation. Every word earns its place by conveying the essential action and target.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a data mutation tool with no annotations and no output schema, the description is incomplete. It doesn't explain what happens after upload (success confirmation, error responses), data format requirements, or system constraints. The agent lacks critical context needed to use this tool effectively in production scenarios.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so all parameters are documented in the schema. The description adds no additional meaning about parameters beyond what's in the schema descriptions. It doesn't explain the relationship between workspaceId/warehouseId/tableName or provide examples of the data array format. Baseline 3 is appropriate when schema does the heavy lifting.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Upload') and target resource ('data to a Fabric Data Warehouse'), providing a specific verb+resource combination. However, it doesn't differentiate from sibling tools like 'create_notebook' or 'execute_dax_query', which are distinct operations but could be related in a data workflow context.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, when-not scenarios, or how it relates to sibling tools like 'refresh_dataset' or 'execute_dax_query' in a data pipeline context. The agent must infer usage from the tool name alone.
- get_powerbi_datasets
Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool retrieves datasets but doesn't mention critical details like whether it's a read-only operation, if it requires authentication, potential rate limits, or what the return format looks like (e.g., list, pagination). This leaves significant gaps for a tool interacting with a data service like Power BI.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence that front-loads the essential information ('Get all Power BI datasets in the workspace') with zero waste. It's appropriately sized for a simple tool with no parameters, making it easy for an agent to parse quickly.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of Power BI operations and the lack of annotations and output schema, the description is incomplete. It doesn't explain what 'datasets' entail, how results are returned (e.g., JSON structure, error handling), or prerequisites like workspace access. For a tool in a data analytics context, more detail is needed to ensure correct usage.
Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so there's no need for parameter documentation in the description. The description appropriately doesn't discuss parameters, which is efficient and avoids redundancy. A baseline of 4 is applied since no parameters exist, and the description doesn't add unnecessary information.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and resource ('all Power BI datasets in the workspace'), making the tool's purpose immediately understandable. However, it doesn't differentiate from potential sibling tools like 'get_workspaces' or 'refresh_dataset', which would require more specificity for a perfect score.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get_workspaces' (which might list workspaces rather than datasets) or 'refresh_dataset' (which modifies datasets). It lacks explicit when-to-use or when-not-to-use instructions, leaving the agent to infer context from tool names alone.
- get_workspaces
Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but offers minimal behavioral insight. It states what the tool does but doesn't disclose traits like whether it requires authentication, returns paginated results, includes filtering options, or has rate limits. This leaves significant gaps for a read operation.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no wasted words. It front-loads the core action and resource, making it easy to parse quickly. Every word contributes directly to understanding the tool's purpose.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema), the description is minimally adequate but lacks depth. Without annotations or output schema, it doesn't explain what 'Get all' entails (e.g., format, scope, limitations), leaving the agent to infer behavior from the name alone.
Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, and schema description coverage is 100%, so no parameter documentation is needed. The description appropriately doesn't add parameter details, earning a baseline score of 4 for matching the schema's simplicity.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'all Fabric/Power BI workspaces', making the purpose unambiguous. It doesn't explicitly differentiate from sibling tools like 'get_powerbi_datasets', but the resource specificity (workspaces vs datasets) provides implicit distinction.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. While the description implies it retrieves workspace information, it doesn't specify use cases, prerequisites, or contrast with sibling tools like 'create_notebook' or 'refresh_dataset' that might operate on workspaces.
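To illustrate what the reviews above repeatedly ask for, here is a hypothetical rewrite of the create_notebook definition. The wording is ours, not the server's: the behavioral claims (permissions, overwrite semantics, return value) are illustrative assumptions that show the shape of a description scoring well on Behavior, Completeness, and Usage Guidelines.

```python
# Hypothetical improved tool definition. The specific behaviors stated
# below are illustrative assumptions, not documented behavior of this
# server; substitute the real semantics before using such a description.
create_notebook = {
    "name": "create_notebook",
    "description": (
        "Create a new notebook in a Fabric workspace. "
        "Requires write access to the target workspace and fails with an "
        "error if the caller lacks permissions; existing notebooks are "
        "never overwritten. Returns the new notebook's ID on success. "
        "Call get_workspaces first to obtain a valid workspace ID."
    ),
}
```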
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy the card badge snippet from the server page into your README.md.
Score Badge
Copy the score badge snippet from the server page into your README.md.
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you must first add a glama.json file to the root of your repository:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}
```

Then, authenticate using GitHub.
Browse examples.
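If you want to sanity-check the file before committing, here is a minimal sketch that fetches the schema URL referenced in the file and validates against it. The requests and jsonschema packages are assumptions; any JSON Schema validator works.

```python
import json

import requests                  # assumed available: pip install requests
from jsonschema import validate  # assumed available: pip install jsonschema

# Load the local config file.
with open("glama.json") as f:
    config = json.load(f)

# Fetch the schema the file declares, then validate against it.
schema = requests.get(config["$schema"], timeout=10).json()
validate(instance=config, schema=schema)  # raises ValidationError on failure
print("glama.json is valid")
```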
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
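To make the arithmetic concrete, here is a minimal sketch of the calculation in Python, using the dimension weights above and the scores reported for this server's six tools. The function and variable names are ours, and the coherence value is a placeholder, since that score is not itemized on this page.

```python
# Dimension weights for the Tool Definition Quality Score (TDQS).
WEIGHTS = {
    "purpose": 0.25,       # Purpose Clarity
    "usage": 0.20,         # Usage Guidelines
    "behavior": 0.20,      # Behavioral Transparency
    "parameters": 0.15,    # Parameter Semantics
    "conciseness": 0.10,   # Conciseness & Structure
    "completeness": 0.10,  # Contextual Completeness
}

def tdqs(scores):
    """Weighted 1-5 score for a single tool."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

def definition_quality(per_tool):
    """Server-level score: 60% mean + 40% minimum, so one weak tool drags it down."""
    return 0.6 * (sum(per_tool) / len(per_tool)) + 0.4 * min(per_tool)

def overall(per_tool, coherence):
    """Overall quality: 70% definition quality + 30% server coherence."""
    return 0.7 * definition_quality(per_tool) + 0.3 * coherence

def tier(score):
    """A (>=3.5), B (>=3.0), C (>=2.0), D (>=1.0), F (<1.0)."""
    for cutoff, grade in ((3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")):
        if score >= cutoff:
            return grade
    return "F"

# Scores reported above; four tools share the lower profile.
low = {"purpose": 4, "usage": 2, "behavior": 2,
       "parameters": 3, "conciseness": 5, "completeness": 2}
tools = [
    low, low, low, low,  # create_notebook, execute_dax_query, refresh_dataset, upload_to_datawarehouse
    {"purpose": 4, "usage": 2, "behavior": 2,
     "parameters": 4, "conciseness": 5, "completeness": 2},  # get_powerbi_datasets
    {"purpose": 4, "usage": 2, "behavior": 2,
     "parameters": 4, "conciseness": 5, "completeness": 3},  # get_workspaces
]

per_tool = [tdqs(t) for t in tools]
print(round(per_tool[0], 2))                   # 2.95 for each of the four lower-scoring tools
print(round(definition_quality(per_tool), 2))  # ~2.99: the minimum term caps the server score

# Coherence is not itemized on this page; 3.0 below is a placeholder input.
score = overall(per_tool, coherence=3.0)
print(round(score, 2), tier(score))            # ~2.99 C with this placeholder
```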
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/snahrup/microsoft-fabric-mcp'
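For programmatic use, the same endpoint can be called from code. Below is a minimal sketch using Python's requests package; the response is JSON, but its exact field layout is not documented on this page, so the code only prints the payload for inspection.

```python
import requests  # assumed available: pip install requests

resp = requests.get(
    "https://glama.ai/api/mcp/v1/servers/snahrup/microsoft-fabric-mcp",
    timeout=10,
)
resp.raise_for_status()  # surface HTTP errors early
print(resp.json())       # inspect the returned server metadata
```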
If you have feedback or need assistance with the MCP directory API, please join our Discord server.