mbrummerstedt

PowerBI Analyst MCP

execute_dax

Run DAX queries on Power BI datasets to extract data, with results returned as JSON or saved to CSV files for large outputs.

Instructions

Execute a DAX query against a Power BI dataset and return the result rows.

The query must start with EVALUATE (standard DAX query syntax). Results are returned as a JSON array of objects, with column names as keys.

Small results (<= 50 rows) are returned inline as JSON. Large results (> 50 rows) are automatically saved to a CSV file and a compact summary is returned with the file path, column names, row count, and a preview of the first 5 rows. Use read_query_result to page through a saved CSV, or read the file directly.

Every successful execution is logged to a local history file for auditability and cross-session reuse. Use search_query_history to find prior queries. The query_summary parameter makes history search much more effective — always provide it when you can.
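The spillover behavior described above can be sketched roughly as follows (a minimal illustration with hypothetical helper and constant names; the actual server implementation may differ):

```python
import csv
import os
import tempfile

INLINE_ROW_LIMIT = 50  # results at or below this size come back inline
PREVIEW_ROWS = 5       # rows included in the summary for large results

def package_result(rows, columns, result_name="query_result"):
    """Return small results inline; spill large ones to CSV with a summary."""
    if len(rows) <= INLINE_ROW_LIMIT:
        return {"mode": "inline", "rows": rows}

    # Large result: write a CSV file and return a compact summary instead.
    path = os.path.join(tempfile.gettempdir(), f"{result_name}.csv")
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=columns)
        writer.writeheader()
        writer.writerows(rows)

    return {
        "mode": "csv",
        "file_path": path,
        "columns": columns,
        "row_count": len(rows),
        "preview": rows[:PREVIEW_ROWS],
    }
```

The summary shape mirrors what the description promises: file path, column names, row count, and a five-row preview.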

Limitations imposed by the Power BI API:

  • Maximum 1,000,000 values or 100,000 rows per query.

  • Rate limit: 120 requests per minute per user.

  • Only DAX is supported; MDX and DMV queries are not.

  • The tenant setting "Dataset Execute Queries REST API" must be enabled.
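If you drive this tool programmatically, the 120 requests/minute limit can be respected with a simple client-side throttle. This is an illustrative sketch, not part of the server:

```python
import time
from collections import deque

class RateLimiter:
    """Sliding-window limiter: at most max_calls per period seconds."""

    def __init__(self, max_calls=120, period=60.0):
        self.max_calls = max_calls
        self.period = period
        self.calls = deque()  # timestamps of recent calls

    def acquire(self):
        now = time.monotonic()
        # Drop timestamps that have fallen out of the window.
        while self.calls and now - self.calls[0] >= self.period:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            # Sleep until the oldest call leaves the window.
            time.sleep(self.period - (now - self.calls[0]))
        self.calls.append(time.monotonic())
```

Calling `acquire()` before each query blocks just long enough to stay under the quota rather than failing on a 429.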

Tips:

  • Use TOPN or FILTER to limit large result sets.

  • Use SUMMARIZECOLUMNS for aggregated queries.

  • Use CALCULATETABLE for filtered table expressions.

  • Use max_rows to sample a large table without rewriting the DAX.

  • Use result_name to give the saved CSV a meaningful filename.
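Putting the tips together, a call's arguments might look like the following sketch. The `'Sales'` table and its columns are invented example names, not part of any real dataset, and the GUID placeholders are left unfilled:

```python
# Hypothetical arguments for an execute_dax call; 'Sales' and its
# columns are invented for illustration only.
dax_query = """
EVALUATE
TOPN(
    100,
    SUMMARIZECOLUMNS(
        'Sales'[Region],
        "Total Revenue", SUM('Sales'[Revenue])
    ),
    [Total Revenue], DESC
)
""".strip()

arguments = {
    "workspace_id": "<workspace-guid>",
    "dataset_id": "<dataset-guid>",
    "dax_query": dax_query,
    "max_rows": 100,                           # sample without rewriting the DAX
    "result_name": "top_regions_by_revenue",   # meaningful CSV filename
    "query_summary": "Top 100 regions by total revenue",  # aids history search
}
```

Note the query starts with EVALUATE, uses TOPN to bound the result set, and supplies query_summary so the run is findable later via search_query_history.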

Input Schema

Name          | Required | Description | Default
workspace_id  | Yes      |             |
dataset_id    | Yes      |             |
dax_query     | Yes      |             |
max_rows      | No       |             |
result_name   | No       |             |
query_summary | No       |             |

Output Schema

Name   | Required | Description | Default
result | Yes      |             |
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and excels: it discloses the 50-row threshold for CSV spillover, the API limits (1M values/100K rows, 120 req/min), the tenant setting requirement, audit logging behavior, and result format specifics (JSON array vs. CSV summary).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with logical sections (purpose, result handling, history, limitations, tips). Front-loaded with the core purpose. Lengthy but justified by complexity and lack of annotations/schema docs; no wasted sentences.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a complex query tool with 0% schema description coverage. Covers authentication prerequisites (the tenant setting), pagination strategy, rate limiting, and result handling patterns. Combined with the output schema, the dual-mode return explanation (inline JSON vs. CSV) provides the necessary context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% schema description coverage, the description compensates effectively for four of six parameters: dax_query (EVALUATE requirement), max_rows (sampling purpose), result_name (CSV naming), and query_summary (history searchability). workspace_id and dataset_id are implied but not explicitly described.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The opening sentence provides a specific verb (Execute), resource (DAX query against Power BI dataset), and output (result rows). It clearly distinguishes this from metadata siblings like list_tables or list_datasets by focusing on arbitrary DAX query execution.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly references sibling tools read_query_result (for paging large CSVs) and search_query_history (for finding prior queries). Also clarifies when not to use (MDX/DMV not supported) and provides DAX pattern tips. Could be improved by contrasting with get_dataset_info for metadata vs data retrieval.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
