Power Automate MCP Server by Flow Studio
Server Details
Debug, build, and manage Power Automate cloud flows with AI agents
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: ninihen1/power-automate-mcp-skills
- GitHub Stars: 8
- Server Listing: Flow Studio - Power Automate MCP Server
Tool Definition Quality
Average 4.1/5 across 26 of 29 tools scored. Lowest: 3.1/5.
Most tools have distinct purposes, clearly separated between live API operations and cached store operations, with specific actions like get, list, update, trigger, cancel, and resubmit. However, some overlap exists between get_live_flow_trigger_url and get_store_flow_trigger_url, which serve similar purposes but differ in data source, potentially causing confusion if an agent doesn't carefully note the 'live' vs 'store' distinction.
Tool names follow a highly consistent verb_noun pattern with clear prefixes (e.g., get_live_flow, list_store_flows, update_live_flow). The 'live' and 'store' prefixes effectively differentiate between real-time API calls and cached data, maintaining a predictable structure throughout all 29 tools.
With 29 tools, the count is borderline high for a single server, potentially overwhelming for agents. While the scope covers Power Automate management comprehensively, the number could be streamlined by merging some overlapping functionalities (e.g., combining get_live_flow_runs and get_store_flow_runs into a single tool with a source parameter).
The toolset provides complete coverage for Power Automate flow management, including CRUD operations (create via update_live_flow, read via various get and list tools, update via update_live_flow and update_store_flow, delete implied via state control), lifecycle management (trigger, cancel, resubmit runs), and monitoring (error details, run history). Both live and cached perspectives are well-covered, leaving no obvious gaps for the domain.
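The consolidation suggested above could be sketched as a single tool definition with a `source` parameter. The schema below is purely hypothetical; the server actually exposes `get_live_flow_runs` and `get_store_flow_runs` as separate tools.

```python
# Hypothetical merged tool definition illustrating the streamlining idea;
# not a tool this server actually exposes.
merged_tool = {
    "name": "get_flow_runs",
    "description": "Fetch run history for a flow from either the live PA API or the cached store.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "source": {
                "type": "string",
                "enum": ["live", "store"],  # replaces the live_/store_ tool-name prefix
                "description": "Where to read run history from.",
            },
            "flowName": {"type": "string", "description": "Name (ID) of the flow."},
            "environmentName": {"type": "string", "description": "Name of the Power Platform environment."},
        },
        "required": ["source", "flowName", "environmentName"],
    },
}
```

One `source` enum collapses two tools into one without losing the live/store distinction the naming convention currently carries.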
Available Tools
29 tools

add_live_flow_to_solution
Migrate a non-solution Power Automate flow into a solution via the admin migrateFlows API. If the flow is already part of a solution, returns an error message without attempting migration. solutionId is optional — omit to migrate into the default solution.
| Name | Required | Description | Default |
|---|---|---|---|
| flowName | Yes | Name (ID) of the flow to migrate into the solution. | |
| solutionId | No | Target solution ID. Omit to migrate into the default solution. | |
| environmentName | Yes | Name of the Power Platform environment. | |
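The optional-parameter behavior the description documents can be shown with two argument payloads. The flow ID and environment name below are illustrative placeholders, not values from this listing.

```python
# Hypothetical call arguments for add_live_flow_to_solution.
# Omitting solutionId migrates the flow into the default solution.
migrate_to_default = {
    "flowName": "3f2c9a1e-0000-0000-0000-000000000000",  # placeholder flow ID
    "environmentName": "Default-contoso",                # placeholder environment
    # solutionId omitted: default solution is the target
}
# Supplying solutionId targets a specific solution instead.
migrate_to_specific = dict(migrate_to_default, solutionId="my-solution-id")
```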
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the migration action, error handling for already-solution flows, and the optional parameter behavior. However, it doesn't mention authentication requirements, rate limits, or what happens to the original flow post-migration, leaving some behavioral aspects uncovered.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states the core purpose and error condition, the second clarifies the optional parameter. Every sentence adds essential information with zero waste, making it easy to parse and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description does well by covering purpose, usage conditions, and key behaviors. However, it lacks details on the migration outcome (e.g., what happens to the original flow, success/failure responses) and doesn't mention prerequisites like permissions or environment access, leaving some contextual gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds minimal value beyond the schema by mentioning that 'solutionId is optional — omit to migrate into the default solution,' which slightly reinforces what the schema says about 'solutionId.' This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Migrate a non-solution Power Automate flow into a solution') and resource ('via the admin migrateFlows API'), distinguishing it from sibling tools like 'update_live_flow' or 'set_live_flow_state' which modify flow properties rather than moving flows between solutions. It explicitly mentions the migration context and API method.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use ('Migrate a non-solution Power Automate flow') and when not to use ('If the flow is already part of a solution, returns an error message without attempting migration'). It also clarifies the optional nature of 'solutionId' and the default behavior when omitted, giving clear context for parameter usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cancel_live_flow_run
Cancel a currently running Power Automate flow run via the live PA API. Use after get_live_flow_runs to obtain the run name. Only runs with status "Running" can be cancelled.
| Name | Required | Description | Default |
|---|---|---|---|
| runName | Yes | Run identifier (name field from get_live_flow_runs). | |
| flowName | Yes | Name (ID) of the flow. | |
| environmentName | Yes | Name of the Power Platform environment. | |
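The documented sequence (list runs first, then cancel only runs still in "Running" status) can be sketched as below. The run records are illustrative stand-ins for what `get_live_flow_runs` might return; the real response format is not documented in this listing.

```python
# Illustrative run records, shaped like the name/status fields the
# get_live_flow_runs description says it returns.
runs = [
    {"name": "08585287000000000000", "status": "Running"},
    {"name": "08585287000000000001", "status": "Succeeded"},
]

# Only "Running" runs can be cancelled, per the tool description.
cancellable = [r["name"] for r in runs if r["status"] == "Running"]

cancel_args = {
    "runName": cancellable[0],            # name field from get_live_flow_runs
    "flowName": "my-flow-id",             # placeholder
    "environmentName": "Default-contoso", # placeholder
}
```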
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively communicates that this is a destructive action (cancelling a run) and specifies constraints (only for 'Running' status), but lacks details on permissions, error handling, or response format. It adds useful context but doesn't fully compensate for the absence of annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by usage guidance and constraints in two concise sentences. Every sentence earns its place by adding critical information without redundancy, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a destructive operation with no annotations or output schema), the description is reasonably complete. It covers purpose, prerequisites, and constraints, but could improve by mentioning authentication needs or potential side effects. It's adequate for a cancellation tool but has minor gaps in behavioral context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters. The description adds minimal value by mentioning 'runName' comes from 'get_live_flow_runs', but doesn't elaborate on parameter interactions or usage beyond what the schema provides. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Cancel') and target resource ('a currently running Power Automate flow run via the live PA API'), distinguishing it from siblings like 'resubmit_live_flow_run' or 'set_live_flow_state'. It precisely defines the verb and resource without being vague or tautological.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides when to use this tool ('Use after get_live_flow_runs to obtain the run name') and when not to ('Only runs with status "Running" can be cancelled'), offering clear prerequisites and exclusions. It also implicitly distinguishes from alternatives like 'resubmit_live_flow_run' by focusing on cancellation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_live_connector_experimental
Fetch a connector's live OpenAPI metadata from the Power Platform API and return the raw response body as-is. No normalization or field mapping is applied.
| Name | Required | Description | Default |
|---|---|---|---|
| connectorName | Yes | Connector logical name (for example shared_teams) or full api id path (/providers/Microsoft.PowerApps/apis/shared_teams). | |
| environmentName | Yes | Name of the Power Platform environment. | |
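The schema accepts two identifier forms for `connectorName`. Both payloads below are illustrative and should resolve to the same connector, since the full api id path simply embeds the logical name as its last segment.

```python
# Two equivalent hypothetical argument payloads for get_live_connector_experimental.
by_logical_name = {
    "connectorName": "shared_teams",
    "environmentName": "Default-contoso",  # placeholder
}
by_api_id_path = {
    "connectorName": "/providers/Microsoft.PowerApps/apis/shared_teams",
    "environmentName": "Default-contoso",  # placeholder
}

# The path form ends with the logical name.
assert by_api_id_path["connectorName"].endswith(by_logical_name["connectorName"])
```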
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It clearly indicates this is a read operation ('fetch') and specifies that no normalization or field mapping is applied, which is useful context. However, it lacks details on error handling, rate limits, authentication needs, or response format, leaving gaps for a tool interacting with an external API.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose and key behavioral trait (raw response). It is front-loaded with the main action and resource, with no redundant or wasted words, making it highly concise and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (fetching from an external API), lack of annotations, and no output schema, the description is partially complete. It covers the core purpose and raw data return but omits details on error cases, response structure, or API-specific behaviors, which could hinder an agent's ability to handle edge cases effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, fully documenting both required parameters. The description adds no additional parameter-specific information beyond what the schema provides, such as examples or constraints. Since the schema does the heavy lifting, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't detract.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('fetch'), target resource ('connector's live OpenAPI metadata'), and source ('Power Platform API'), distinguishing it from sibling tools that focus on flows, runs, connections, or store resources. It explicitly mentions the raw, unprocessed nature of the response, which sets it apart from tools that might normalize data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying it fetches 'live' metadata and returns raw data, suggesting it's for direct API access rather than processed views. However, it doesn't explicitly state when to use this tool versus alternatives (e.g., other 'get_live_' tools for flows or store variants), nor does it mention prerequisites or exclusions, leaving some ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_live_flow
Fetch the full native Power Automate flow JSON from the PA API, including the complete flow definition (triggers, actions, parameters, outputs). Returns the raw properties object exactly as the PA API returns it. Use this to inspect the full definition before calling update_live_flow with a modified definition.
| Name | Required | Description | Default |
|---|---|---|---|
| flowName | Yes | Name (ID) of the flow. | |
| environmentName | Yes | Name of the Power Platform environment. | |
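The read-modify-write loop the description recommends (inspect with `get_live_flow`, then call `update_live_flow` with a modified definition) can be sketched as follows. The properties structure and the `properties` parameter name are assumptions for illustration, not documented output.

```python
# Assumed shape of the raw properties object get_live_flow returns;
# the real structure is not documented in this listing.
flow = {"definition": {"triggers": {}, "actions": {}}, "displayName": "Old name"}

# Modify a property while keeping the definition intact.
modified = dict(flow, displayName="New name")

update_args = {
    "flowName": "my-flow-id",              # placeholder
    "environmentName": "Default-contoso",  # placeholder
    "properties": modified,                # hypothetical parameter name
}
```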
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: it fetches data from an API, returns raw JSON properties exactly as the API provides, and is intended for inspection purposes. However, it doesn't mention potential limitations like rate limits, authentication requirements, or error handling, which would be helpful for a read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states the purpose and output, the second provides usage guidance. Every sentence adds value without redundancy, and it's front-loaded with the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only tool with 2 parameters and no output schema, the description is largely complete: it explains what the tool does, when to use it, and the nature of the output. However, without annotations or an output schema, it could benefit from more details on the return structure or error cases, though the 'raw properties object' hint is helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with clear documentation for both required parameters ('flowName' and 'environmentName'). The description doesn't add any additional parameter semantics beyond what the schema provides, such as format examples or constraints. The baseline score of 3 is appropriate since the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Fetch the full native Power Automate flow JSON from the PA API') and resource ('flow'), distinguishing it from siblings like 'list_live_flows' (which lists flows) or 'get_live_flow_http_schema' (which fetches only the HTTP schema). It explicitly mentions what is included ('complete flow definition with triggers, actions, parameters, outputs').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Use this to inspect the full definition before calling update_live_flow with a modified definition'), clearly indicating its purpose in a workflow context and distinguishing it from alternatives like 'update_live_flow' or other get_* tools that fetch different data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_live_flow_http_schema
Inspect the HTTP interface of a Power Automate Request-triggered flow: returns the JSON schema the trigger URL expects as the POST body, any required headers, the HTTP method, and the JSON schema(s) defined on any Response action(s) in the flow. All information is read from the live flow definition via the PA API — no test call is made to the trigger URL. Use this before calling trigger_live_flow to understand what body to send.
| Name | Required | Description | Default |
|---|---|---|---|
| flowName | Yes | Name (ID) of the flow. | |
| environmentName | Yes | Name of the Power Platform environment. | |
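The inspect-before-trigger workflow can be sketched with a hypothetical response shape. The listing names what the tool returns (POST body schema, required headers, HTTP method, Response action schemas) but not the key names, so treat every key below as an assumption.

```python
# Hypothetical shape of a get_live_flow_http_schema result for a
# Request-triggered flow; key names are assumptions.
http_schema = {
    "method": "POST",
    "requestSchema": {
        "type": "object",
        "properties": {"orderId": {"type": "string"}},
        "required": ["orderId"],
    },
    "requiredHeaders": {"Content-Type": "application/json"},
    "responseSchemas": [
        {"type": "object", "properties": {"ok": {"type": "boolean"}}},
    ],
}

# An agent would derive the trigger body from requestSchema before
# calling trigger_live_flow.
body = {field: "sample" for field in http_schema["requestSchema"]["required"]}
```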
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure and does well by disclosing key traits: it's a read operation (no test call is made), it retrieves information from the live flow definition via the PA API, and it returns specific HTTP interface details. However, it doesn't mention potential limitations like rate limits, authentication requirements, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with two sentences that each serve clear purposes: the first explains what the tool does and returns, the second provides crucial usage context. There's no wasted language, and key information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only inspection tool with no annotations and no output schema, the description provides substantial context about what information is returned and how it should be used. It could be more complete by mentioning the format of the return data or potential error scenarios, but it covers the essential purpose and usage well given the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters adequately. The description doesn't add any parameter-specific information beyond what's in the schema, maintaining the baseline score of 3 for high schema coverage situations.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Inspect the HTTP interface of a Power Automate Request-triggered flow') and the exact resources returned (JSON schema for POST body, required headers, HTTP method, and response action schemas). It distinguishes from sibling tools by explicitly contrasting with trigger_live_flow and mentioning it reads from the live flow definition via the PA API without making test calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Use this before calling trigger_live_flow to understand what body to send') and distinguishes it from alternatives by stating it inspects without making test calls. It clearly establishes the prerequisite relationship with trigger_live_flow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_live_flow_run_action_outputs
Download inputs and outputs for actions in a flow run via SAS blob links from the live Power Automate API. Without actionName: returns top-level actions (optionally filtered by name). With actionName: calls the PA repetitions endpoint to return every execution of that action across all foreach iterations. Each repetition record includes repetitionIndexes (scope name + itemIndex per nesting level), status, error, and the resolved inputs/outputs blobs. Use iterationIndex to pin to a single iteration (matched against the innermost repetitionIndexes[].itemIndex); omit it to return all repetitions.
| Name | Required | Description | Default |
|---|---|---|---|
| top | No | Max actions or repetitions to return. Paginates automatically. Omit for all. | |
| runName | Yes | Run identifier (name field from get_live_flow_runs). | |
| flowName | Yes | Name (ID) of the flow. | |
| actionName | No | Action name. Without iterationIndex: returns all repetitions of this action across every foreach iteration. With iterationIndex: returns the single repetition matching that iteration. Omit entirely for top-level action list. | |
| iterationIndex | No | Zero-based foreach iteration index. Matched against the innermost repetitionIndexes[].itemIndex in the PA repetition record. Only meaningful when actionName is also set. | |
| environmentName | Yes | Name of the Power Platform environment. | |
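The three calling modes the description documents (top-level listing, all repetitions of one action, a single pinned iteration) map to three argument payloads. The flow and run IDs are illustrative placeholders.

```python
# Shared required arguments; IDs are placeholders.
base = {
    "flowName": "my-flow-id",
    "runName": "08585287000000000000",
    "environmentName": "Default-contoso",
}

# Mode 1: no actionName -> top-level action list.
top_level = dict(base)

# Mode 2: actionName only -> every execution across all foreach iterations.
all_repetitions = dict(base, actionName="Send_an_email")

# Mode 3: actionName + iterationIndex -> the single repetition whose
# innermost repetitionIndexes[].itemIndex matches.
one_iteration = dict(base, actionName="Send_an_email", iterationIndex=2)
```

Note that `iterationIndex` alone is not meaningful; it only filters once `actionName` is set.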
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it's a read operation (implied by 'download'), handles pagination automatically via the 'top' parameter, and details the structure of returned data (repetition records with indexes, status, error, blobs). However, it lacks explicit mention of rate limits, authentication needs, or error handling specifics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core purpose. Each sentence adds necessary detail without redundancy, such as explaining parameter dependencies and data structures. However, it could be slightly more streamlined by combining some clauses for better flow.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (6 parameters, no output schema, no annotations), the description is adequate but has gaps. It covers purpose, usage, and parameter semantics well, but lacks details on output format (e.g., structure of returned blobs), error cases, or performance considerations, which would help an agent handle results more effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds significant value by explaining parameter interactions: it clarifies that actionName and iterationIndex work together, with iterationIndex only meaningful when actionName is set, and that omitting actionName returns top-level actions. This enhances understanding beyond the schema's individual parameter descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('download inputs and outputs') and resources ('actions in a flow run'), and distinguishes it from siblings by specifying it retrieves data via SAS blob links from the live Power Automate API. It explicitly differentiates from tools like get_live_flow_runs (which lists runs) or get_live_flow_run_error (which focuses on errors).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use the tool vs alternatives: it details behavior 'without actionName' (returns top-level actions) and 'with actionName' (calls repetitions endpoint), and explains how iterationIndex modifies results. This gives clear context for selecting parameters based on the user's goal, such as filtering by name or pinning to a single iteration.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_live_flow_run_error
Fetch error details for a specific flow run from the live Power Automate API. Lists every failed action with its error code and message to help diagnose what went wrong.
| Name | Required | Description | Default |
|---|---|---|---|
| top | No | Max actions to return. Paginates automatically. Omit for all. | |
| runName | Yes | Run identifier (name field from get_live_flow_runs). | |
| flowName | Yes | Name (ID) of the flow. | |
| environmentName | Yes | Name of the Power Platform environment. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the output format ('lists every failed action with its error code and message') and hints at pagination ('paginates automatically'), which adds useful context beyond the input schema. However, it doesn't cover aspects like authentication needs, rate limits, error handling, or whether this is a read-only operation (implied by 'fetch' but not explicit). The description compensates partially but leaves gaps for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that front-loads the core purpose ('fetch error details') and efficiently adds context about output and diagnostic use. Every word earns its place, with no redundancy or unnecessary elaboration, making it highly concise and effective for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (fetching error details for flow runs), no annotations, and no output schema, the description does a good job of covering purpose, output format, and usage context. However, it lacks details on behavioral aspects like authentication or error handling, and without an output schema, it doesn't fully describe return values (e.g., structure of the error list). It's mostly complete but has minor gaps in behavioral transparency.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the schema already documents all parameters thoroughly. The description adds no additional parameter semantics beyond what's in the schema (e.g., it doesn't explain parameter relationships or provide examples). According to the rules, with high schema coverage, the baseline is 3 even without param info in the description, which is met here.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('fetch error details') and resource ('for a specific flow run from the live Power Automate API'), distinguishing it from siblings like 'get_live_flow_runs' (which lists runs) or 'get_live_flow_run_action_outputs' (which fetches outputs rather than errors). It explicitly mentions the output content ('lists every failed action with its error code and message') and purpose ('to help diagnose what went wrong'), making the tool's function unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage ('to help diagnose what went wrong') and implies it should be used after identifying a failed run, but it does not explicitly state when not to use it or name alternatives. For example, it doesn't contrast with 'get_store_flow_errors' (for store flows) or specify prerequisites like needing a run name from 'get_live_flow_runs'. This is helpful but lacks explicit exclusions or named sibling alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_live_flow_runs
Fetch live run history for a flow directly from the Power Automate API using impersonation — not the cached store. Returns run name, status, startTime, endTime, trigger name/code, and any top-level error.
| Name | Required | Description | Default |
|---|---|---|---|
| top | No | Max runs to return. Paginates automatically. Default 30. | |
| flowName | Yes | Name (ID) of the flow. | |
| environmentName | Yes | Name of the Power Platform environment. | |
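The parameters above map directly onto a standard MCP `tools/call` request. A minimal sketch in Python (the flow ID and environment name are placeholders; the JSON-RPC envelope is the standard MCP request shape, not anything specific to this server):

```python
import json

def build_tools_call(tool_name, arguments, request_id=1):
    """Wrap an MCP tool invocation in a JSON-RPC 2.0 envelope."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Fetch the 10 most recent runs; 'top' would default to 30 if omitted.
request = build_tools_call(
    "get_live_flow_runs",
    {
        "flowName": "00000000-0000-0000-0000-000000000000",  # placeholder flow ID
        "environmentName": "Default-contoso",                # placeholder environment
        "top": 10,
    },
)
body = json.dumps(request)
```

The same envelope works for every tool on this server; only the `name` and `arguments` change.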
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it's a read operation ('Fetch'), uses impersonation for authentication, bypasses caching for real-time data, and specifies the return format (run name, status, etc.). However, it doesn't mention rate limits, error handling, or pagination details beyond the 'top' parameter, leaving some gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently communicates purpose, usage context, and return values without any wasted words. It's front-loaded with the core action and resource, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description does a good job covering the tool's behavior and return format. However, it could be more complete by mentioning potential errors, authentication requirements beyond 'impersonation', or how pagination works with the 'top' parameter. For a read tool with 3 parameters, it's largely adequate but has minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description doesn't add any parameter-specific semantics beyond what the schema provides (e.g., it doesn't clarify format for 'environmentName' or 'flowName'). Baseline 3 is appropriate as the schema does the heavy lifting, though no extra value is added.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Fetch live run history'), resource ('for a flow'), and source ('directly from the Power Automate API using impersonation — not the cached store'), distinguishing it from sibling tools like 'get_store_flow_runs' which presumably uses cached data. It provides a verb+resource+scope combination that is precise and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('directly from the Power Automate API using impersonation — not the cached store'), providing a clear alternative to cached store tools like 'get_store_flow_runs'. This gives the agent explicit guidance on selecting this tool over its sibling for real-time, non-cached data retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_live_flow_trigger_url
Fetch the live trigger URL and method for an HTTP-triggered flow directly from the Power Automate API. Unlike get_flow_trigger_url this calls the PA listCallbackUrl endpoint so the URL is always current. Returns triggerName, triggerType, triggerKind, triggerMethod (e.g. POST) and triggerUrl. For non-HTTP triggers, triggerMethod and triggerUrl are null.
| Name | Required | Description | Default |
|---|---|---|---|
| flowName | Yes | Name (ID) of the flow. | |
| environmentName | Yes | Name of the Power Platform environment. | |
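Since the description notes that `triggerMethod` and `triggerUrl` come back null for non-HTTP triggers, a caller should guard before issuing a request to the returned URL. A sketch (the result shape is taken from the description; the helper name and sample values are hypothetical):

```python
def prepare_trigger_call(result):
    """Return (method, url) for an HTTP-triggered flow, or None when the
    trigger is not HTTP-based (triggerMethod and triggerUrl come back null)."""
    if result.get("triggerUrl") is None:
        return None
    return (result["triggerMethod"], result["triggerUrl"])

http_result = {
    "triggerName": "manual", "triggerType": "Request", "triggerKind": "Http",
    "triggerMethod": "POST", "triggerUrl": "https://example.invalid/trigger",
}
recurrence_result = {
    "triggerName": "daily", "triggerType": "Recurrence", "triggerKind": None,
    "triggerMethod": None, "triggerUrl": None,
}
```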
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the tool fetches data from a specific API endpoint ('listCallbackUrl'), returns specific fields (triggerName, triggerType, etc.), and handles edge cases (null values for non-HTTP triggers). However, it doesn't mention potential error conditions, rate limits, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three sentences: first states the purpose and differentiation, second specifies the return values, third covers edge cases. Every sentence adds essential information with zero waste, making it easy to parse and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only tool with 2 parameters and no output schema, the description provides good context: it explains what the tool does, how it differs from alternatives, what it returns, and limitations. However, without annotations or output schema, it could benefit from more detail about response format or error handling to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters. The description doesn't add any additional meaning about the parameters beyond what's in the schema. According to scoring rules, when schema coverage is high (>80%), the baseline is 3 even with no parameter information in the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Fetch the live trigger URL and method'), resource ('HTTP-triggered flow'), and source ('directly from the Power Automate API'). It explicitly distinguishes this tool from its sibling 'get_flow_trigger_url' by explaining it calls a different endpoint ('listCallbackUrl') for current URLs, providing clear differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives: it contrasts with 'get_flow_trigger_url' by specifying this tool calls a different API endpoint for always-current URLs. It also states when not to use it ('For non-HTTP triggers, triggerMethod and triggerUrl are null'), giving clear context for appropriate usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_store_flow
[Requires Pro+ plan] Get full details for a single Power Automate flow from the Power Clarity cache. Includes trigger URL, owners, state, run statistics, and governance metadata. Data is from the stored snapshot — not live from the Power Automate API.
| Name | Required | Description | Default |
|---|---|---|---|
| flowName | Yes | Name (ID) of the flow. | |
| environmentName | Yes | Name of the Power Platform environment. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and discloses key behavioral traits: it requires a Pro+ plan (auth/access constraint), returns cached/snapshot data (not live, implying potential staleness), and lists specific data included (trigger URL, owners, etc.). It doesn't mention rate limits or error handling, but covers essential usage context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with key information (plan requirement, purpose, data source) in three concise sentences with zero waste. Each sentence earns its place by clarifying usage, scope, and limitations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is reasonably complete for a read-only tool: it covers purpose, prerequisites, data source, and included details. It could mention return format or error cases, but for a 2-parameter tool with clear scope, it's largely adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters ('flowName' and 'environmentName'). The description doesn't add meaning beyond what the schema provides (e.g., no format examples or constraints), meeting the baseline of 3 for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get full details') and resource ('a single Power Automate flow from the Power Clarity cache'), distinguishing it from siblings like 'get_live_flow' (live API) and 'get_store_flow_summary' (summary vs. full details). It specifies the scope includes trigger URL, owners, state, run statistics, and governance metadata.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('[Requires Pro+ plan]', 'Data is from the stored snapshot — not live from the Power Automate API'), providing clear alternatives (e.g., 'get_live_flow' for live data) and exclusions (not for real-time data).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_store_flow_errors
[Requires Pro+ plan] Get cached failed run history for a flow from the Power Clarity store (convenience wrapper around get_store_flow_runs with status=Failed). Returns failedActions and remediation hint per run to help diagnose issues. Data is from the stored snapshot — not live from the Power Automate API.
| Name | Required | Description | Default |
|---|---|---|---|
| flowName | Yes | Name (ID) of the flow. | |
| startTime | No | ISO 8601 start of the time window (default: 7 days ago). | |
| environmentName | Yes | Name of the Power Platform environment. | |
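`startTime` takes an ISO 8601 timestamp and defaults to 7 days ago. A caller that wants a different window can compute the value like this (the helper name and placeholder IDs are illustrative):

```python
from datetime import datetime, timedelta, timezone

def iso_window_start(days_back=7):
    """ISO 8601 UTC timestamp marking the start of the lookback window."""
    start = datetime.now(timezone.utc) - timedelta(days=days_back)
    # Second precision with a 'Z' suffix for UTC.
    return start.replace(microsecond=0).isoformat().replace("+00:00", "Z")

arguments = {
    "flowName": "00000000-0000-0000-0000-000000000000",  # placeholder flow ID
    "environmentName": "Default-contoso",                # placeholder environment
    "startTime": iso_window_start(days_back=3),          # only the last 3 days
}
```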
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and adds valuable behavioral context beyond the input schema: it discloses the Pro+ plan requirement (auth/access needs), clarifies it's a convenience wrapper (implementation detail), describes the return format ('failedActions and remediation hint per run'), and notes the data source is cached ('stored snapshot — not live'). It doesn't mention rate limits or destructive effects, but covers key operational aspects adequately.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with key information (plan requirement, purpose, sibling relation) in three efficient sentences with zero waste. Each sentence earns its place: the first covers prerequisites and core function, the second specifies returns and purpose, and the third clarifies data source limitations. It's appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, no output schema, no annotations), the description is largely complete: it covers purpose, usage, behavioral traits, and data source. However, without an output schema, it could more explicitly detail the return structure (e.g., format of 'failedActions'), though it does mention key components. The Pro+ plan requirement and cached nature are well-addressed, making it sufficient for agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters (flowName, startTime, environmentName) with descriptions. The description adds no additional parameter semantics beyond what the schema provides, such as format details or usage examples. The baseline score of 3 is appropriate since the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get cached failed run history') and resource ('for a flow from the Power Clarity store'), distinguishing it from siblings like 'get_store_flow_runs' by specifying it's a convenience wrapper with status=Failed. It explicitly mentions what it returns ('failedActions and remediation hint per run') and the data source ('stored snapshot — not live from the Power Automate API').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: it states when to use this tool ('[Requires Pro+ plan]' as a prerequisite, 'to help diagnose issues' for context) and distinguishes it from alternatives by naming the sibling tool ('convenience wrapper around get_store_flow_runs with status=Failed'). It also clarifies when not to use it by noting the data is 'not live from the Power Automate API,' implying alternatives for real-time data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_store_flow_runs
[Requires Pro+ plan] Get cached run history for a flow from the Power Clarity store. Defaults to the last 7 days. Returns startTime, endTime, status, duration (seconds), failedActions, and remediation hint per run. Data is from the stored snapshot — not live from the Power Automate API.
| Name | Required | Description | Default |
|---|---|---|---|
| top | No | Maximum number of run rows to return (default 5000). | |
| status | No | Filter by status. Only the first value is used in the OData filter. | |
| endTime | No | ISO 8601 end of the time window. | |
| flowName | Yes | Name (ID) of the flow. | |
| startTime | No | ISO 8601 start of the time window (default: 7 days ago). | |
| environmentName | Yes | Name of the Power Platform environment. | |
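The `status` note ("only the first value is used in the OData filter") suggests the parameter may accept multiple values but effectively honors one. A defensive wrapper can collapse it client-side; the exact schema type for `status` is not shown here, so accepting either a string or a list is an assumption:

```python
def normalize_run_args(flow_name, environment_name, status=None, top=5000):
    """Build arguments for get_store_flow_runs, collapsing 'status' to a
    single value, since only the first entry is honored in the OData filter."""
    args = {"flowName": flow_name, "environmentName": environment_name, "top": top}
    if status:
        args["status"] = status[0] if isinstance(status, (list, tuple)) else status
    return args

args = normalize_run_args("flow-id", "Default-contoso", status=["Failed", "Succeeded"])
# args["status"] is "Failed"; the second value would have been ignored anyway.
```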
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the Pro+ plan requirement, default time window, return data fields (startTime, endTime, etc.), and that data is cached/snapshot-based. It doesn't mention rate limits, pagination, or error handling, but covers essential operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with critical information (plan requirement, purpose, defaults, return fields, data source) in three efficient sentences. Every sentence adds value without redundancy, making it easy for an AI agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description does well to cover purpose, usage, behavioral traits, and return data structure. It could improve by explicitly mentioning the output format (e.g., JSON array) or error cases, but it provides sufficient context for a read-only tool with clear sibling differentiation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal parameter semantics beyond the schema—it implies time filtering with 'Defaults to the last 7 days' but doesn't detail parameter interactions or constraints. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get cached run history for a flow from the Power Clarity store.' It specifies the resource (flow run history) and distinguishes it from siblings like 'get_live_flow_runs' by emphasizing the data source is 'from the stored snapshot — not live from the Power Automate API.'
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: it states '[Requires Pro+ plan]' as a prerequisite, mentions the default time window (last 7 days), and clarifies when to use this tool versus alternatives by noting the data source is cached/store-based, not live. This helps differentiate it from sibling tools like 'get_live_flow_runs'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_store_flow_summary
[Requires Pro+ plan] Get aggregated run statistics for a flow from the Power Clarity cache: total runs, success count, failure count, success rate, fail rate, and average/max duration over a time window. Data is from the stored snapshot — not live from the Power Automate API.
| Name | Required | Description | Default |
|---|---|---|---|
| endTime | No | ISO 8601 end of the time window. | |
| flowName | Yes | Name (ID) of the flow. | |
| startTime | No | ISO 8601 start of the time window (default: 7 days ago). | |
| environmentName | Yes | Name of the Power Platform environment. | |
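Since the summary returns both raw counts and derived rates, a client can sanity-check them against each other. The field names below (`totalRuns`, `successCount`, and so on) are guesses from the description, not a documented output schema:

```python
def cross_check_summary(summary):
    """Sanity-check a summary payload: counts should not exceed the total,
    and the reported success rate should agree with the counts."""
    total = summary["totalRuns"]
    ok = summary["successCount"]
    failed = summary["failureCount"]
    # Other run states (Cancelled, Running) may account for the remainder.
    assert ok + failed <= total
    if total:
        assert abs(summary["successRate"] - ok / total) < 0.01
    return True

sample = {"totalRuns": 200, "successCount": 190, "failureCount": 10,
          "successRate": 0.95, "failRate": 0.05}
```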
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it requires a Pro+ plan (access control), uses cached data (performance/currency implications), and provides aggregated statistics (not detailed runs). It doesn't mention rate limits, error handling, or data freshness details, keeping it from a perfect score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states purpose, requirements, and key metrics; the second clarifies data source limitations. Every element earns its place with zero redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description does well by covering purpose, usage context, and behavioral traits. It lacks details on output format (e.g., structure of returned statistics) and potential error cases, which would be helpful for a tool with aggregated data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are fully documented in the schema. The description adds no additional parameter semantics beyond implying time window usage, matching the baseline score of 3 when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get aggregated run statistics for a flow') and resource ('from the Power Clarity cache'), listing the exact metrics provided (total runs, success count, etc.). It distinguishes from siblings by specifying it uses stored snapshot data rather than live API data, differentiating from tools like 'get_live_flow_runs'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('[Requires Pro+ plan]' for access, 'over a time window' for temporal scope) and when not to use it ('Data is from the stored snapshot — not live from the Power Automate API'), clearly differentiating it from live data alternatives among sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_store_flow_trigger_url
[Requires Pro+ plan] Get the trigger URL and trigger type for an HTTP-triggered flow from the Power Clarity cache. Read directly from the stored flow record — no live Power Automate API call is made. Use get_live_flow_trigger_url for a guaranteed-fresh URL.
| Name | Required | Description | Default |
|---|---|---|---|
| flowName | Yes | Name (ID) of the flow. | |
| environmentName | Yes | Name of the Power Platform environment. | |
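The description's pointer to `get_live_flow_trigger_url` suggests a cache-first pattern: read the cheap store record, and fall back to the live call only when the cached URL is missing. A sketch with the two tool calls abstracted as callables (nothing here is prescribed by the server itself):

```python
def trigger_url_cache_first(get_store, get_live):
    """Prefer the cached store record; fall back to the live API when the
    cached URL is missing. Both arguments are callables standing in for the
    get_store_flow_trigger_url and get_live_flow_trigger_url tool calls."""
    cached = get_store()
    if cached and cached.get("triggerUrl"):
        return cached["triggerUrl"], "store"
    return get_live()["triggerUrl"], "live"

url, source = trigger_url_cache_first(
    lambda: {"triggerUrl": None},                        # cache has no URL
    lambda: {"triggerUrl": "https://example.invalid/t"}, # live call wins
)
```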
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure and does so effectively: it reveals the Pro+ plan requirement, clarifies that it reads from cache without live API calls, and specifies it's for HTTP-triggered flows. It doesn't mention rate limits, error handling, or response format, but provides substantial operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three sentences with zero waste: the first states requirements and purpose, the second clarifies the operational behavior, and the third provides the alternative. Every sentence earns its place by adding distinct value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only tool with 100% schema coverage but no output schema, the description provides strong context about data source (cache vs live), prerequisites (Pro+ plan), and sibling differentiation. It doesn't describe the return format or potential cache staleness implications, but covers the essential operational context well.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters adequately. The description doesn't add any parameter-specific information beyond what's in the schema, but doesn't need to given the comprehensive schema coverage. Baseline 3 is appropriate when the schema handles parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Get the trigger URL and trigger type') and resource ('for an HTTP-triggered flow from the Power Clarity cache'), and explicitly distinguishes it from its sibling tool 'get_live_flow_trigger_url' by noting it reads from cache rather than making live API calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives: it specifies '[Requires Pro+ plan]' as a prerequisite, indicates it's for cached data, and explicitly names 'get_live_flow_trigger_url' as the alternative for fresh URLs, creating clear decision criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_store_maker
[Requires Pro+ plan] Get details for a single maker from the Power Clarity cache by their key (usually the AAD object ID). Includes flow/app counts and whether the account has been deleted.
| Name | Required | Description | Default |
|---|---|---|---|
| makerKey | Yes | Maker RowKey (AAD object ID of the user). | |
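Because `makerKey` is "usually the AAD object ID", which is a GUID, a quick format check before calling the tool can catch obviously malformed keys. This is a client-side convenience, not something the tool itself documents:

```python
import re

# AAD object IDs are GUIDs, so a cheap format check before calling
# get_store_maker avoids a wasted round trip on a malformed key.
GUID_RE = re.compile(
    r"^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-"
    r"[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$"
)

def looks_like_maker_key(maker_key):
    return bool(GUID_RE.match(maker_key))
```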
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the Pro+ plan requirement and that it retrieves from a cache (which implies potentially stale data), but lacks details on error handling, rate limits, or authentication needs beyond the plan requirement. With no annotations present, there is also nothing for the description to contradict.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first covers purpose and prerequisites, the second specifies included details. Every sentence adds value with no wasted words, and it's front-loaded with the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter read tool with no output schema, the description is reasonably complete—it covers the purpose, prerequisites, and return details. However, it could better address behavioral aspects like caching implications or error scenarios to fully compensate for the lack of annotations and output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'makerKey' fully documented in the schema. The description adds context that the key is 'usually the AAD object ID' and mentions what details are included (flow/app counts, deletion status), but doesn't provide additional syntax or format details beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get details'), resource ('a single maker from the Power Clarity cache'), and key identifier ('by their key'). It distinguishes from sibling tools like 'list_store_makers' by specifying retrieval of a single maker rather than listing multiple makers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit context for when to use this tool ('[Requires Pro+ plan]' and 'by their key'), but does not mention when not to use it or name specific alternatives. It implies usage for retrieving details of a specific maker rather than listing all makers.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_live_connections
List Power Platform connections in an environment directly from the Power Automate API — not the cached store. Returns id, displayName, connectorName, environment, createdBy (full object), authenticatedUser, statuses, overallStatus, createdTime, expirationTime, and connectionParameters for each connection.
| Name | Required | Description | Default |
|---|---|---|---|
| top | No | Max connections to return. Paginates automatically. Omit for all. | |
| environmentName | No | Name of the Power Platform environment. Omit to list connections across all environments. | |
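The description's field list (overallStatus, expirationTime) lends itself to client-side triage. Below is a minimal sketch of filtering the returned connections for ones that are broken or expiring soon; field shapes are assumed from the description, since the tool publishes no output schema.

```python
from datetime import datetime, timedelta, timezone

# Client-side sketch: flag connections that are broken or expiring soon,
# based on the overallStatus and expirationTime fields the description
# says are returned. Field shapes are assumptions, not a published schema.

def connections_needing_attention(connections, within_days=14, now=None):
    now = now or datetime.now(timezone.utc)
    horizon = now + timedelta(days=within_days)
    flagged = []
    for conn in connections:
        # Anything not fully connected needs a look regardless of expiry.
        if conn.get("overallStatus") != "Connected":
            flagged.append(conn)
            continue
        expires = conn.get("expirationTime")  # assumed ISO 8601 string or None
        if expires and datetime.fromisoformat(expires) <= horizon:
            flagged.append(conn)
    return flagged
```

A caller would pass the `connections` array from a `list_live_connections` result straight into this helper.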
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It specifies the data source ('directly from the Power Automate API — not the cached store') and lists return fields, which adds context. However, it does not mention potential side effects, authentication needs, rate limits, or error handling, leaving gaps for a tool that likely involves API calls.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose, source, and return fields without unnecessary details. It is front-loaded with the core action and avoids redundancy, making it highly concise and effective.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description does a decent job by specifying the data source and return fields. However, it does not cover behavioral aspects like pagination details (implied by 'top' but not explained), error scenarios, or authentication requirements, which are important for a tool interacting with an API. This leaves some contextual gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with clear documentation for 'top' and 'environmentName' parameters. The description does not add any additional parameter semantics beyond what the schema provides, such as default values or usage examples. Since schema coverage is high, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List Power Platform connections'), specifies the source ('directly from the Power Automate API — not the cached store'), and distinguishes it from sibling tools like 'list_store_connections' by emphasizing the 'live' vs 'store' distinction. It provides a comprehensive list of returned fields, making the purpose explicit and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly suggests usage by contrasting with 'cached store' and listing specific return fields, which helps differentiate it from 'list_store_connections'. However, it lacks explicit guidance on when to use this tool versus alternatives like 'list_store_connections' or 'list_live_environments', and does not mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_live_environments (A)
List all Power Platform environments directly from the Power Automate API — not the cached store. Returns id, displayName, sku, location, and state for each environment visible to the impersonated service account.
| Name | Required | Description | Default |
|---|---|---|---|
| top | No | Max environments to return. Paginates automatically. Omit for all. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively communicates that this is a read operation (list), specifies the data source (Power Automate API vs cached store), describes the return format (id, displayName, sku, location, state), and mentions visibility constraints ('visible to the impersonated service account'). It doesn't mention rate limits, pagination details beyond the 'top' parameter, or error conditions, but provides substantial behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently communicates purpose, data source, return fields, and visibility constraints. Every element serves a clear purpose with zero wasted words, making it easy to parse and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only list tool with no output schema, the description provides excellent context: it specifies the data source, return fields, and visibility scope. It could potentially mention pagination behavior more explicitly or error conditions, but given the tool's relative simplicity and the comprehensive parameter documentation in the schema, it's quite complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents the 'top' parameter. The description doesn't add any parameter-specific information beyond what's in the schema. The baseline score of 3 is appropriate when the schema does all the parameter documentation work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and resource 'Power Platform environments', specifies the data source 'directly from the Power Automate API — not the cached store', and distinguishes from sibling tools like 'list_store_environments' by emphasizing the 'live' vs 'store' distinction. It provides specific details about what fields are returned (id, displayName, sku, location, state) and visibility scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('List all Power Platform environments directly from the Power Automate API — not the cached store'), which clearly distinguishes it from 'list_store_environments' that presumably uses cached data. However, it doesn't provide explicit when-not-to-use guidance or mention alternatives beyond the implied store vs live distinction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_live_flows (A)
List Power Automate flows in an environment directly from the Power Automate API — not the cached store. Returns id, displayName, state, triggerType, and lastModifiedTime for each flow. mode=owner (default): uses the user-scoped endpoint — returns only flows owned by the impersonated account, includes full definition/triggerType. mode=admin: uses the admin-scoped endpoint — returns all flows in the environment, requires an admin account. For large environments pagination is time-bounded — check nextLink in the response and pass it as continuationUrl to retrieve the next batch.
| Name | Required | Description | Default |
|---|---|---|---|
| top | No | Max flows to return. Paginates automatically. Omit for all. | |
| mode | No | owner (default): user-scoped endpoint, owned flows only with full definitions. admin: admin-scoped endpoint, all flows. | |
| timeoutSeconds | No | Stop collecting pages after this many seconds and return a nextLink for the next batch. Default 25. Max 55. | |
| continuationUrl | No | nextLink value returned by a previous call. Pass to resume pagination from where the last call stopped. Must match the same mode as the original call. | |
| environmentName | Yes | Name of the Power Platform environment. | |
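The time-bounded pagination the description mandates (check `nextLink`, resend it as `continuationUrl`) can be sketched as a simple loop. `call_tool` is a hypothetical MCP-client helper, and the response field names follow the description; both are assumptions.

```python
# Sketch of the time-bounded pagination pattern list_live_flows describes:
# collect a batch, then resume via continuationUrl until nextLink is absent.
# `call_tool(name, args)` is a stand-in for whatever MCP client is in use.

def list_all_flows(call_tool, environment_name, mode="owner", timeout_seconds=25):
    flows = []
    continuation_url = None
    while True:
        args = {
            "environmentName": environment_name,
            "mode": mode,
            "timeoutSeconds": timeout_seconds,
        }
        if continuation_url:
            # Per the parameter docs, this must match the original call's mode.
            args["continuationUrl"] = continuation_url
        result = call_tool("list_live_flows", args)
        flows.extend(result.get("flows", []))
        continuation_url = result.get("nextLink")
        if not continuation_url:
            return flows
```

Keeping `mode` fixed across the loop matters: the server rejects a `continuationUrl` minted under a different mode.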
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the tool accesses live data (not cached), includes mode-specific scopes and requirements, and details pagination mechanics (time-bounded, nextLink usage). However, it lacks information on error handling or rate limits, which are common concerns for API tools.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by essential details in a logical flow. Each sentence adds value: the first defines the tool, the second explains modes, and the third covers pagination. There is no redundant or verbose language, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (5 parameters, no annotations, no output schema), the description is largely complete. It covers purpose, usage, key behaviors, and pagination. However, it lacks details on the output format beyond listed fields and does not mention potential errors or edge cases, leaving some gaps for an AI agent to infer.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the baseline score is 3. The description adds minimal parameter semantics beyond the schema: it clarifies default values for 'mode' and 'timeoutSeconds', and explains the interaction between 'continuationUrl' and 'mode'. This provides some additional context but does not significantly enhance the schema's documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('Power Automate flows in an environment directly from the Power Automate API'), distinguishing it from sibling tools like list_store_flows. It specifies the data source ('not the cached store') and the returned fields, making the purpose specific and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool vs. alternatives: it contrasts with 'cached store' tools (implied by 'directly from the Power Automate API') and distinguishes between 'owner' and 'admin' modes, including prerequisites ('requires an admin account'). It also mentions pagination for large environments, offering practical usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_store_connections (B)
[Requires Pro+ plan] List all Power Platform connections from the Power Clarity cache.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions a plan requirement, which is useful behavioral context. However, it doesn't disclose other important traits like whether this is a read-only operation, potential rate limits, cache behavior implications, or what the output format looks like (especially with no output schema).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the plan requirement and clearly states the action. Every word earns its place with no redundancy or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete. While it mentions the plan requirement, it doesn't explain what 'Power Clarity cache' means, how results are returned (format, pagination), or differentiate this from similar tools. For a list operation with zero structured metadata, more context would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, and schema description coverage is 100% (though empty). The description doesn't need to explain parameters, so it appropriately focuses on other aspects. A baseline of 4 is appropriate for zero-parameter tools when the description provides other useful information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('all Power Platform connections from the Power Clarity cache'), making the purpose unambiguous. However, it doesn't explicitly differentiate from its sibling 'list_live_connections' beyond the 'store' vs 'live' naming, which is implied but not stated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a prerequisite ('[Requires Pro+ plan]'), which provides some usage context. However, it doesn't specify when to use this tool versus alternatives like 'list_live_connections' or other list_* siblings, leaving the agent to infer based on naming conventions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_store_environments (A)
[Requires Pro+ plan] List all Power Platform environments from the Power Clarity cache.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds value by specifying the data source ('Power Clarity cache') and access requirement ('Pro+ plan'), but doesn't describe output format, pagination, or error handling. For a read-only list tool with zero annotation coverage, this is minimally adequate but lacks detail on behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no wasted words. It front-loads the access requirement and clearly states the purpose, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, no output schema), the description covers the essential purpose and access requirement. However, without annotations or output schema, it doesn't provide details on what the list contains (e.g., environment attributes) or how results are structured, leaving gaps for a list operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, and schema description coverage is 100% (empty schema). The description doesn't need to explain parameters, so it appropriately focuses on other aspects. No parameter information is required, making this sufficient.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List all Power Platform environments') and specifies the data source ('from the Power Clarity cache'), which distinguishes it from generic listing tools. However, it doesn't explicitly differentiate from sibling tools like 'list_live_environments' beyond the 'store' vs 'live' naming convention.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a prerequisite ('[Requires Pro+ plan]'), which provides some context for when this tool is accessible. However, it doesn't explain when to use this tool versus alternatives like 'list_live_environments' or other environment-related tools, leaving usage decisions to inference from naming patterns.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_store_flows (A)
[Requires Pro+ plan] List Power Automate flows from the Power Clarity cache. Optionally filter by governance flags (monitor, notification rules). Returns key fields including trigger URL, state, and run failure rate. Data is from the stored snapshot — not live from the Power Automate API.
| Name | Required | Description | Default |
|---|---|---|---|
| monitor | No | If set, only return flows where monitor equals this value. | |
| rule_notify_onfail | No | If set, filter flows by whether on-fail notifications are enabled. | |
| rule_notify_onmissingdays | No | If set, filter flows by whether missing-days notifications are enabled. | |
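A typical governance query combines these flags, for example "flows nobody monitors and that never notify on failure", then ranks by the run failure rate the description says is returned. The `call_tool` helper and the `runFailureRate` field name are assumptions for illustration.

```python
# Hypothetical usage sketch: find cached flows that are neither monitored
# nor set to notify on failure, worst failure rate first.
# `call_tool` stands in for an MCP client; `runFailureRate` is an assumed
# field name based on the description's "run failure rate".

def unwatched_flows(call_tool):
    result = call_tool("list_store_flows", {
        "monitor": False,
        "rule_notify_onfail": False,
    })
    flows = result.get("flows", [])
    return sorted(flows, key=lambda f: f.get("runFailureRate", 0), reverse=True)
```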
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively communicates key behavioral traits: the Pro+ plan requirement, that data comes from a cache/snapshot (not live), the governance filtering options, and what fields are returned. It doesn't mention rate limits, pagination, or authentication details, but provides substantial operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three sentences with zero waste: first states requirements and core function, second explains filtering and returns, third clarifies data source. Every sentence earns its place by providing essential information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only listing tool with 3 optional parameters and 100% schema coverage but no output schema, the description provides good context about data source, filtering capabilities, and returned fields. It could benefit from mentioning response format or pagination given no output schema exists, but covers most essential aspects well.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents all three parameters. The description mentions 'Optionally filter by governance flags (monitor, notification rules)' which aligns with the schema but doesn't add meaningful semantic context beyond what the parameter descriptions already provide. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List Power Automate flows from the Power Clarity cache'), distinguishes it from siblings by specifying it's from the stored snapshot (not live API), and identifies the resource (flows). It explicitly differentiates from live API tools like 'list_live_flows'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool ('from the stored snapshot — not live from the Power Automate API') and mentions optional filtering capabilities. However, it doesn't explicitly state when NOT to use it or name specific alternative tools for different use cases beyond the live vs. cache distinction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_store_makers (B)
[Requires Pro+ plan] List all makers (citizen developers / AAD users) from the Power Clarity cache.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds context about the plan requirement and clarifies the source ('from the Power Clarity cache'), which is useful. However, it doesn't describe return format, pagination, or other behavioral traits like rate limits or authentication needs beyond the plan hint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. It front-loads the plan requirement and clearly states the action and resource, making it appropriately sized and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters, no annotations, and no output schema, the description is minimal but covers the purpose and a key constraint (plan requirement). However, for a list operation, it lacks details on output format or behavior, making it adequate but with clear gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the schema fully documents the inputs. The description doesn't need to add parameter information, and it appropriately doesn't mention any. A baseline of 4 is applied because the description compensates adequately for the lack of parameters by focusing on other aspects.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and the resource 'all makers (citizen developers / AAD users) from the Power Clarity cache', making the purpose specific and understandable. However, it doesn't explicitly differentiate from sibling tools like 'get_store_maker' or other list_* tools, which would be required for a score of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a plan requirement '[Requires Pro+ plan]', which provides some context for when to use it, but offers no guidance on when to choose this tool over alternatives like 'get_store_maker' or other list_* tools. It lacks explicit when/when-not instructions or named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_store_power_apps (B)
[Requires Pro+ plan] List all Power Apps canvas apps from the Power Clarity cache.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions a plan requirement but lacks details on rate limits, pagination, caching behavior, or what 'from the Power Clarity cache' entails operationally. This leaves significant gaps for an agent to understand how the tool behaves.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads key information (plan requirement and action). It avoids unnecessary words, though it could be slightly more structured by separating the prerequisite from the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose and a prerequisite but lacks details on behavioral traits like response format or caching implications, which are important for a list operation without structured output.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the schema fully documents the inputs. The description doesn't need to add parameter information, and it appropriately avoids redundancy. A baseline of 4 is applied since no parameters exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List') and resource ('all Power Apps canvas apps from the Power Clarity cache'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'list_store_flows' or 'list_store_makers', which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a prerequisite ('[Requires Pro+ plan]') but provides no guidance on when to use this tool versus alternatives like 'list_live_flows' or 'list_store_flows'. There's no mention of context, exclusions, or comparisons to sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resubmit_live_flow_run (A)
Resubmit a failed or cancelled Power Automate flow run via the live PA API, re-using the original trigger payload. Discovers the trigger name from the flow definition automatically — no trigger name parameter needed.
| Name | Required | Description | Default |
|---|---|---|---|
| runName | Yes | Run identifier (name field from get_live_flow_runs). | |
| flowName | Yes | Name (ID) of the flow. | |
| environmentName | Yes | Name of the Power Platform environment. | |
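The parameter table notes that `runName` comes from `get_live_flow_runs`, which suggests the obvious retry pattern: fetch a flow's runs, then resubmit the failed or cancelled ones. The tool names below come from this server; the `call_tool` helper and the response shapes (`runs`, `status`) are assumptions for illustration.

```python
# Sketch of the retry pattern implied by the parameter docs: fetch runs via
# get_live_flow_runs, then resubmit each failed or cancelled one.
# `call_tool` and the response field names are assumptions, not server spec.

def resubmit_failed_runs(call_tool, environment_name, flow_name):
    runs = call_tool("get_live_flow_runs", {
        "environmentName": environment_name,
        "flowName": flow_name,
    }).get("runs", [])
    resubmitted = []
    for run in runs:
        if run.get("status") in ("Failed", "Cancelled"):
            call_tool("resubmit_live_flow_run", {
                "environmentName": environment_name,
                "flowName": flow_name,
                "runName": run["name"],  # the 'name' field, per the parameter docs
            })
            resubmitted.append(run["name"])
    return resubmitted
```

Because resubmission re-uses the original trigger payload and mutates state, a real agent should confirm which runs it is about to replay before looping like this.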
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It usefully describes that it 're-uses the original trigger payload' and 'discovers the trigger name automatically,' which are important behavioral traits. However, it doesn't mention potential side effects, error conditions, authentication requirements, or rate limits that would be important for a mutation operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise at two sentences with zero wasted words. The first sentence establishes the core purpose and method, while the second sentence provides crucial additional context about automatic trigger discovery. Every element earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description provides adequate but incomplete context. It clearly explains what the tool does and its automatic trigger discovery feature, but doesn't describe the response format, error conditions, or potential side effects. Given the complexity of resubmitting flow runs, more behavioral context would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already documents all three required parameters. The description adds value by explaining that 'no trigger name parameter needed' due to automatic discovery, which provides important context about what's NOT required. However, it doesn't add significant semantic meaning beyond what the schema already provides for the existing parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('resubmit'), target resource ('failed or cancelled Power Automate flow run'), and method ('via the live PA API, re-using the original trigger payload'). It distinguishes itself from siblings like 'trigger_live_flow' by specifying it's for re-running existing runs rather than initiating new ones.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('resubmit a failed or cancelled flow run') and mentions automatic trigger discovery as a key feature. However, it doesn't provide explicit guidance on when NOT to use it or name specific alternative tools for different scenarios, though the context implies it's for re-running rather than initial triggering.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
set_live_flow_state
Start or stop a Power Automate flow via the live Power Automate API using an impersonated service account. Does not require a Power Clarity workspace — works for any flow the impersonated account can access. Reads the current flow state first and only issues the start/stop call if a state change is actually needed. Returns the flow name, environment, requested state, and the actual state reported by the PA API after the operation.
| Name | Required | Description | Default |
|---|---|---|---|
| state | Yes | Desired state for the flow. | |
| flowName | Yes | Name (ID) of the flow. | |
| environmentName | Yes | Name of the Power Platform environment. | |
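As a sketch of an invocation over a standard MCP JSON-RPC transport, a call might look like the following (identifiers are hypothetical, and "Stopped" is assumed as one of the schema's enum values for `state`; consult the input schema for the exact accepted states):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "set_live_flow_state",
    "arguments": {
      "state": "Stopped",
      "flowName": "a1b2c3d4-0000-0000-0000-000000000000",
      "environmentName": "Default-11112222-3333-4444-5555-666677778888"
    }
  }
}
```

Per the description, the tool reads the current state first, so repeating this call against an already-stopped flow should be a no-op.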
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and adds valuable behavioral context beyond the input schema. It discloses that the tool 'Reads the current flow state first and only issues the start/stop call if a state change is actually needed,' which is a key behavioral trait not inferable from parameters. It also mentions authentication ('impersonated service account') and the return format, though it lacks details on rate limits or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with the first sentence stating the core purpose and key differentiators. Every sentence adds value: the second explains authentication and scope, the third describes the idempotent behavior, and the fourth specifies the return data. There is no wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a state mutation tool with no annotations and no output schema, the description does well by covering purpose, authentication, behavioral logic, and return values. However, it lacks explicit mention of potential side effects, error conditions, or prerequisites (e.g., required permissions for the impersonated account), leaving some gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents all three parameters with descriptions and an enum for 'state.' The description does not add any additional meaning or syntax details for the parameters beyond what the schema provides, such as format examples for 'flowName' or 'environmentName.' Thus, it meets the baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Start or stop a Power Automate flow'), identifies the resource ('via the live Power Automate API'), and distinguishes it from sibling tools by specifying it works 'using an impersonated service account' and 'Does not require a Power Clarity workspace'—unlike tools like 'set_store_flow_state' which likely involve workspace contexts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: 'using an impersonated service account' and 'works for any flow the impersonated account can access,' which implies it's for live API operations without workspace dependencies. However, it does not explicitly state when not to use it or name specific alternatives, such as 'set_store_flow_state' for workspace-based flows.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
set_store_flow_state
[Requires Pro+ plan] Start or stop a Power Automate flow via the live Power Automate API, then persist the updated state back to the Power Clarity store. Uses impersonation via a cached service account that is either a flow owner or an environment admin. Returns the updated stored flow record.
| Name | Required | Description | Default |
|---|---|---|---|
| state | Yes | Desired state for the flow. | |
| flowName | Yes | Name (ID) of the flow. | |
| environmentName | Yes | Name of the Power Platform environment. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it requires a Pro+ plan, uses impersonation with specific permissions, interacts with both live API and store, and returns the updated stored flow record. This covers authentication needs, data persistence, and output behavior, though it lacks details on rate limits or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core action and key requirements, followed by implementation details and return value. Every sentence earns its place by adding critical information (plan requirement, API usage, persistence, impersonation, return type) without redundancy, making it highly efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a mutation tool with no annotations and no output schema, the description does a strong job: it explains the action, prerequisites, authentication, dual-system interaction, and return value. However, it could be more complete by detailing error cases or side effects, slightly limiting its comprehensiveness for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with clear descriptions for all three parameters. The description does not add any additional meaning beyond the schema (e.g., it doesn't explain parameter interactions or constraints), so it meets the baseline of 3 where the schema does the heavy lifting without extra value from the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Start or stop a Power Automate flow'), the mechanism ('via the live Power Automate API'), and the secondary effect ('persists the updated state back to the Power Clarity store'). It distinguishes itself from siblings like 'set_live_flow_state' by explicitly mentioning the persistence to the store, making the purpose unambiguous and specific.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: it requires a 'Pro+ plan' and uses 'impersonation via a cached service account that is either a flow owner or an environment admin.' However, it does not explicitly state when not to use it or name alternatives (e.g., 'set_live_flow_state' for live-only updates), leaving some room for improvement in sibling differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
trigger_live_flow
Trigger an HTTP-triggered Power Automate flow by calling its live callback URL. Fetches the current signed trigger URL via the PA API (listCallbackUrl) then POSTs the provided body to it. If the flow trigger requires Azure Active Directory authentication, the impersonated Bearer token is automatically included — no extra configuration needed. Returns the HTTP status, response body, requiresAadAuth flag, and authType. Only works for flows with a Request (HTTP) trigger type.
| Name | Required | Description | Default |
|---|---|---|---|
| body | No | JSON body to POST to the trigger URL. Omit for flows that expect an empty body. | |
| flowName | Yes | Name (ID) of the flow. | |
| environmentName | Yes | Name of the Power Platform environment. | |
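A hedged sketch of a call over a standard MCP JSON-RPC transport follows. The keys inside `body` are hypothetical: they are defined entirely by the target flow's Request (HTTP) trigger schema, not by this tool. Flow and environment identifiers are placeholders:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "trigger_live_flow",
    "arguments": {
      "body": {
        "orderId": "12345",
        "priority": "high"
      },
      "flowName": "a1b2c3d4-0000-0000-0000-000000000000",
      "environmentName": "Default-11112222-3333-4444-5555-666677778888"
    }
  }
}
```

For flows whose trigger expects an empty body, `body` would simply be omitted, as the parameter description notes.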
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it describes the two-step process (fetch URL then POST), automatic authentication handling for Azure AD, return values (HTTP status, response body, flags), and the specific flow type requirement. It doesn't mention rate limits, error handling, or performance characteristics, but covers the essential operational behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with four sentences that each add value: purpose statement, implementation details, authentication handling, and constraints. It's front-loaded with the core functionality and wastes no words on redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 3 parameters, 100% schema coverage, but no output schema or annotations, the description does well by explaining the return values and behavioral context. It could be more complete by explicitly mentioning error cases or providing examples, but covers the essential operational context given the complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds some context about the 'body' parameter ('JSON body to POST to the trigger URL') and implies the flowName/environmentName identify the target, but doesn't provide significant additional semantic meaning beyond what's already in the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Trigger an HTTP-triggered Power Automate flow by calling its live callback URL') and distinguishes it from siblings by specifying it only works for flows with a Request (HTTP) trigger type. It identifies both the verb ('trigger') and resource ('HTTP-triggered Power Automate flow').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool ('Only works for flows with a Request (HTTP) trigger type') and mentions authentication handling. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the many sibling tools, though the trigger type restriction implies alternatives exist for other flow types.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_live_flow
Update or create a Power Automate flow via the live PA API. If flowName is omitted or blank, a new flow is created (PUT with a generated GUID) using an environment admin account — definition and displayName are required in that case. If flowName is provided, the existing flow is PATCHed: displayName and/or definition are updated. Mirrors displayName changes into the Power Clarity cache (gFlows). To modify a definition: call get_live_flow, mutate properties.definition, pass it here.
| Name | Required | Description | Default |
|---|---|---|---|
| flowName | No | Name (ID) of the flow to update. Omit or leave blank to create a new flow. | |
| definition | No | Full flow definition object (triggers + actions + parameters + outputs). A JSON string is also accepted and will be parsed into an object. Required when creating. For updates, obtain from get_live_flow (properties.definition), modify it, then pass the modified version here. | |
| description | No | Brief description of what is being changed or created. Required. Will be appended with "Updated via Flow Studio MCP #flowstudio-mcp" and stored as the flow description. | |
| displayName | No | Display name for the flow. Required when creating. | |
| environmentName | Yes | Name of the Power Platform environment. | |
| connectionReferences | No | Connection references map — keyed by logical name (e.g. shared_sharepointonline), each value is { connectionName, id }. Goes into properties.connectionReferences on the PA API call. | |
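Because providing or omitting `flowName` switches between update and create modes, a minimal update-mode sketch (rename only, no definition change) over a standard MCP JSON-RPC transport might look like this, with all identifiers and names hypothetical:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "update_live_flow",
    "arguments": {
      "flowName": "a1b2c3d4-0000-0000-0000-000000000000",
      "displayName": "Invoice sync (v2)",
      "description": "Rename flow after splitting out archival steps",
      "environmentName": "Default-11112222-3333-4444-5555-666677778888"
    }
  }
}
```

Omitting `flowName` from the same call would instead create a new flow, in which case `definition` and `displayName` become required, per the description.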
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the dual create/update logic based on flowName presence, the use of PUT vs PATCH HTTP methods, automatic GUID generation for new flows, environment admin account requirement for creation, and cache mirroring to Power Clarity. It also explains the mutation workflow for definitions. The main gap is lack of information about error handling, rate limits, or authentication requirements beyond the admin account mention.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured and front-loaded with the core purpose. Each sentence adds distinct value: first states the overall purpose, second explains the create/update logic, third describes cache mirroring, and fourth provides the definition modification workflow. There's no wasted text or redundancy, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex mutation tool with 6 parameters (including nested objects) and no annotations or output schema, the description provides substantial context. It covers the dual create/update behavior, HTTP method differences, admin requirements, cache implications, and the definition modification workflow. The main gap is the lack of information about return values or error responses, which would be helpful given there's no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds some value by explaining the conditional logic around flowName (create vs update) and the relationship between definition parameter and get_live_flow, but doesn't provide significant additional parameter semantics beyond what's already well-documented in the schema descriptions. The schema already thoroughly describes each parameter's purpose and requirements.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Update or create a Power Automate flow via the live PA API.' It specifies the exact action (update/create), resource (Power Automate flow), and API context (live PA API). This distinguishes it from sibling tools like 'update_store_flow' (which presumably works with store flows) and 'add_live_flow_to_solution' (which adds to a solution rather than updating/creating directly).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives. It states: 'If flowName is omitted or blank, a new flow is created... If flowName is provided, the existing flow is PATCHed.' It also directs users to 'call get_live_flow, mutate properties.definition, pass it here' for definition modifications, clearly indicating the workflow and prerequisite tool usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_store_flow
[Requires Pro+ plan] Update governance/metadata fields on a flow record in the Power Clarity store (description, business impact, owner team, tags, monitor flag, notification settings, etc.). Only fields provided are updated (merge semantics). Writes to the cache — does not call the Power Automate API.
| Name | Required | Description | Default |
|---|---|---|---|
| tags | No | ||
| tier | No | ||
| monitor | No | Enable/disable monitoring for this flow. | |
| critical | No | ||
| flowName | Yes | Name (ID) of the flow. | |
| security | No | ||
| ownerTeam | No | ||
| description | No | ||
| supportEmail | No | ||
| supportGroup | No | ||
| businessValue | No | ||
| businessImpact | No | ||
| environmentName | Yes | Name of the Power Platform environment. | |
| ownerBusinessUnit | No | ||
| rule_notify_email | No | Comma-separated email addresses for notifications. | |
| rule_notify_onfail | No | Send notification when the flow fails. | |
| businessJustification | No | ||
| rule_notify_onmissingdays | No | Send notification when flow has not run for this many days (0 = disabled). | |
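To illustrate the merge semantics, a sketch of a call over a standard MCP JSON-RPC transport follows. Only the fields supplied here would be updated; everything else on the stored record is left untouched. The identifiers and email addresses are hypothetical, and `rule_notify_onfail` is assumed to be a boolean based on its description:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "update_store_flow",
    "arguments": {
      "flowName": "a1b2c3d4-0000-0000-0000-000000000000",
      "environmentName": "Default-11112222-3333-4444-5555-666677778888",
      "monitor": true,
      "rule_notify_onfail": true,
      "rule_notify_email": "ops@example.com,oncall@example.com",
      "rule_notify_onmissingdays": 7
    }
  }
}
```

Note that this writes only to the Power Clarity cache; pairing it with `set_store_flow_state` or `update_live_flow` would be needed for changes that must reach the Power Automate API itself.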
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it requires a Pro+ plan (access control), uses merge semantics (partial updates), writes to cache rather than calling the Power Automate API (performance/implication), and updates governance/metadata fields. It doesn't cover rate limits, error handling, or response format, but provides substantial operational context beyond basic function.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three sentences: first states requirements and core function with examples, second explains merge semantics, third clarifies cache vs. API behavior. Every sentence adds critical information without redundancy, making it front-loaded and zero-waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with 18 parameters, no annotations, and no output schema, the description is moderately complete. It covers purpose, usage constraints, and key behavior, but lacks details on error responses, side effects, or full parameter explanations. Given the complexity and sparse structured data, it should do more to guide safe invocation, though it meets minimum viable standards.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is low at 33%, with only 6 of 18 parameters having descriptions. The description adds value by listing example fields (description, business impact, owner team, tags, monitor flag, notification settings) that map to some parameters, but doesn't fully compensate for the coverage gap—many parameters like 'tier', 'security', or 'ownerBusinessUnit' remain unexplained. Baseline is 3 as it provides some semantic context beyond the sparse schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Update') and resource ('governance/metadata fields on a flow record in the Power Clarity store'), with specific examples like description, business impact, owner team, tags, monitor flag, and notification settings. It distinguishes itself from siblings like 'update_live_flow' by specifying it updates store metadata rather than live flows, and from 'set_store_flow_state' by focusing on governance fields rather than state changes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit context for when to use it ('[Requires Pro+ plan]' and 'Only fields provided are updated (merge semantics)'), and implicitly distinguishes it from alternatives by noting it 'does not call the Power Automate API,' suggesting it's for cache updates rather than direct API modifications. However, it doesn't explicitly name when-not-to-use scenarios or direct alternatives like 'update_live_flow' for live flow updates.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.