π“‚€π“’π“‹Ήπ”Έβ„•π•Œπ”Ήπ•€π•Šπ“‹Ήπ“’π“‚€ - Intelligent Guidance for

by Hive-Academy

generate_workflow_report

Generate interactive workflow reports and analytics dashboards with real-time data tracking. Visualize task details, delegation flows, implementation plans, and role performance metrics for optimized workflow management.

Instructions

Generates interactive workflow reports and analytics dashboards with rich visualizations and real-time data tracking.

Input Schema

basePath (optional): Base directory for report generation (defaults to PROJECT_ROOT environment variable or current working directory). **IMPORTANT**: When using NPX package, always provide the project root path to ensure reports are generated in the correct location.

endDate (optional): End date for the report period (ISO 8601 format)

mode (optional): Filter tasks by current mode

outputFormat (optional, default: html): Output format for the report:
• html - Interactive HTML dashboard with charts and Alpine.js interactivity (RECOMMENDED)
• json - Raw JSON data for custom processing
**NOTE:** PDF, PNG, JPEG formats have been removed to eliminate Playwright dependencies and improve performance.

owner (optional): Filter tasks by owner

priority (optional): Filter tasks by priority (Low, Medium, High, Critical)

reportType (required): Type of report to generate. Available report types:
**MAIN DASHBOARD REPORTS:**
• interactive-dashboard - Interactive HTML dashboard with charts, filtering, and analytics (RECOMMENDED)
• dashboard - Alias for interactive-dashboard
• summary - Clean summary view with key metrics and task list
**SPECIALIZED REPORTS:**
• task-detail - Comprehensive individual task report with codebase analysis, implementation plans, and subtasks
• delegation-flow - Workflow transitions and delegation patterns for a specific task
• implementation-plan - Detailed implementation plans with subtask breakdowns and progress tracking
• workflow-analytics - Cross-task analytics and insights with role performance metrics
• role-performance - Individual role performance analysis with efficiency metrics
**USAGE EXAMPLES:**
- Daily standup: "interactive-dashboard" or "summary"
- Sprint retrospective: "workflow-analytics"
- Individual task analysis: "task-detail" with taskId
- Workflow optimization: "delegation-flow" with taskId
- Implementation tracking: "implementation-plan" with taskId
- Role assessment: "role-performance" with owner filter
- Team analytics: "workflow-analytics" with date filters

startDate (optional): Start date for the report period (ISO 8601 format)

taskId (optional): Task ID for individual task reports
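
As a concrete illustration of how these parameters combine, here is a minimal sketch of argument objects an agent might pass for two of the scenarios named in the schema. The project path and task ID are hypothetical placeholders, and the shape assumes the MCP client forwards the arguments to the tool as-is.

```typescript
// Illustrative tool-call arguments assembled only from the schema above.
// Paths and task IDs are hypothetical placeholders, not documented values.

// Daily standup: the recommended interactive HTML dashboard.
const standupArgs = {
  reportType: "interactive-dashboard",
  outputFormat: "html",             // the default, shown explicitly
  basePath: "/home/dev/my-project", // project root; important when running via NPX
};

// Individual task analysis: "task-detail" paired with a taskId, per the schema's usage examples.
const taskDetailArgs = {
  reportType: "task-detail",
  taskId: "TSK-042",                // hypothetical task ID
  outputFormat: "json",             // raw data for custom processing
};
```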
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'generates' (implying a write/creation operation) and 'real-time data tracking' (suggesting dynamic updates), but lacks details on permissions, rate limits, side effects, or output behavior. The input schema notes performance improvements (e.g., removed PDF formats), but the description doesn't expand on this, leaving gaps in transparency for a complex tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose, avoids redundancy, and wastes no words, making it easy to grasp quickly. It could be slightly more structured, for instance by breaking out key features or use cases, but overall it is appropriately concise for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (9 parameters, no output schema, no annotations), the description is somewhat incomplete. It states what the tool does but lacks details on output format, error handling, and integration with sibling tools. The input schema provides extensive parameter information, but without annotations or an output schema the description should do more to cover behavioral aspects, such as what 'generates' entails in practice.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 9 parameters thoroughly (e.g., 'reportType' with detailed examples). The description adds no parameter semantics beyond the schema, for example explanations of how parameters interact. This meets the baseline of 3: the schema does the heavy lifting, but the description doesn't compensate with extra insight.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
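
One way to picture the missing interaction documentation: the schema's own usage examples pair the task-scoped report types with taskId, which could be expressed as a small client-side check. The sketch below is hypothetical; the report type names come from the schema, but the rule itself is an assumption about intended usage, not behavior the tool documents.

```typescript
// Report types the schema's usage examples pair with a taskId.
const TASK_SCOPED = ["task-detail", "delegation-flow", "implementation-plan"];

// Hypothetical pre-flight check; the tool itself does not document this rule.
function warnOnMissingTaskId(args: { reportType: string; taskId?: string }): string[] {
  const warnings: string[] = [];
  if (TASK_SCOPED.includes(args.reportType) && !args.taskId) {
    warnings.push(`reportType "${args.reportType}" is normally paired with taskId`);
  }
  return warnings;
}

// Example: calling delegation-flow without a taskId would trip the warning.
console.log(warnOnMissingTaskId({ reportType: "delegation-flow" }));
```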

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Generates interactive workflow reports and analytics dashboards with rich visualizations and real-time data tracking.' It specifies the verb ('generates') and resource ('workflow reports and analytics dashboards') with additional details about features. However, it doesn't explicitly differentiate from sibling tools like 'get_report_status' or 'workflow_execution_operations', which could provide related functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage through terms like 'interactive' and 'real-time data tracking,' suggesting it's for analytical purposes. The input schema includes extensive examples for different report types (e.g., 'Daily standup: "interactive-dashboard"'), which offer context. However, there's no explicit guidance on when to use this tool versus alternatives like 'get_report_status' or other siblings, leaving some ambiguity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
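
The closest thing to explicit guidance is the set of scenario hints in the schema; translated into argument objects they look roughly like the sketch below, assuming hypothetical dates and an invented owner value.

```typescript
// The schema's scenario hints expressed as argument objects.
// Dates and the owner value are hypothetical placeholders.

// Sprint retrospective / team analytics: cross-task analytics over a date window.
const retrospectiveArgs = {
  reportType: "workflow-analytics",
  startDate: "2024-06-03T00:00:00Z", // ISO 8601, per the schema
  endDate: "2024-06-14T23:59:59Z",
};

// Role assessment: role-performance narrowed with the owner filter.
const roleAssessmentArgs = {
  reportType: "role-performance",
  owner: "backend-developer",        // hypothetical owner value
};
```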
