generate_html_report

Renders the latest test results into a self-contained HTML report with embedded screenshots, step lists, and a history sparkline; passed sections are collapsed and failed cards expanded. No external dependencies are needed.

Instructions

Renders the results of the most recent run_tests call into a single self-contained HTML file: base64-embedded screenshots, an inline step list, a history sparkline, collapsed Passed sections, and expanded Failed cards. There are no external CSS/JS dependencies, so the file can be emailed, dropped onto a static host, or posted to Slack as-is. Output defaults to PROJECT_ROOT/report.html. Implemented in reporters/html.py, following the same design as sample_report.html.

Input Schema

Name: output
Required: No
Description: Optional output filename, relative to QA_PROJECT_ROOT. Defaults to `report.html`.
Default: report.html
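
For illustration, a call to this tool from an MCP client might look like the sketch below (a minimal example using the official MCP Python SDK over stdio; the launch command and script name are placeholders, not taken from this project):

    import asyncio
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main() -> None:
        # Placeholder launch command -- substitute the real server entry point.
        params = StdioServerParameters(command="python", args=["server.py"])
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # "output" is optional and resolved relative to QA_PROJECT_ROOT.
                result = await session.call_tool(
                    "generate_html_report", {"output": "report.html"}
                )
                print(result.content[0].text)  # path of the generated report

    asyncio.run(main())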

Implementation Reference

  • The tool 'generate_html_report' is registered in the list_tools() function with its name, description, and input schema.
    Tool(
        name="generate_html_report",
        description=(
            "把最近一次 run_tests 的結果渲染成單檔自包含 HTML——base64 內嵌截圖、"
            "嵌入式 step list、history sparkline 走勢、折疊的 Passed 區塊、展開的 Failed cards。"
            "沒外部 CSS/JS 依賴,可以直接寄信、丟靜態 host、貼到 Slack。"
            "預設輸出 PROJECT_ROOT/report.html。實作位於 reporters/html.py,"
            "走 sample_report.html 同款設計。"
        ),
        inputSchema={
            "type": "object",
            "properties": {
                "output": {
                    "type": "string",
                    "default": "report.html",
                    "description": (
                        "選填,輸出檔名(相對於 QA_PROJECT_ROOT)。"
                        "預設 `report.html`。"
                    ),
                },
            },
        },
    ),
  • The handler function that dispatches the 'generate_html_report' tool call to html_reporter.write_report().
    if name == "generate_html_report":
        target = html_reporter.write_report(args.get("output", "report.html"))
        return [TextContent(type="text", text=f"已產生 HTML 報告:{target}")]
  • The write_report() helper function that renders the HTML and writes it to disk (see the direct-call usage sketch after this list).
    def write_report(output: str = "report.html") -> Path:
        """Render and write the HTML report to disk under PROJECT_ROOT."""
        target = PROJECT_ROOT / output
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(render_report(), encoding="utf-8")
        return target
  • The render_report() function that builds the full self-contained HTML report by gathering test data from the runner and applying template replacements.
    def render_report() -> str:
        """Render the latest test report as a self-contained HTML string."""
        runner = get_runner()
        summary = runner.get_report_summary()
        # Prefer the rich details (with steps) when available; fall back to the
        # legacy failure-only path so non-pytest runners still render.
        all_details = runner.get_all_test_details() if hasattr(runner, "get_all_test_details") else []
        if all_details:
            failures = [t for t in all_details if t.get("outcome") == "failed"]
            passes = [t for t in all_details if t.get("outcome") == "passed"]
        else:
            failures = runner.get_failure_details() or []
            passes = []
    
        has_error = isinstance(summary, dict) and "error" in summary
        total = int(summary.get("total", 0) or 0) if not has_error else 0
        passed = int(summary.get("passed", 0) or 0) if not has_error else 0
        failed = int(summary.get("failed", 0) or 0) if not has_error else 0
        skipped = int(summary.get("skipped", 0) or 0) if not has_error else 0
        flaky_in_run = int(summary.get("flaky_in_run", 0) or 0) if not has_error else 0
        duration = summary.get("duration") if not has_error else None
    
        pass_rate = (passed / total * 100) if total else 0
        duration_str = f"{duration:.2f}s" if isinstance(duration, (int, float)) else "—"
        timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    
        if has_error:
            failure_html = f'<div class="empty">{escape(str(summary.get("error", "找不到報告")))}</div>'
        elif failed == 0 and total > 0:
            failure_html = '<div class="empty success">所有測試通過</div>'
        elif failed == 0 and total == 0:
            failure_html = '<div class="empty">尚未執行任何測試</div>'
        else:
            cards = []
            for i, f in enumerate(failures or []):
                if isinstance(f, dict) and "error" in f:
                    continue
                nodeid = escape(str(f.get("nodeid", "unknown")))
                message = escape(str(f.get("message", "")))
                dur = f.get("duration")
                dur_str = f"{dur:.3f}s" if isinstance(dur, (int, float)) else ""
                open_attr = "open" if i < 3 else ""
                shot_html = _embed_screenshot(f.get("screenshot") if isinstance(f, dict) else None)
                links_html = _artifact_links(f if isinstance(f, dict) else {})
                steps_html = _render_steps_html(f.get("steps") or []) if isinstance(f, dict) else ""
                meta_html = _render_test_meta(
                    f.get("title") if isinstance(f, dict) else None,
                    f.get("nodeid", "unknown") if isinstance(f, dict) else "unknown",
                )
                cards.append(
                    f'<details class="failure" {open_attr}>'
                    f'<summary>'
                    f'<span class="fail-badge">FAIL</span>'
                    f'{meta_html}'
                    f'<span class="fail-dur">{dur_str}</span>'
                    f'</summary>'
                    f'<pre>{message}</pre>'
                    f'{steps_html}'
                    f'{shot_html}'
                    f'{links_html}'
                    f'</details>'
                )
            failure_html = "\n".join(cards) if cards else '<div class="empty">無失敗詳情</div>'
    
        history = runner.get_history(limit=10) if hasattr(runner, "get_history") else []
        trend_html = _render_trend(history, failed)
        passed_html = _render_pass_section(passes)
    
        flaky_stat_html = (
            f'<div class="stat flaky"><div class="label">Flaky (retried)</div>'
            f'<div class="value">{flaky_in_run}</div></div>'
            if flaky_in_run > 0 else ""
        )
    
        out = TEMPLATE
        replacements = {
            "{{RUNNER_NAME}}": escape(runner.name),
            "{{TIMESTAMP}}": timestamp,
            "{{PROJECT_ROOT}}": escape(str(PROJECT_ROOT)),
            "{{TOTAL}}": str(total),
            "{{PASSED}}": str(passed),
            "{{FAILED}}": str(failed),
            "{{SKIPPED}}": str(skipped),
            "{{FLAKY_STAT}}": flaky_stat_html,
            "{{DURATION}}": duration_str,
            "{{PASS_RATE}}": f"{pass_rate:.1f}",
            "{{PASS_RATE_INT}}": str(int(pass_rate)),
            "{{TREND_SECTION}}": trend_html,
            "{{FAILURE_SECTION}}": failure_html,
            "{{PASSED_SECTION}}": passed_html,
            "{{YEAR}}": str(datetime.now().year),
        }
        for k, v in replacements.items():
            out = out.replace(k, v)
        return out
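The TEMPLATE constant itself is not reproduced on this page. For illustration only, it is a plain HTML string containing {{PLACEHOLDER}} tokens that render_report() swaps out with str.replace(); a heavily trimmed sketch of such a template (not the real one, which follows the sample_report.html design) might look like:

    # Illustrative sketch only -- the real TEMPLATE lives in reporters/html.py.
    TEMPLATE = """<!DOCTYPE html>
    <html>
      <head><meta charset="utf-8"><title>{{RUNNER_NAME}} report</title></head>
      <body>
        <h1>{{RUNNER_NAME}} &middot; {{TIMESTAMP}}</h1>
        <p>{{PASSED}}/{{TOTAL}} passed ({{PASS_RATE}}%), {{FAILED}} failed,
           {{SKIPPED}} skipped, duration {{DURATION}}</p>
        {{FLAKY_STAT}}
        {{TREND_SECTION}}
        {{FAILURE_SECTION}}
        {{PASSED_SECTION}}
        <footer>Generated under {{PROJECT_ROOT}} &middot; {{YEAR}}</footer>
      </body>
    </html>"""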
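Outside of the MCP handler, write_report() can also be called directly, for example from a CI step. A minimal sketch, assuming the module is importable as reporters.html (the import path is inferred from the implementation location noted above):

    from reporters.html import write_report

    # The path is resolved relative to PROJECT_ROOT; parent folders are created.
    report_path = write_report("artifacts/nightly_report.html")
    print(f"HTML report written to {report_path}")
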
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Despite having no annotations, the description discloses key behaviors: the tool generates a self-contained HTML file with embedded screenshots, has no external dependencies, and uses a default output path. It does not mention potential file-size issues or the write permissions required.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single paragraph but packs information efficiently with bullet-like dashes. Slightly more structure could improve readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple input schema and no output schema, the description covers functionality, output content, and usage context. Missing explicit prerequisites or error conditions, but adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description adds little beyond the schema's own parameter description (the default output path). A baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool renders the latest run_tests results into a self-contained HTML file with specific features (base64 screenshots, step list, sparkline, etc.), distinguishing it from siblings like get_test_report.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly indicates it should be used after run_tests and for sharing via email/Slack, but does not explicitly exclude other use cases or compare with alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
