
export_report

Generate a shareable HTML report with embedded audit findings by running a full UI review, including accessibility, performance, and code analysis.

Instructions

Generate a standalone HTML report file with all audit findings embedded. Runs the full review pipeline (screenshot, accessibility, performance, code analysis) and outputs a beautiful, shareable HTML file with zero external dependencies.

Use this when the user wants a downloadable/shareable report of their UI review.

This tool is FREE — runs entirely within Claude Code.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| url | Yes | URL of the running application (e.g., http://localhost:3000) | |
| codeDirectory | Yes | Absolute path to the frontend source directory (e.g., /Users/me/project/src) | |
| outputPath | No | Output file path for the HTML report | ./uimax-report.html |
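
Based on the schema above, a tools/call request for export_report would carry arguments shaped like the following (values are illustrative, not taken from the source):

```json
{
  "name": "export_report",
  "arguments": {
    "url": "http://localhost:3000",
    "codeDirectory": "/Users/me/project/src",
    "outputPath": "./reports/uimax-report.html"
  }
}
```

Omitting "outputPath" falls back to ./uimax-report.html in the server's working directory.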

Implementation Reference

  • The handler function for the export_report tool, which initiates a full review and generates/writes the HTML report.
    // Assumes `resolve` from "node:path" and `writeFile` from "node:fs/promises" (imports not shown in excerpt).
    async ({ url, codeDirectory, outputPath }) => {
      try {
        const resolvedPath = resolve(outputPath ?? "./uimax-report.html");
    
        // Run the full audit pipeline
        const reviewData = await runFullReview(url, codeDirectory);
    
        // Generate the self-contained HTML report
        const html = generateHtmlReport(reviewData);
    
        // Write to disk
        await writeFile(resolvedPath, html, "utf-8");
    
        const violationCount = reviewData.accessibility.violations.length;
        const findingCount = reviewData.codeAnalysis.findings.length;
        const totalIssues = violationCount + findingCount;
    
        return {
          content: [
            {
              type: "text" as const,
              text: [
                `# UIMax Report Exported`,
                ``,
                `**File:** ${resolvedPath}`,
                `**URL:** ${url}`,
                `**Timestamp:** ${reviewData.timestamp}`,
                ``,
                `## Summary`,
                `- Accessibility violations: ${violationCount}`,
                `- Code findings: ${findingCount}`,
                `- Total issues: ${totalIssues}`,
                // (remaining lines reconstructed -- the site excerpt is truncated here)
              ].join("\n"),
            },
          ],
        };
      } catch (error) {
        return {
          content: [
            {
              type: "text" as const,
              text: `Failed to export report: ${error instanceof Error ? error.message : String(error)}`,
            },
          ],
          isError: true,
        };
      }
    }
  • src/server.ts:139-150 (registration)
    Tool registration for export_report with its description and input schema.
      server.tool(
        "export_report",
        `Generate a standalone HTML report file with all audit findings embedded. Runs the full review pipeline (screenshot, accessibility, performance, code analysis) and outputs a beautiful, shareable HTML file with zero external dependencies.
    
    Use this when the user wants a downloadable/shareable report of their UI review.
    
    This tool is FREE — runs entirely within Claude Code.`,
        {
          url: z.string().url().describe("URL of the running application (e.g., http://localhost:3000)"),
          codeDirectory: z.string().describe("Absolute path to the frontend source directory (e.g., /Users/me/project/src)"),
          outputPath: z.string().optional().describe("Output file path for the HTML report (defaults to ./uimax-report.html)"),
        },
        // ... the async handler follows (shown above); the excerpt covers src/server.ts:139-150 only
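
The only defaulting logic in the handler is the output-path fallback. It can be exercised in isolation (a sketch: `resolve` is assumed to come from Node's `path` module, since the excerpt's import list is not shown):

```typescript
import { resolve } from "node:path";

// Mirrors the handler's fallback: when outputPath is omitted,
// the report path defaults to ./uimax-report.html, resolved
// against the current working directory to an absolute path.
function resolveReportPath(outputPath?: string): string {
  return resolve(outputPath ?? "./uimax-report.html");
}

// An explicit absolute path is used as given; an omitted path
// still yields an absolute location the agent can report back.
console.log(resolveReportPath("/tmp/out.html"));
console.log(resolveReportPath());
```

Because the result is always absolute, the `**File:**` line in the tool's response is unambiguous regardless of where the server was launched.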
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden and succeeds well: it reveals the composite pipeline nature (aggregating screenshot, accessibility, performance, code analysis), output characteristics ('zero external dependencies', 'beautiful, shareable'), and execution context ('FREE — runs entirely within Claude Code'). Could improve by mentioning overwrite behavior or execution time expectations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three tight sentences across three paragraphs: core functionality front-loaded, specific usage context in the middle, execution constraints at the end. Zero redundancy or filler content. Every sentence provides distinct value (what, when, constraints).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 100% schema coverage, no annotations, and no output schema, the description adequately covers the tool's purpose, composite behavior, output format, and cost/execution environment. Minor gaps remain regarding error handling, file overwrite behavior, and distinction from review_ui/quick_review siblings, but sufficient for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema fully documents url, codeDirectory, and outputPath. The description implies input requirements through the pipeline explanation but does not add parameter-specific semantics, syntax guidance, or examples beyond what the schema already provides. Baseline 3 is appropriate given complete schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'Generate[s] a standalone HTML report file with all audit findings embedded' and lists the specific pipeline components (screenshot, accessibility, performance, code analysis). It clearly distinguishes this from siblings by emphasizing the 'standalone HTML' output and 'full review pipeline' aggregation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes an explicit 'Use this when...' clause targeting 'downloadable/shareable report' scenarios. While it doesn't explicitly name alternatives or exclusions, the 'full review pipeline' phrasing effectively positions it as the comprehensive option versus individual audit siblings like accessibility_audit or performance_audit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/prembobby39-gif/uimax-mcp'