export_report

Generate a standalone HTML report with all UI audit findings. Runs screenshot, accessibility, performance, and code analysis, outputting a shareable file with zero external dependencies.

Instructions

Generate a standalone HTML report file with all audit findings embedded. Runs the full review pipeline (screenshot, accessibility, performance, code analysis) and outputs a beautiful, shareable HTML file with zero external dependencies.

Use this when the user wants a downloadable/shareable report of their UI review.

This tool is FREE — runs entirely within Claude Code.

Input Schema

  • url (required): URL of the running application (e.g., http://localhost:3000)
  • codeDirectory (required): Absolute path to the frontend source directory (e.g., /Users/me/project/src)
  • outputPath (optional): Output file path for the HTML report; defaults to ./uimax-report.html
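The outputPath default can be exercised in isolation. This is a sketch mirroring the handler's path handling with Node's path.resolve; resolveReportPath is a hypothetical helper name, not part of the project:

```typescript
import { resolve } from "node:path";

// Hypothetical helper mirroring the handler's default: a missing
// outputPath falls back to ./uimax-report.html, resolved against
// the current working directory.
function resolveReportPath(outputPath?: string): string {
  return resolve(outputPath ?? "./uimax-report.html");
}

resolveReportPath("/tmp/report.html"); // an absolute path is returned unchanged
resolveReportPath();                   // resolves ./uimax-report.html against process.cwd()
```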

Implementation Reference

  • src/server.ts:295-355 (registration)
    The MCP tool registration for 'export_report'. Defines the schema (url, codeDirectory, outputPath) and the handler callback that orchestrates the full review and HTML report generation.
      server.tool(
        "export_report",
        `Generate a standalone HTML report file with all audit findings embedded. Runs the full review pipeline (screenshot, accessibility, performance, code analysis) and outputs a beautiful, shareable HTML file with zero external dependencies.
    
    Use this when the user wants a downloadable/shareable report of their UI review.
    
    This tool is FREE — runs entirely within Claude Code.`,
        {
          url: z.string().url().describe("URL of the running application (e.g., http://localhost:3000)"),
          codeDirectory: z.string().describe("Absolute path to the frontend source directory (e.g., /Users/me/project/src)"),
          outputPath: z.string().optional().describe("Output file path for the HTML report (defaults to ./uimax-report.html)"),
        },
        async ({ url, codeDirectory, outputPath }) => {
          try {
            const resolvedPath = resolve(outputPath ?? "./uimax-report.html");
    
            // Run the full audit pipeline
            const reviewData = await runFullReview(url, codeDirectory);
    
            // Generate the self-contained HTML report
            const html = generateHtmlReport(reviewData);
    
            // Write to disk
            await writeFile(resolvedPath, html, "utf-8");
    
            const violationCount = reviewData.accessibility.violations.length;
            const findingCount = reviewData.codeAnalysis.findings.length;
            const totalIssues = violationCount + findingCount;
    
            return {
              content: [
                {
                  type: "text" as const,
                  text: [
                    `# UIMax Report Exported`,
                    ``,
                    `**File:** ${resolvedPath}`,
                    `**URL:** ${url}`,
                    `**Timestamp:** ${reviewData.timestamp}`,
                    ``,
                    `## Summary`,
                    `- Accessibility violations: ${violationCount}`,
                    `- Code findings: ${findingCount}`,
                    `- Total issues: ${totalIssues}`,
                    `- Load time: ${reviewData.performance.loadTime.toFixed(0)}ms`,
                    `- Files analyzed: ${reviewData.codeAnalysis.totalFiles}`,
                    ``,
                    `The HTML report is self-contained with all CSS inline and the screenshot embedded as base64. Open it in any browser to view or share.`,
                  ].join("\n"),
                },
              ],
            };
          } catch (error) {
            const message = error instanceof Error ? error.message : String(error);
            return {
              content: [{ type: "text" as const, text: `Report export failed: ${message}` }],
              isError: true,
            };
          }
        }
      );
  • The generateHtmlReport function in html-report.ts builds the complete standalone HTML report with inline CSS, embedded base64 screenshot, and all audit sections (grades, summary cards, accessibility, performance, SEO, code analysis).
    export function generateHtmlReport(data: FullReviewResult): string {
      const title = `UIMax Report — ${data.url}`;
    
      return `<!DOCTYPE html>
    <html lang="en">
    <head>
      <meta charset="UTF-8">
      <meta name="viewport" content="width=device-width, initial-scale=1.0">
      <title>${escapeHtml(title)}</title>
      <style>${generateStyles()}</style>
    </head>
    <body>
      <div class="container">
        ${buildHeader(data)}
        ${buildGradeCards(data)}
        ${buildSummaryCards(data)}
        ${buildScreenshotSection(data)}
        ${buildAccessibilitySection(data)}
        ${buildPerformanceSection(data.performance)}
        ${buildSeoSection(data)}
        ${buildCodeAnalysisSection(data)}
        ${buildSeveritySummary(data)}
        ${buildFooter()}
      </div>
    </body>
    </html>`;
    }
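The escapeHtml helper is referenced above but not shown. A minimal sketch, assuming the usual five-character entity replacement; this is not necessarily the project's actual implementation:

```typescript
// Hypothetical escapeHtml: replaces the HTML-significant characters
// so an untrusted string (here, the page URL interpolated into the
// <title>) cannot inject markup into the generated report.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

escapeHtml('<b>"x" & \'y\'</b>'); // angle brackets, quotes, and ampersand come back entity-encoded
```

Note that the ampersand must be replaced first, otherwise the & in already-emitted entities would be double-escaped.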
  • The runFullReview function orchestrates the full audit pipeline: screenshot, accessibility, performance, code analysis, Lighthouse (optional), and SEO (optional). Returns a FullReviewResult used by the export_report handler.
    export async function runFullReview(
      url: string,
      codeDirectory: string,
      viewport?: { width: number; height: number }
    ): Promise<FullReviewResult> {
      const width = viewport?.width ?? 1440;
      const height = viewport?.height ?? 900;
    
      // Run screenshot first (needed for visual review)
      const screenshot = await captureScreenshot({
        url,
        width,
        height,
        fullPage: true,
        delay: 1500,
        deviceScaleFactor: 2,
      });
    
      // Run remaining audits concurrently (Lighthouse + SEO are optional/safe)
      const [accessibility, performance, codeAnalysis, lighthouseOutcome, seoResult] =
        await Promise.all([
          runAccessibilityAudit(url),
          measurePerformance(url),
          analyzeCode(codeDirectory),
          runLighthouseSafe(url),
          runSeoSafe(url),
        ]);
    
      // Count code findings by severity
      const codeFindings = countCodeFindingsBySeverity(codeAnalysis.findings);
    
      // Compute letter grades for each section
      const grades = computeSectionGrades({
        lighthouseScores: lighthouseOutcome
          ? {
              performance: lighthouseOutcome.scores.performance,
              accessibility: lighthouseOutcome.scores.accessibility,
              bestPractices: lighthouseOutcome.scores.bestPractices,
              seo: lighthouseOutcome.scores.seo,
            }
          : null,
        accessibilityViolations: accessibility.violations.length,
        accessibilityPasses: accessibility.passes,
        performanceMetrics: {
          fcp: performance.firstContentfulPaint,
          lcp: performance.largestContentfulPaint,
          cls: performance.cumulativeLayoutShift,
          tbt: performance.totalBlockingTime,
        },
        codeFindings,
        totalFiles: codeAnalysis.totalFiles,
      });
    
      // If we have a dedicated SEO score, update the SEO grade
      const seoGrade = seoResult
        ? { ...grades.seo, score: seoResult.score }
        : grades.seo;
    
      const finalGrades = { ...grades, seo: seoGrade };
    
      return {
        url,
        codeDirectory,
        timestamp: new Date().toISOString(),
        screenshot,
        accessibility,
        performance,
        codeAnalysis,
        lighthouse: lighthouseOutcome ?? undefined,
        seo: seoResult ?? undefined,
        grades: finalGrades,
      };
    }
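runLighthouseSafe and runSeoSafe are not shown, but their names and the null checks that follow suggest wrappers that resolve to null on failure, so an optional audit cannot reject the whole Promise.all. A generic sketch of that pattern; the helper name and shape are assumptions, not the project's code:

```typescript
// Hypothetical "safe" wrapper: convert a rejection into a null result
// so optional audits (Lighthouse, SEO) never abort the pipeline, while
// required audits in the same Promise.all still reject on failure.
async function runSafe<T>(run: () => Promise<T>): Promise<T | null> {
  try {
    return await run();
  } catch {
    return null;
  }
}
```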
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool runs a full review pipeline and is 'FREE' within Claude Code, but does not detail side effects (e.g., run time, overwriting an existing report file, or state changes). Adequate, but with gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each serving a clear purpose: the main action, a usage guideline, and a note that the tool is free. No fluff; key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given three parameters (all described), no output schema, and no annotations, the description covers the tool's purpose, output, and when to use it. It omits prerequisites (e.g., the app must be running) and error handling, but is sufficient for a file-generation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with descriptions for all three parameters. The tool description adds little beyond the schema (e.g., the outputPath default). With high schema coverage, a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool generates a standalone HTML report with all audit findings embedded. It specifies the full pipeline (screenshot, accessibility, performance, code analysis) and distinguishes from siblings by focusing on a shareable file output.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use the tool: 'when the user wants a downloadable/shareable report of their UI review.' It does not say when not to use it or name alternatives, though the sibling tools imply alternatives for individual audits.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

