Glama

review_ui

Analyzes web UI by capturing screenshots, running accessibility and performance audits, and providing specific code fixes for frontend improvements.

Instructions

THE PRIMARY TOOL — Fully automated UI review pipeline. Captures a screenshot, runs accessibility/performance/code audits, then returns ALL data along with an expert frontend review methodology so you can generate a comprehensive review and implement fixes.

Use this when the user asks to "review my UI", "audit my frontend", or "find UI issues". After receiving the results, you MUST:

  1. Study the screenshot carefully for visual/UX issues

  2. Analyze the audit data following the expert methodology provided

  3. Generate a comprehensive review with SPECIFIC fixes (exact CSS values, code changes)

  4. Implement the fixes directly in the codebase

This tool is FREE — it runs entirely within Claude Code using the user's existing plan. No API keys needed.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| url | Yes | URL of the running application (e.g., http://localhost:3000) | — |
| codeDirectory | Yes | Absolute path to the frontend source directory (e.g., /Users/me/project/src) | — |
| width | No | Viewport width in pixels | 1440 |
| height | No | Viewport height in pixels | 900 |
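The optional viewport parameters default to 1440×900, per the zod schema shown in the Implementation Reference. A minimal sketch of the argument shape and default handling, in plain TypeScript rather than the server's actual zod schema:

```typescript
// Mirrors the review_ui input schema; field names match the table above.
type ReviewUiArgs = {
  url: string;            // e.g. "http://localhost:3000"
  codeDirectory: string;  // absolute path, e.g. "/Users/me/project/src"
  width?: number;         // viewport width in pixels
  height?: number;        // viewport height in pixels
};

// Apply the same fallbacks the server's schema declares (1440×900).
function withDefaults(args: ReviewUiArgs) {
  return { ...args, width: args.width ?? 1440, height: args.height ?? 900 };
}
```

So a call that supplies only the two required fields resolves to a 1440×900 viewport.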

Implementation Reference

  • The `review_ui` tool is defined in `src/server.ts` using the MCP `server.tool` method. Its handler calls `runFullReview` to collect data and then returns a structured response containing screenshot, audit reports, and an expert prompt.
```typescript
server.tool(
  "review_ui",
  `THE PRIMARY TOOL — Fully automated UI review pipeline. Captures a screenshot, runs accessibility/performance/code audits, then returns ALL data along with an expert frontend review methodology so you can generate a comprehensive review and implement fixes.

Use this when the user asks to "review my UI", "audit my frontend", or "find UI issues". After receiving the results, you MUST:
1. Study the screenshot carefully for visual/UX issues
2. Analyze the audit data following the expert methodology provided
3. Generate a comprehensive review with SPECIFIC fixes (exact CSS values, code changes)
4. Implement the fixes directly in the codebase

This tool is FREE — it runs entirely within Claude Code using the user's existing plan. No API keys needed.`,
  {
    url: z.string().url().describe("URL of the running application (e.g., http://localhost:3000)"),
    codeDirectory: z.string().describe("Absolute path to the frontend source directory (e.g., /Users/me/project/src)"),
    width: z.number().optional().default(1440).describe("Viewport width in pixels"),
    height: z.number().optional().default(900).describe("Viewport height in pixels"),
  },
  async ({ url, codeDirectory, width, height }) => {
    try {
      // Collect all audit data
      const auditData = await runFullReview(url, codeDirectory, {
        width: width ?? 1440,
        height: height ?? 900,
      });

      const auditReport = formatFullReviewReport(auditData);

      // Return screenshot + data + expert prompt to Claude Code
      // Claude Code (on the user's Pro plan) generates the expert review itself
      return {
        content: [
          {
            type: "text" as const,
            text: [
              `# UIMax Data Collection Complete`,
              ``,
              `**URL:** ${url}`,
              `**Code Directory:** ${codeDirectory}`,
              `**Timestamp:** ${auditData.timestamp}`,
              `**Accessibility violations:** ${auditData.accessibility.violations.length}`,
              `**Accessibility passes:** ${auditData.accessibility.passes}`,
              `**Load time:** ${auditData.performance.loadTime.toFixed(0)}ms`,
              `**DOM nodes:** ${auditData.performance.domNodes}`,
              `**Code files analyzed:** ${auditData.codeAnalysis.totalFiles}`,
              `**Code findings:** ${auditData.codeAnalysis.findings.length}`,
              `**Framework detected:** ${auditData.codeAnalysis.framework}`,
              ...(auditData.lighthouse
                ? [
                    `**Lighthouse Performance:** ${auditData.lighthouse.scores.performance ?? "N/A"}`,
                    `**Lighthouse Accessibility:** ${auditData.lighthouse.scores.accessibility ?? "N/A"}`,
                    `**Lighthouse Best Practices:** ${auditData.lighthouse.scores.bestPractices ?? "N/A"}`,
                    `**Lighthouse SEO:** ${auditData.lighthouse.scores.seo ?? "N/A"}`,
                  ]
                : [`**Lighthouse:** skipped (timed out or unavailable)`]),
              ``,
              `---`,
              ``,
              `## Screenshot of the live UI — study this carefully:`,
            ].join("\n"),
          },
          {
            type: "image" as const,
            data: auditData.screenshot.base64,
            mimeType: auditData.screenshot.mimeType,
          },
          {
            type: "text" as const,
            text: [
              ``,
              `---`,
              ``,
              auditReport,
              ``,
              `---`,
              ``,
              `# Expert Review Instructions`,
              ``,
              `You now have everything you need. Follow the methodology below to generate a comprehensive expert UI review, then implement every fix.`,
              ``,
              UI_REVIEW_PROMPT,
              ``,
              `---`,
              ``,
              `# Implementation Instructions`,
              ``,
              `After generating your review, IMMEDIATELY implement the fixes:`,
              `1. Start with CRITICAL severity findings`,
              `2. Then HIGH, MEDIUM, LOW in order`,
              `3. For each finding, locate the exact file and apply the specific code change`,
              `4. After implementing all fixes, provide a summary of what was changed`,
              ``,
              `DO NOT just list the findings — actually edit the code files and fix them.`,
            ].join("\n"),
          },
        ],
      };
    } catch (error) {
      const message = error instanceof Error ? error.message : String(error);
      return {
        content: [{ type: "text" as const, text: `UI review failed: ${message}` }],
        isError: true,
      };
    }
  }
);
```
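The success and error branches above both return an MCP content array. A simplified sketch of the shapes involved (the type names here are illustrative, not the SDK's actual exports):

```typescript
// Text and image items as emitted by the review_ui handler.
type TextContent = { type: "text"; text: string };
type ImageContent = { type: "image"; data: string; mimeType: string };

type ReviewUiResult = {
  content: (TextContent | ImageContent)[];
  isError?: boolean; // set only on the failure path
};

// The catch branch collapses any error into a single text item:
const failure: ReviewUiResult = {
  content: [{ type: "text", text: "UI review failed: example error message" }],
  isError: true,
};
```

On success, the array interleaves text blocks (summary, audit report, prompts) with one image block carrying the base64 screenshot.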
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It adds valuable operational context ('FREE', 'runs entirely within Claude Code', 'No API keys needed') but omits the tool's safety profile (read-only vs. destructive), its side effects, and the structure of the data it returns, despite the complex multi-step nature of the pipeline.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
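One way to close this gap is to declare MCP tool annotations alongside the description. A hedged sketch using the ToolAnnotations fields from the MCP specification (whether review_ui is truly read-only is an assumption: the audit appears only to collect data, while fixes are applied by the agent afterward):

```typescript
// Illustrative ToolAnnotations object for review_ui.
const reviewUiAnnotations = {
  title: "Review UI",
  readOnlyHint: true,      // assumption: the audit pipeline itself changes no files
  destructiveHint: false,  // fixes are applied by the agent, not by this tool
  openWorldHint: true,     // it loads the user-supplied URL in a browser
};
```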

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with clear prioritization ('THE PRIMARY TOOL'), but the four-step 'MUST' post-processing instructions are overly verbose and belong in an agent's system prompt rather than in the tool description, detracting from conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tool (screenshot plus multiple audits) with no output schema, the description adequately explains the workflow and mentions the 'expert frontend review methodology' in the return, but it remains vague about the specific data structures returned ('ALL data') and the audit types performed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage, establishing a baseline score of 3. The description's mention of 'Captures a screenshot' implicitly contextualizes width/height, but it does not elaborate on the required parameters (url, codeDirectory) beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
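For instance, valid viewport ranges could be enforced in the handler rather than left implicit. A hypothetical guard (the 320–3840 and 240–2160 bounds are illustrative assumptions, not from the source):

```typescript
// Clamp viewport dimensions to a plausible range before launching the browser.
function clampViewport(width = 1440, height = 900) {
  const clamp = (v: number, lo: number, hi: number) =>
    Math.min(Math.max(v, lo), hi);
  return { width: clamp(width, 320, 3840), height: clamp(height, 240, 2160) };
}
```

Documenting the same bounds in the parameter descriptions would let an agent pick valid values on the first attempt.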

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity with 'Fully automated UI review pipeline' and clear verbs (captures, runs, returns). The 'THE PRIMARY TOOL' designation effectively distinguishes it from specialized siblings like accessibility_audit or screenshot.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit trigger phrases ('review my UI', 'audit my frontend', 'find UI issues') for when to select this tool. Clearly positions it as the comprehensive option versus specialized single-purpose tools in the sibling list.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/prembobby39-gif/uimax-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server