Glama

Server Configuration

Describes the environment variables required to run the server.

Name | Required | Description | Default

No arguments

Capabilities

Features and capabilities supported by this server

Capability | Details
tools | { "listChanged": true }
prompts | { "listChanged": true }

Tools

Functions exposed to the LLM to take actions

Name | Description
review_ui

THE PRIMARY TOOL — Fully automated UI review pipeline. Captures a screenshot, runs accessibility/performance/code audits, then returns ALL data along with an expert frontend review methodology so you can generate a comprehensive review and implement fixes.

Use this when the user asks to "review my UI", "audit my frontend", or "find UI issues". After receiving the results, you MUST:

  1. Study the screenshot carefully for visual/UX issues

  2. Analyze the audit data following the expert methodology provided

  3. Generate a comprehensive review with SPECIFIC fixes (exact CSS values, code changes)

  4. Implement the fixes directly in the codebase

This tool is FREE — it runs entirely within Claude Code using the user's existing plan. No API keys needed.
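Once the server is connected, an MCP client triggers this pipeline with a standard JSON-RPC tools/call request. The argument names below (e.g. url) are illustrative, since the tool's exact input schema is not listed on this page:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "review_ui",
    "arguments": {
      "url": "http://localhost:3000"
    }
  }
}
```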

export_report

Generate a standalone HTML report file with all audit findings embedded. Runs the full review pipeline (screenshot, accessibility, performance, code analysis) and outputs a beautiful, shareable HTML file with zero external dependencies.

Use this when the user wants a downloadable/shareable report of their UI review.

This tool is FREE — runs entirely within Claude Code.

quick_review

Quick design-only review. Captures a screenshot and returns it with a focused design review methodology. No code analysis, no performance audit — just visual/UX feedback. Great for rapid design iteration.

After receiving the screenshot, analyze it as a senior UI designer and provide 5-10 high-impact observations with specific fixes.

This tool is FREE — runs entirely within Claude Code.

screenshot

Capture a screenshot of a webpage. Returns a PNG image that you can visually analyze for design issues, layout problems, and UI quality.

responsive_screenshots

Capture screenshots at mobile (375px), tablet (768px), and desktop (1440px) viewports. Perfect for reviewing responsive design.

check_dark_mode

Detect whether a webpage supports dark mode. Captures two screenshots — one in light mode and one with prefers-color-scheme: dark emulated — then compares them. Returns both screenshots and a difference percentage. Great for checking if dark mode is properly implemented.

compare_screenshots

Before/after visual comparison with pixel-level diffing. Captures screenshots of two URLs at the same viewport size, computes an accurate pixel-level difference using pixelmatch, and returns BOTH images plus a red-highlighted diff image showing exactly which pixels changed. Use this to verify UI changes, compare staging vs production, or check before/after states of a redesign.
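The tool computes its diff with pixelmatch. As a rough, dependency-free sketch of the idea, a difference percentage can be derived by comparing RGBA buffers pixel by pixel; note that pixelmatch itself uses a perceptual color metric, so the per-channel threshold here is a simplification:

```typescript
// Counts pixels whose RGBA channels differ by more than `threshold`,
// and returns the changed fraction as a percentage.
// (pixelmatch uses a perceptual YIQ color distance; this is a simplification.)
function diffPercent(
  a: Uint8Array, // RGBA buffer of the "before" screenshot
  b: Uint8Array, // RGBA buffer of the "after" screenshot, same dimensions
  threshold = 0, // max per-channel difference still counted as "same"
): number {
  if (a.length !== b.length) throw new Error("images must have equal dimensions");
  const pixels = a.length / 4;
  let changed = 0;
  for (let i = 0; i < a.length; i += 4) {
    // A pixel counts as changed if any of its four channels differs enough.
    for (let c = 0; c < 4; c++) {
      if (Math.abs(a[i + c] - b[i + c]) > threshold) {
        changed++;
        break;
      }
    }
  }
  return (changed / pixels) * 100;
}
```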

semantic_compare

AI-powered visual comparison. Captures before/after screenshots and provides a structured methodology for Claude to semantically evaluate whether UI changes match the intended design request. Goes beyond pixel diffing to understand intent.

Returns both screenshots as images, a pixel-level diff image, the difference percentage, and a detailed semantic methodology prompt. Claude's vision analyzes the screenshots to determine if the changes match what was requested, checking for regressions and unintended side effects.

This tool is FREE — it runs entirely within Claude Code using the user's existing plan. No API keys needed.

accessibility_audit

Run an automated accessibility audit using axe-core. Checks for WCAG 2.1 Level A and AA violations, reporting issues by severity with specific fix instructions.

performance_audit

Measure Core Web Vitals and performance metrics: First Contentful Paint (FCP), Largest Contentful Paint (LCP), Cumulative Layout Shift (CLS), Total Blocking Time (TBT), load time, resource count, DOM size, and JS heap usage.

lighthouse_audit

Run a full Lighthouse audit against a URL. Returns scores for Performance, Accessibility, Best Practices, and SEO (0-100), plus detailed audit findings for render-blocking resources, image optimization, unused code, and more. Heavier than performance_audit but provides industry-standard Lighthouse scores.

seo_audit

Run a comprehensive SEO audit. Checks 18 SEO signals including meta tags, heading hierarchy, Open Graph tags, Twitter cards, structured data (JSON-LD), canonical URLs, image alt text, and more. Returns a 0-100 score and specific recommendations for each failing check.

Use this when the user wants to check their page's SEO health, improve search engine visibility, or ensure proper social sharing metadata.

This tool is FREE — runs entirely within Claude Code.
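Assuming each of the 18 checks carries equal weight (the tool's actual weighting is not documented here), the 0-100 score reduces to a simple ratio:

```typescript
// Hypothetical equal-weight scoring: passed checks over total checks,
// rounded to a 0-100 integer. The real tool's weighting may differ.
function seoScore(passed: number, total = 18): number {
  if (total <= 0) throw new Error("total must be positive");
  return Math.round((Math.min(passed, total) / total) * 100);
}
```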

pwa_audit

Check Progressive Web App readiness: installable manifest, service worker, HTTPS, offline capability, and more. Runs a full Lighthouse audit under the hood and extracts all PWA-related audit results with pass/fail for each requirement.

security_audit

Check security posture via Lighthouse: HTTPS usage, mixed content, CSP headers, vulnerable JavaScript libraries, external links without noopener, and more. Returns pass/fail findings with severity levels.

unused_code

Find unused JavaScript and CSS on a page. Runs Lighthouse and extracts the unused-javascript and unused-css-rules audits, showing each resource with total bytes, unused bytes, and potential savings. Great for reducing bundle size.

lcp_optimization

Deep Largest Contentful Paint (LCP) analysis. Identifies the LCP element, measures TTFB, resource load time, and render delay. Provides specific optimization suggestions to improve LCP below the 2.5s threshold.
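The phase breakdown follows the standard Core Web Vitals model, in which total LCP is the sum of four phases; a minimal sketch, with timestamps in milliseconds from navigation start:

```typescript
// The standard LCP phase model: total LCP = TTFB + resource load delay
// + resource load time + element render delay.
interface LcpPhases {
  ttfb: number;        // time to first byte
  loadDelay: number;   // gap between TTFB and the LCP resource request
  loadTime: number;    // time spent downloading the LCP resource
  renderDelay: number; // gap between download finish and paint
}

function lcpFromPhases(p: LcpPhases): number {
  return p.ttfb + p.loadDelay + p.loadTime + p.renderDelay;
}

// Classify against the published Core Web Vitals thresholds for LCP.
function rateLcp(lcpMs: number): "good" | "needs-improvement" | "poor" {
  if (lcpMs <= 2500) return "good";
  if (lcpMs <= 4000) return "needs-improvement";
  return "poor";
}
```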

resource_analysis

Full resource breakdown of a page: total transfer size, breakdown by type (JS, CSS, images, fonts), number of requests, top 10 largest resources, and render-blocking resources. Helps identify what is making your page heavy.
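A minimal sketch of the grouping step, using a hypothetical simplified ResourceEntry shape (the real tool reads these values from the browser):

```typescript
// Hypothetical simplified resource record; field names are illustrative.
interface ResourceEntry {
  url: string;
  type: "js" | "css" | "image" | "font" | "other";
  bytes: number; // transfer size
}

// Sum transfer sizes per resource type.
function breakdownByType(resources: ResourceEntry[]): Record<string, number> {
  const totals: Record<string, number> = {};
  for (const r of resources) {
    totals[r.type] = (totals[r.type] ?? 0) + r.bytes;
  }
  return totals;
}

// Pick the n largest resources by transfer size.
function largest(resources: ResourceEntry[], n = 10): ResourceEntry[] {
  // Copy before sorting so the input order is preserved.
  return [...resources].sort((a, b) => b.bytes - a.bytes).slice(0, n);
}
```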

analyze_code

Analyze frontend source code for quality issues: accessibility anti-patterns, CSS problems, component complexity, design inconsistencies, and performance concerns.

crawl_and_review

Crawl multiple pages from a starting URL and run accessibility + performance audits on each. Discovers internal links from the start page, deduplicates them, and visits up to maxPages (default 5, max 10). Each page gets a screenshot, axe-core accessibility audit, and Performance API metrics. Does NOT run Lighthouse (too slow for multi-page). Use this to audit an entire site section quickly.
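The link-selection step described above (internal links only, deduplicated, capped at maxPages) can be sketched as follows; the function and parameter names are illustrative, not the tool's actual implementation:

```typescript
// Keep same-origin links, normalize away #fragments, deduplicate,
// and cap at maxPages (clamped to the documented limit of 10).
function selectPages(startUrl: string, links: string[], maxPages = 5): string[] {
  const origin = new URL(startUrl).origin;
  const limit = Math.min(Math.max(maxPages, 1), 10);
  const seen = new Set<string>();
  const pages: string[] = [];
  for (const href of [startUrl, ...links]) {
    let u: URL;
    try {
      u = new URL(href, startUrl); // resolve relative links against the start page
    } catch {
      continue; // skip malformed URLs
    }
    if (u.origin !== origin) continue; // internal links only
    u.hash = ""; // treat #fragment variants as the same page
    const key = u.toString();
    if (seen.has(key)) continue;
    seen.add(key);
    pages.push(key);
    if (pages.length >= limit) break;
  }
  return pages;
}
```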

This tool is FREE — runs entirely within Claude Code.

save_baseline

Save the current audit state for a URL as a baseline snapshot. Runs screenshot, accessibility, performance, and Lighthouse audits, then saves the results to .uimax-history.json in the project directory. Use this to establish a baseline before making changes, so you can compare later.

This tool is FREE — runs entirely within Claude Code.

compare_to_baseline

Compare the current audit state of a URL against its most recent saved baseline. Runs fresh audits, loads the previous baseline from .uimax-history.json, and shows what improved and what regressed. Use this after making changes to verify you improved the metrics you intended.

This tool is FREE — runs entirely within Claude Code.

check_budgets

Check if the current site meets performance budgets defined in .uimaxrc.json. Runs fresh audits and compares results against budget thresholds for Lighthouse scores, Web Vitals, accessibility violations, and code issues. Returns pass/fail with details of any exceeded budgets.

Configure budgets in .uimaxrc.json under the "budgets" key:

{
  "budgets": {
    "lighthouse": { "performance": 90, "accessibility": 95 },
    "webVitals": { "lcp": 2500, "cls": 0.1 },
    "maxAccessibilityViolations": 0
  }
}

This tool is FREE — runs entirely within Claude Code.
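A sketch of how such budget thresholds might be evaluated, mirroring the documented .uimaxrc.json shape; the comparison logic here is an assumption, not the tool's actual code:

```typescript
// Budget shapes mirror the documented .uimaxrc.json "budgets" key.
interface Budgets {
  lighthouse?: Record<string, number>; // minimum scores, 0-100
  webVitals?: Record<string, number>;  // maximum values (ms, or unitless for CLS)
  maxAccessibilityViolations?: number;
}

interface Measured {
  lighthouse: Record<string, number>;
  webVitals: Record<string, number>;
  accessibilityViolations: number;
}

// Returns a list of human-readable budget failures; empty means pass.
function checkBudgets(budgets: Budgets, m: Measured): string[] {
  const failures: string[] = [];
  // Lighthouse scores are floors: measured must be at least the budget.
  for (const [k, min] of Object.entries(budgets.lighthouse ?? {})) {
    if ((m.lighthouse[k] ?? 0) < min) failures.push(`lighthouse.${k}: ${m.lighthouse[k]} < ${min}`);
  }
  // Web Vitals are ceilings: measured must not exceed the budget.
  for (const [k, max] of Object.entries(budgets.webVitals ?? {})) {
    if ((m.webVitals[k] ?? Infinity) > max) failures.push(`webVitals.${k}: ${m.webVitals[k]} > ${max}`);
  }
  if (budgets.maxAccessibilityViolations !== undefined &&
      m.accessibilityViolations > budgets.maxAccessibilityViolations) {
    failures.push(`accessibility violations: ${m.accessibilityViolations} > ${budgets.maxAccessibilityViolations}`);
  }
  return failures;
}
```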

capture_console

Capture all console messages (log, warn, error, info, debug) during page load. Navigates to the URL, listens for console output and uncaught exceptions, then returns structured results with message counts by level. Useful for debugging runtime issues, detecting warnings, and finding errors that only appear in the browser console.

Note: Console messages may contain sensitive data (tokens, user info, etc.) — the output is returned unfiltered.

This tool is FREE — runs entirely within Claude Code.

capture_network

Capture all network requests during page load with status codes, sizes, timing, and resource types. Provides a summary with total requests, failed requests, total transfer size, and breakdown by resource type. Useful for finding failed API calls, slow requests, large assets, and understanding page load behavior.

This tool is FREE — runs entirely within Claude Code.

capture_errors

Capture JavaScript errors, uncaught exceptions, unhandled promise rejections, and failed resource loads (images, scripts, stylesheets, fonts) during page load. Returns structured error list with error kind, message, and source location. Useful for finding runtime JS errors and broken resources that affect user experience.

This tool is FREE — runs entirely within Claude Code.

navigate

Navigate to a URL and return page info. Waits for network idle before returning. Returns the final URL, page title, HTTP status code, and a screenshot so you can visually verify the page loaded correctly.

Use this when you need to open a page before performing interactions, or to verify a page loads successfully.

click

Click an element by CSS selector. Returns a screenshot after the click so you can visually verify the result. Supports standard CSS selectors. If the page hasn't been navigated yet, provide a URL to navigate first.

type_text

Type text into an input field or textarea by CSS selector. Returns a screenshot after typing so you can visually verify the result. Options to clear existing text first and press Enter after typing.

select_option

Select an option from a dropdown (<select>) element by value. Returns a screenshot after selection so you can visually verify the result.

scroll

Scroll the page by a pixel amount or to a specific element. Returns a screenshot after scrolling so you can visually verify the new viewport position.

wait_for

Wait for an element to appear in the DOM. Returns the element's tag name and text content when found. Use this to wait for dynamic content to load before interacting with it.

get_element

Get detailed information about a DOM element: tag name, text content, all attributes, bounding box, and computed styles (color, font, background, display, visibility). Returns a screenshot so you can visually identify the element in context.

get_review_history

View past UIMax reviews for this project. Shows when reviews were run, what scores were achieved, and how many issues were found. Use this to understand the project's frontend health over time.

This tool is FREE — runs entirely within Claude Code.

get_review_stats

Get aggregate statistics across all UIMax reviews for this project. Shows total reviews, score trends, most common issues, and most problematic files.

This tool is FREE — runs entirely within Claude Code.

review_diff

Compare two specific reviews to see what changed. Shows new issues, resolved issues, and score changes.

This tool is FREE — runs entirely within Claude Code.

verify_fixes

Re-run the full audit pipeline after fixes are applied and compare against the original review. Shows a before/after Report Card with grade transitions, resolved issues count, and remaining issues. Closes the review-fix-verify loop.

Use this AFTER implementing fixes from a review_ui run. Pass the same URL and code directory. The tool re-audits everything and shows what improved.

This tool is FREE — runs entirely within Claude Code.

compare_sites

Competitive benchmarking: audit two URLs side-by-side and produce a comparison Report Card. Runs accessibility (axe-core), performance (Core Web Vitals), and SEO audits on both sites concurrently. Returns screenshots of both sites plus a grade comparison table showing which site wins in each category.

Use this when the user wants to benchmark their site against a competitor, compare staging vs production, or evaluate two different sites.

This tool is FREE — runs entirely within Claude Code.

Prompts

Interactive templates invoked by user choice

Name | Description
ui-review | Comprehensive UI review methodology. Use this prompt after running the review_ui tool to get expert-level analysis of the collected data.
responsive-review | Responsive design review methodology. Use after capturing responsive_screenshots to analyze layout across mobile, tablet, and desktop.
quick-design-review | Quick design-only review. Use after taking a screenshot when you just want visual/UX feedback without code analysis.
semantic-compare | Semantic visual comparison methodology. Use after running the semantic_compare tool to guide analysis of whether UI changes match the intended design request.

Resources

Contextual data attached and managed by the client

Name | Description

No resources


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/prembobby39-gif/uimax-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.