
lighthouse_audit

Run comprehensive Lighthouse audits to measure website performance, accessibility, best practices, and SEO scores. Get detailed findings on render-blocking resources, image optimization, and unused code for actionable improvements.

Instructions

Run a full Lighthouse audit against a URL. Returns scores for Performance, Accessibility, Best Practices, and SEO (0-100), plus detailed audit findings for render-blocking resources, image optimization, unused code, and more. Heavier than performance_audit but provides industry-standard Lighthouse scores.

Input Schema

Name | Required | Description                                            | Default
url  | Yes      | URL of the page to audit (e.g., http://localhost:3000) | (none)

Implementation Reference

  • The MCP tool registration and handler for 'lighthouse_audit'.
    server.tool(
      "lighthouse_audit",
      "Run a full Lighthouse audit against a URL. Returns scores for Performance, Accessibility, Best Practices, and SEO (0-100), plus detailed audit findings for render-blocking resources, image optimization, unused code, and more. Heavier than performance_audit but provides industry-standard Lighthouse scores.",
      {
        url: z.string().url().describe("URL of the page to audit (e.g., http://localhost:3000)"),
      },
      async ({ url }) => {
        try {
          const result = await runLighthouse(url);
          const report = formatLighthouseReport(result);
    
          return {
            content: [
              { type: "text" as const, text: report },
              {
                type: "text" as const,
                text: `\n\n<raw_data>\n${JSON.stringify(result, null, 2)}\n</raw_data>`,
              },
            ],
          };
        } catch (error) {
          const message = error instanceof Error ? error.message : String(error);
          return {
            content: [{ type: "text" as const, text: `Lighthouse audit failed: ${message}` }],
            isError: true,
          };
        }
      }
    );
  • The primary logic implementation for running the Lighthouse audit.
    export async function runLighthouse(
      url: string,
      options?: { readonly chromePath?: string }
    ): Promise<LighthouseResult> {
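The excerpt stops at the signature, so the body is not shown. Internally the function presumably maps Lighthouse's raw 0-1 category scores onto the 0-100 integers the tool reports; that conversion step can be sketched as a pure helper. The `RawCategories` shape and the `toScores` name are assumptions for illustration, not the server's actual code:

```typescript
// Lighthouse reports each category score as a number in [0, 1], or null
// when the category could not be computed. The tool surfaces 0-100 values.
interface RawCategories {
  readonly [id: string]: { readonly score: number | null };
}

interface LighthouseScores {
  readonly performance: number | null;
  readonly accessibility: number | null;
  readonly bestPractices: number | null;
  readonly seo: number | null;
}

// Scale a 0-1 score to 0-100, preserving null for uncomputable categories.
const scale = (s: number | null): number | null =>
  s === null ? null : Math.round(s * 100);

export function toScores(categories: RawCategories): LighthouseScores {
  return {
    performance: scale(categories["performance"]?.score ?? null),
    accessibility: scale(categories["accessibility"]?.score ?? null),
    bestPractices: scale(categories["best-practices"]?.score ?? null),
    seo: scale(categories["seo"]?.score ?? null),
  };
}
```

Note that missing or null categories stay null rather than defaulting to 0, so a failed audit category is distinguishable from a genuinely zero score.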
  • Type definition for the Lighthouse audit results.
    export interface LighthouseResult {
      readonly scores: LighthouseScores;
      readonly audits: readonly LighthouseAudit[];
      readonly url: string;
      readonly timestamp: string;
      readonly lighthouseVersion: string;
      readonly runWarnings: readonly string[];
    }
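The handler calls `formatLighthouseReport(result)` before returning, but that function is not excerpted. A minimal sketch of a formatter over this interface follows; the exact layout, and the `LighthouseAudit` fields beyond what the interface above implies, are assumptions:

```typescript
interface LighthouseScores {
  readonly performance: number | null;
  readonly accessibility: number | null;
  readonly bestPractices: number | null;
  readonly seo: number | null;
}

// Assumed minimal audit shape; the server's real type may carry more fields.
interface LighthouseAudit {
  readonly id: string;
  readonly title: string;
  readonly displayValue?: string;
}

interface LighthouseResult {
  readonly scores: LighthouseScores;
  readonly audits: readonly LighthouseAudit[];
  readonly url: string;
  readonly timestamp: string;
  readonly lighthouseVersion: string;
  readonly runWarnings: readonly string[];
}

export function formatLighthouseReport(result: LighthouseResult): string {
  // Render "93/100" for computed scores and "n/a" for null ones.
  const fmt = (n: number | null): string => (n === null ? "n/a" : `${n}/100`);
  const lines = [
    `Lighthouse ${result.lighthouseVersion} report for ${result.url} (${result.timestamp})`,
    `Performance: ${fmt(result.scores.performance)}`,
    `Accessibility: ${fmt(result.scores.accessibility)}`,
    `Best Practices: ${fmt(result.scores.bestPractices)}`,
    `SEO: ${fmt(result.scores.seo)}`,
  ];
  for (const audit of result.audits) {
    lines.push(`- ${audit.title}${audit.displayValue ? `: ${audit.displayValue}` : ""}`);
  }
  for (const warning of result.runWarnings) {
    lines.push(`Warning: ${warning}`);
  }
  return lines.join("\n");
}
```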
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses the return format (scores 0-100, detailed findings categories) and operational weight ('Heavier'), but omits critical operational details like timeout behavior, authentication requirements, browser launch behavior, or error handling strategies.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two highly efficient sentences with zero redundancy. The first sentence front-loads the core action and return value structure; the second provides essential comparative context. Every clause earns its place by conveying distinct operational or comparative information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description adequately compensates by detailing the return structure (four score categories, specific audit findings). It covers the essential selection context (comparison to performance_audit) and input requirements. Minor deductions for lacking error condition documentation or specific Lighthouse version information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for its single parameter ('URL of the page to audit'). The description references 'against a URL' but adds no additional semantic value regarding validation rules, supported protocols, or environmental constraints beyond what the schema already provides, warranting the baseline score of 3.
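To make the flagged gap concrete: `z.string().url()` checks only that the string is a well-formed URL, not which schemes are accepted, so the schema alone cannot tell an agent whether, say, a `file:` URL is auditable. A stricter guard the description could document is sketched below using the standard `URL` class; this is an illustration of the missing constraint, not the server's actual validation:

```typescript
// Accept only schemes a Lighthouse run can realistically audit.
const ALLOWED_PROTOCOLS = new Set(["http:", "https:"]);

export function isAuditableUrl(input: string): boolean {
  let parsed: URL;
  try {
    parsed = new URL(input);
  } catch {
    return false; // not a well-formed absolute URL at all
  }
  return ALLOWED_PROTOCOLS.has(parsed.protocol);
}
```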

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Run a full Lighthouse audit'), target resource ('URL'), and scope ('full' audit). It explicitly distinguishes from the sibling tool 'performance_audit' by noting it is 'Heavier' but provides 'industry-standard Lighthouse scores,' giving the agent clear selection criteria.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implicit usage guidance through explicit comparison to the sibling 'performance_audit' ('Heavier than performance_audit'), helping the agent understand the trade-off between resource cost and output standardization. However, it lacks explicit 'when not to use' guidance or comparisons to other siblings like 'quick_review' or 'accessibility_audit'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
