runPerformanceAudit

Analyze webpage performance metrics to identify optimization opportunities and improve loading speed.

Instructions

Run a performance audit on the current page

Input Schema


No arguments

Implementation Reference

  • Core handler function that executes the performance audit using Lighthouse and processes the results into an AI-optimized report format.
    export async function runPerformanceAudit(
      url: string
    ): Promise<AIOptimizedPerformanceReport> {
      try {
        const lhr = await runLighthouseAudit(url, [AuditCategory.PERFORMANCE]);
        return extractAIOptimizedData(lhr, url);
      } catch (error) {
        throw new Error(
          `Performance audit failed: ${
            error instanceof Error ? error.message : String(error)
          }`
        );
      }
    }
  • MCP server tool registration for 'runPerformanceAudit', which proxies requests to the browser-tools-server's /performance-audit endpoint.
    server.tool(
      "runPerformanceAudit",
      "Run a performance audit on the current page",
      {},
      async () => {
        return await withServerConnection(async () => {
          try {
            // Simplified approach - let the browser connector handle the current tab and URL
            console.log(
              `Sending POST request to http://${discoveredHost}:${discoveredPort}/performance-audit`
            );
            const response = await fetch(
              `http://${discoveredHost}:${discoveredPort}/performance-audit`,
              {
                method: "POST",
                headers: {
                  "Content-Type": "application/json",
                  Accept: "application/json",
                },
                body: JSON.stringify({
                  category: AuditCategory.PERFORMANCE,
                  source: "mcp_tool",
                  timestamp: Date.now(),
                }),
              }
            );
    
            // Log the response status
            console.log(`Performance audit response status: ${response.status}`);
    
            if (!response.ok) {
              const errorText = await response.text();
              console.error(`Performance audit error: ${errorText}`);
              throw new Error(`Server returned ${response.status}: ${errorText}`);
            }
    
            const json = await response.json();
    
            // flatten it by merging metadata with the report contents
            if (json.report) {
              const { metadata, report } = json;
              const flattened = {
                ...metadata,
                ...report,
              };
    
              return {
                content: [
                  {
                    type: "text",
                    text: JSON.stringify(flattened, null, 2),
                  },
                ],
              };
            } else {
              // Return as-is if it's not in the new format
              return {
                content: [
                  {
                    type: "text",
                    text: JSON.stringify(json, null, 2),
                  },
                ],
              };
            }
          } catch (error) {
            const errorMessage =
              error instanceof Error ? error.message : String(error);
            console.error("Error in performance audit:", errorMessage);
            return {
              content: [
                {
                  type: "text",
                  text: `Failed to run performance audit: ${errorMessage}`,
                },
              ],
            };
          }
        });
      }
    );
  • HTTP endpoint handler setup for /performance-audit (invoked via setupPerformanceAudit()), which calls the runPerformanceAudit lighthouse function.
    private setupAuditEndpoint(
      auditType: string,
      endpoint: string,
      auditFunction: (url: string) => Promise<LighthouseReport>
    ) {
      // Add server identity validation endpoint
      this.app.get("/.identity", (req, res) => {
        res.json({
          signature: "mcp-browser-connector-24x7",
          version: "1.2.0",
        });
      });
    
      this.app.post(endpoint, async (req: any, res: any) => {
        try {
          console.log(`${auditType} audit request received`);
    
          // Get URL using our helper method
          const url = await this.getUrlForAudit();
    
          if (!url) {
            console.log(`No URL available for ${auditType} audit`);
            return res.status(400).json({
              error: `URL is required for ${auditType} audit. Make sure you navigate to a page in the browser first, and the browser-tool extension tab is open.`,
            });
          }
    
          // If we're using the stored URL (not from request body), log it now
          if (!req.body?.url && url === currentUrl) {
            console.log(`Using stored URL for ${auditType} audit:`, url);
          }
    
          // Check if we're using the default URL
          if (url === "about:blank") {
            console.log(`Cannot run ${auditType} audit on about:blank`);
            return res.status(400).json({
              error: `Cannot run ${auditType} audit on about:blank`,
            });
          }
    
          console.log(`Preparing to run ${auditType} audit for: ${url}`);
    
          // Run the audit using the provided function
          try {
            const result = await auditFunction(url);
    
            console.log(`${auditType} audit completed successfully`);
            // Return the results
            res.json(result);
          } catch (auditError) {
            console.error(`${auditType} audit failed:`, auditError);
            const errorMessage =
              auditError instanceof Error
                ? auditError.message
                : String(auditError);
            res.status(500).json({
              error: `Failed to run ${auditType} audit: ${errorMessage}`,
            });
          }
        } catch (error) {
          console.error(`Error in ${auditType} audit endpoint:`, error);
          const errorMessage =
            error instanceof Error ? error.message : String(error);
          res.status(500).json({
            error: `Error in ${auditType} audit endpoint: ${errorMessage}`,
          });
        }
      });
    }
  • Core helper that launches a headless browser instance and runs the Lighthouse audit engine.
    export async function runLighthouseAudit(
      url: string,
      categories: string[]
    ): Promise<LighthouseResult> {
      console.log(`Starting Lighthouse ${categories.join(", ")} audit for: ${url}`);
    
      if (!url || url === "about:blank") {
        console.error("Invalid URL for Lighthouse audit");
        throw new Error(
          "Cannot run audit on an empty page or about:blank. Please navigate to a valid URL first."
        );
      }
    
      try {
        // Always use a dedicated headless browser for audits
        console.log("Using dedicated headless browser for audit");
    
        // Determine if this is a performance audit - we need to load all resources for performance audits
        const isPerformanceAudit = categories.includes(AuditCategory.PERFORMANCE);
    
        // For performance audits, we want to load all resources
        // For accessibility or other audits, we can block non-essential resources
        try {
          const { port } = await connectToHeadlessBrowser(url, {
            blockResources: !isPerformanceAudit,
          });
    
          console.log(`Connected to browser on port: ${port}`);
    
          // Create Lighthouse config
          const { flags, config } = createLighthouseConfig(categories);
          flags.port = port;
    
          console.log(
            `Running Lighthouse with categories: ${categories.join(", ")}`
          );
          const runnerResult = await lighthouse(url, flags as Flags, config);
          console.log("Lighthouse scan completed");
    
          if (!runnerResult?.lhr) {
            console.error("Lighthouse audit failed to produce results");
            throw new Error("Lighthouse audit failed to produce results");
          }
    
          // Schedule browser cleanup after a delay to allow for subsequent audits
          scheduleBrowserCleanup();
    
          // Return the result
          const result = runnerResult.lhr;
    
          return result;
        } catch (browserError) {
          // Check if the error is related to Chrome/Edge not being available
          const errorMessage =
            browserError instanceof Error
              ? browserError.message
              : String(browserError);
          if (
            errorMessage.includes("Chrome could not be found") ||
            errorMessage.includes("Failed to launch browser") ||
            errorMessage.includes("spawn ENOENT")
          ) {
            throw new Error(
              "Chrome or Edge browser could not be found. Please ensure that Chrome or Edge is installed on your system to run audits."
            );
          }
          // Re-throw other errors
          throw browserError;
        }
      } catch (error) {
        console.error("Lighthouse audit failed:", error);
        // Schedule browser cleanup even if the audit fails
        scheduleBrowserCleanup();
        throw new Error(
          `Lighthouse audit failed: ${
            error instanceof Error ? error.message : String(error)
          }`
        );
      }
    }
  • Type definitions for the AI-optimized performance report structure returned by the tool.
    export interface PerformanceReportContent {
      score: number; // Overall score (0-100)
      audit_counts: {
        // Counts of different audit types
        failed: number;
        passed: number;
        manual: number;
        informative: number;
        not_applicable: number;
      };
      metrics: AIOptimizedMetric[];
      opportunities: AIOptimizedOpportunity[];
      page_stats?: AIPageStats; // Optional page statistics
      prioritized_recommendations?: string[]; // Ordered list of recommendations
    }
    
    /**
     * Full performance report implementing the base LighthouseReport interface
     */
    export type AIOptimizedPerformanceReport =
      LighthouseReport<PerformanceReportContent>;

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states what the tool does but provides no information about what the audit entails, what metrics are measured, whether it requires specific page conditions, what the output format might be, or any side effects. The description is minimal and lacks behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that states the core functionality without any wasted words. It's appropriately sized for a zero-parameter tool and gets straight to the point.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of performance auditing, the lack of annotations, and no output schema, the description is insufficient. It doesn't explain what a 'performance audit' means in this context, what gets measured, what the expected output might be, or how it differs from other audit tools. For a specialized tool with rich siblings, more context is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters with 100% schema description coverage, so the baseline is 4. The description doesn't need to compensate for missing parameter documentation since there are no parameters to document. The description appropriately doesn't mention parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('run a performance audit') and target ('on the current page'), providing a specific verb+resource combination. However, it doesn't differentiate from sibling audit tools like runAccessibilityAudit, runBestPracticesAudit, runSEOAudit, or runNextJSAudit, which all follow the same 'run X audit on current page' pattern.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided about when to use this tool versus alternatives. The description doesn't mention what constitutes a performance audit, when it's appropriate, or how it differs from other audit tools in the sibling list. There's no context about prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Sugatraj/Cursor-Browser-Tools-MCP'
