runBestPracticesAudit

Audit web pages for best practices to identify performance, accessibility, and SEO issues that need improvement.

Instructions

Run a best practices audit on the current page

Input Schema

No arguments

Implementation Reference

  • The primary handler function that executes the Lighthouse best practices audit by calling runLighthouseAudit and processing results with extractAIOptimizedData.
    export async function runBestPracticesAudit(
      url: string
    ): Promise<AIOptimizedBestPracticesReport> {
      try {
        const lhr = await runLighthouseAudit(url, [AuditCategory.BEST_PRACTICES]);
        return extractAIOptimizedData(lhr, url);
      } catch (error) {
        throw new Error(
          `Best Practices audit failed: ${
            error instanceof Error ? error.message : String(error)
          }`
        );
      }
    }
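The wrap-and-rethrow pattern in this handler can be exercised in isolation with a stub in place of `runLighthouseAudit` (the stub and its `NO_NAVSTART` message are illustrative, not part of the project):

```typescript
// Stub standing in for runLighthouseAudit; the real function drives a headless browser.
async function failingAudit(): Promise<never> {
  throw new Error("NO_NAVSTART");
}

// Same wrap-and-rethrow pattern as runBestPracticesAudit above: the original
// message is preserved inside a prefixed, uniformly shaped error.
async function runAuditSketch(url: string): Promise<unknown> {
  try {
    return await failingAudit();
  } catch (error) {
    throw new Error(
      `Best Practices audit failed: ${
        error instanceof Error ? error.message : String(error)
      }`
    );
  }
}
```

This keeps the error surface of the tool uniform regardless of where inside the audit pipeline the failure originated.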
  • MCP server registration of the 'runBestPracticesAudit' tool, which proxies requests to the browser-tools-server endpoint /best-practices-audit.
    server.tool(
      "runBestPracticesAudit",
      "Run a best practices audit on the current page",
      {},
      async () => {
        return await withServerConnection(async () => {
          try {
            console.log(
              `Sending POST request to http://${discoveredHost}:${discoveredPort}/best-practices-audit`
            );
            const response = await fetch(
              `http://${discoveredHost}:${discoveredPort}/best-practices-audit`,
              {
                method: "POST",
                headers: {
                  "Content-Type": "application/json",
                  Accept: "application/json",
                },
                body: JSON.stringify({
                  source: "mcp_tool",
                  timestamp: Date.now(),
                }),
              }
            );
    
            // Check for errors
            if (!response.ok) {
              const errorText = await response.text();
              throw new Error(`Server returned ${response.status}: ${errorText}`);
            }
    
            const json = await response.json();
    
            // flatten it by merging metadata with the report contents
            if (json.report) {
              const { metadata, report } = json;
              const flattened = {
                ...metadata,
                ...report,
              };
    
              return {
                content: [
                  {
                    type: "text",
                    text: JSON.stringify(flattened, null, 2),
                  },
                ],
              };
            } else {
              // Return as-is if it's not in the new format
              return {
                content: [
                  {
                    type: "text",
                    text: JSON.stringify(json, null, 2),
                  },
                ],
              };
            }
          } catch (error) {
            const errorMessage =
              error instanceof Error ? error.message : String(error);
            console.error("Error in Best Practices audit:", errorMessage);
            return {
              content: [
                {
                  type: "text",
                  text: `Failed to run Best Practices audit: ${errorMessage}`,
                },
              ],
            };
          }
        });
      }
    );
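The flattening step inside the handler can be shown on its own — a minimal sketch with hypothetical `metadata`/`report` values:

```typescript
// Hypothetical response in the { metadata, report } shape the handler expects.
const json = {
  metadata: { url: "https://example.com", device: "desktop" },
  report: { score: 85, issues: [] as string[] },
};

// Merge metadata with the report contents, exactly as the tool handler does.
const { metadata, report } = json;
const flattened = { ...metadata, ...report };
// flattened now carries url, device, score, and issues at the top level,
// so agents receive a single flat JSON object instead of a nested envelope.
```

Note that spread order matters: report fields written after metadata would overwrite any identically named metadata fields.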
  • Type definitions for the Best Practices report structure, including content schema and AI-optimized report type.
    export interface BestPracticesReportContent {
      score: number; // Overall score (0-100)
      audit_counts: {
        // Counts of different audit types
        failed: number;
        passed: number;
        manual: number;
        informative: number;
        not_applicable: number;
      };
      issues: AIBestPracticesIssue[];
      categories: {
        [category: string]: {
          score: number;
          issues_count: number;
        };
      };
      prioritized_recommendations?: string[]; // Ordered list of recommendations
    }
    
    /**
     * Full Best Practices report implementing the base LighthouseReport interface
     */
    export type AIOptimizedBestPracticesReport =
      LighthouseReport<BestPracticesReportContent>;
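A value conforming to `BestPracticesReportContent` might look like the following sketch; all concrete numbers and issue entries are made up for illustration:

```typescript
// Illustrative (made-up) report content matching the interface above.
const exampleReport = {
  score: 78,
  audit_counts: {
    failed: 2,
    passed: 10,
    manual: 1,
    informative: 0,
    not_applicable: 3,
  },
  issues: [
    {
      id: "js-libraries",
      title: "Detected JavaScript libraries",
      impact: "moderate" as const,
      category: "browser-compat",
      score: 0.5,
      details: [{ name: "jQuery", version: "3.6.0" }],
    },
  ],
  categories: {
    "browser-compat": { score: 75, issues_count: 1 },
  },
  prioritized_recommendations: [
    "Resolve 1 browser compatibility issue: outdated libraries",
  ],
};
```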
  • Helper function that transforms raw Lighthouse results into the AI-optimized Best Practices report format, including issue categorization, scoring, and recommendations.
    const extractAIOptimizedData = (
      lhr: LighthouseResult,
      url: string
    ): AIOptimizedBestPracticesReport => {
      const categoryData = lhr.categories[AuditCategory.BEST_PRACTICES];
      const audits = lhr.audits || {};
    
      // Add metadata
      const metadata = {
        url,
        timestamp: lhr.fetchTime || new Date().toISOString(),
        device: lhr.configSettings?.formFactor || "desktop",
        lighthouseVersion: lhr.lighthouseVersion || "unknown",
      };
    
      // Process audit results
      const issues: AIBestPracticesIssue[] = [];
      const categories: { [key: string]: { score: number; issues_count: number } } =
        {
          security: { score: 0, issues_count: 0 },
          trust: { score: 0, issues_count: 0 },
          "user-experience": { score: 0, issues_count: 0 },
          "browser-compat": { score: 0, issues_count: 0 },
          other: { score: 0, issues_count: 0 },
        };
    
      // Counters for audit types
      let failedCount = 0;
      let passedCount = 0;
      let manualCount = 0;
      let informativeCount = 0;
      let notApplicableCount = 0;
    
      // Process failed audits (score < 1)
      const failedAudits = Object.entries(audits)
        .filter(([, audit]) => {
          const score = audit.score;
          return (
            score !== null &&
            score < 1 &&
            audit.scoreDisplayMode !== "manual" &&
            audit.scoreDisplayMode !== "notApplicable"
          );
        })
        .map(([auditId, audit]) => ({ auditId, ...audit }));
    
      // Update counters
      Object.values(audits).forEach((audit) => {
        const { score, scoreDisplayMode } = audit;
    
        if (scoreDisplayMode === "manual") {
          manualCount++;
        } else if (scoreDisplayMode === "informative") {
          informativeCount++;
        } else if (scoreDisplayMode === "notApplicable") {
          notApplicableCount++;
        } else if (score === 1) {
          passedCount++;
        } else if (score !== null && score < 1) {
          failedCount++;
        }
      });
    
      // Process failed audits into AI-friendly format
      failedAudits.forEach((ref: any) => {
        // Determine impact level based on audit score and weight
        let impact: "critical" | "serious" | "moderate" | "minor" = "moderate";
        const score = ref.score || 0;
    
        // Use a more reliable approach to determine impact
        if (score === 0) {
          impact = "critical";
        } else if (score < 0.5) {
          impact = "serious";
        } else if (score < 0.9) {
          impact = "moderate";
        } else {
          impact = "minor";
        }
    
        // Categorize the issue
        let category = "other";
    
        // Security-related issues
        if (
          ref.auditId.includes("csp") ||
          ref.auditId.includes("security") ||
          ref.auditId.includes("vulnerab") ||
          ref.auditId.includes("password") ||
          ref.auditId.includes("cert") ||
          ref.auditId.includes("deprecat")
        ) {
          category = "security";
        }
        // Trust and legitimacy issues
        else if (
          ref.auditId.includes("doctype") ||
          ref.auditId.includes("charset") ||
          ref.auditId.includes("legit") ||
          ref.auditId.includes("trust")
        ) {
          category = "trust";
        }
        // User experience issues
        else if (
          ref.auditId.includes("user") ||
          ref.auditId.includes("experience") ||
          ref.auditId.includes("console") ||
          ref.auditId.includes("errors") ||
          ref.auditId.includes("paste")
        ) {
          category = "user-experience";
        }
        // Browser compatibility issues
        else if (
          ref.auditId.includes("compat") ||
          ref.auditId.includes("browser") ||
          ref.auditId.includes("vendor") ||
          ref.auditId.includes("js-lib")
        ) {
          category = "browser-compat";
        }
    
        // Count issues by category
        categories[category].issues_count++;
    
        // Create issue object
        const issue: AIBestPracticesIssue = {
          id: ref.auditId,
          title: ref.title,
          impact,
          category,
          score: ref.score,
          details: [],
        };
    
        // Extract details if available
        const refDetails = ref.details as BestPracticesAuditDetails | undefined;
        if (refDetails?.items && Array.isArray(refDetails.items)) {
          const itemLimit = DETAIL_LIMITS[impact];
          const detailItems = refDetails.items.slice(0, itemLimit);
    
          detailItems.forEach((item: Record<string, unknown>) => {
            issue.details = issue.details || [];
    
            // Different audits have different detail structures
            const detail: Record<string, string> = {};
    
            if (typeof item.name === "string") detail.name = item.name;
            if (typeof item.version === "string") detail.version = item.version;
            if (typeof item.issue === "string") detail.issue = item.issue;
            if (item.value !== undefined) detail.value = String(item.value);
    
            // For JS libraries, extract name and version
            if (
              ref.auditId === "js-libraries" &&
              typeof item.name === "string" &&
              typeof item.version === "string"
            ) {
              detail.name = item.name;
              detail.version = item.version;
            }
    
            // Add other generic properties that might exist
            for (const [key, value] of Object.entries(item)) {
              if (!detail[key] && typeof value === "string") {
                detail[key] = value;
              }
            }
    
            issue.details.push(detail as any);
          });
        }
    
        issues.push(issue);
      });
    
      // Calculate category scores (0-100)
      Object.keys(categories).forEach((category) => {
        // Simplified scoring: if there are issues in this category, score is reduced proportionally
        const issueCount = categories[category].issues_count;
        if (issueCount > 0) {
          // More issues = lower score, max penalty of 25 points per issue
          const penalty = Math.min(100, issueCount * 25);
          categories[category].score = Math.max(0, 100 - penalty);
        } else {
          categories[category].score = 100;
        }
      });
    
      // Generate prioritized recommendations
      const prioritized_recommendations: string[] = [];
    
      // Prioritize recommendations by category with most issues
      Object.entries(categories)
        .filter(([_, data]) => data.issues_count > 0)
        .sort(([_, a], [__, b]) => b.issues_count - a.issues_count)
        .forEach(([category, data]) => {
          let recommendation = "";
    
          switch (category) {
            case "security":
              recommendation = `Address ${data.issues_count} security issues: vulnerabilities, CSP, deprecations`;
              break;
            case "trust":
              recommendation = `Fix ${data.issues_count} trust & legitimacy issues: doctype, charset`;
              break;
            case "user-experience":
              recommendation = `Improve ${data.issues_count} user experience issues: console errors, user interactions`;
              break;
            case "browser-compat":
              recommendation = `Resolve ${data.issues_count} browser compatibility issues: outdated libraries, vendor prefixes`;
              break;
            default:
              recommendation = `Fix ${data.issues_count} other best practice issues`;
          }
    
          prioritized_recommendations.push(recommendation);
        });
    
      // Return the optimized report
      return {
        metadata,
        report: {
          score: categoryData?.score ? Math.round(categoryData.score * 100) : 0,
          audit_counts: {
            failed: failedCount,
            passed: passedCount,
            manual: manualCount,
            informative: informativeCount,
            not_applicable: notApplicableCount,
          },
          issues,
          categories,
          prioritized_recommendations,
        },
      };
    };
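Two of the rules buried in `extractAIOptimizedData` — impact bucketing by audit score and the per-category penalty scoring — can be pulled out as standalone helpers for clarity:

```typescript
// Standalone versions of two rules used in extractAIOptimizedData above.

// Impact bucketing: score 0 is critical, below 0.5 serious, below 0.9 moderate,
// anything at or above 0.9 (but still failing) is minor.
type Impact = "critical" | "serious" | "moderate" | "minor";
function impactFor(score: number): Impact {
  if (score === 0) return "critical";
  if (score < 0.5) return "serious";
  if (score < 0.9) return "moderate";
  return "minor";
}

// Category scoring: each issue subtracts 25 points, floored at 0;
// a category with no issues scores 100.
function categoryScore(issuesCount: number): number {
  if (issuesCount === 0) return 100;
  const penalty = Math.min(100, issuesCount * 25);
  return Math.max(0, 100 - penalty);
}
```

So four or more issues in one category drive its score to 0 — a deliberately coarse heuristic rather than Lighthouse's own weighted scoring.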
  • Core helper function that runs the actual Lighthouse audit using a dedicated headless browser instance.
    export async function runLighthouseAudit(
      url: string,
      categories: string[]
    ): Promise<LighthouseResult> {
      console.log(`Starting Lighthouse ${categories.join(", ")} audit for: ${url}`);
    
      if (!url || url === "about:blank") {
        console.error("Invalid URL for Lighthouse audit");
        throw new Error(
          "Cannot run audit on an empty page or about:blank. Please navigate to a valid URL first."
        );
      }
    
      try {
        // Always use a dedicated headless browser for audits
        console.log("Using dedicated headless browser for audit");
    
        // Determine if this is a performance audit - we need to load all resources for performance audits
        const isPerformanceAudit = categories.includes(AuditCategory.PERFORMANCE);
    
        // For performance audits, we want to load all resources
        // For accessibility or other audits, we can block non-essential resources
        try {
          const { port } = await connectToHeadlessBrowser(url, {
            blockResources: !isPerformanceAudit,
          });
    
          console.log(`Connected to browser on port: ${port}`);
    
          // Create Lighthouse config
          const { flags, config } = createLighthouseConfig(categories);
          flags.port = port;
    
          console.log(
            `Running Lighthouse with categories: ${categories.join(", ")}`
          );
          const runnerResult = await lighthouse(url, flags as Flags, config);
          console.log("Lighthouse scan completed");
    
          if (!runnerResult?.lhr) {
            console.error("Lighthouse audit failed to produce results");
            throw new Error("Lighthouse audit failed to produce results");
          }
    
          // Schedule browser cleanup after a delay to allow for subsequent audits
          scheduleBrowserCleanup();
    
          // Return the result
          const result = runnerResult.lhr;
    
          return result;
        } catch (browserError) {
          // Check if the error is related to Chrome/Edge not being available
          const errorMessage =
            browserError instanceof Error
              ? browserError.message
              : String(browserError);
          if (
            errorMessage.includes("Chrome could not be found") ||
            errorMessage.includes("Failed to launch browser") ||
            errorMessage.includes("spawn ENOENT")
          ) {
            throw new Error(
              "Chrome or Edge browser could not be found. Please ensure that Chrome or Edge is installed on your system to run audits."
            );
          }
          // Re-throw other errors
          throw browserError;
        }
      } catch (error) {
        console.error("Lighthouse audit failed:", error);
        // Schedule browser cleanup even if the audit fails
        scheduleBrowserCleanup();
        throw new Error(
          `Lighthouse audit failed: ${
            error instanceof Error ? error.message : String(error)
          }`
        );
      }
    }
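The two guard checks in `runLighthouseAudit` — URL validation and detection of a missing Chrome/Edge install — can likewise be sketched as standalone predicates:

```typescript
// Standalone versions of the two guard checks used in runLighthouseAudit above.

// URL validation: empty pages and about:blank cannot be audited.
function isAuditableUrl(url: string): boolean {
  return Boolean(url) && url !== "about:blank";
}

// Error classification: messages that indicate Chrome/Edge is not installed,
// mapped to a single user-facing error by the audit runner.
function isBrowserMissingError(message: string): boolean {
  return (
    message.includes("Chrome could not be found") ||
    message.includes("Failed to launch browser") ||
    message.includes("spawn ENOENT")
  );
}
```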
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While 'run a best practices audit' implies a read-only analysis operation, the description doesn't specify what happens during execution (does it block the page? how long does it take?), what permissions are needed, what kind of output to expect, or whether it has any side effects on the page state.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise - a single sentence that communicates the essential action and scope without any wasted words. It's front-loaded with the core functionality and doesn't include unnecessary elaboration or repetition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For an audit tool with no annotations and no output schema, the description is insufficient. It doesn't explain what 'best practices' means in this context, what standards or criteria are used, what format the results will be in, or how comprehensive the audit is. Given the complexity implied by 'best practices audit' and the lack of structured documentation, more context is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so the baseline is 4. The description appropriately doesn't discuss parameters since none exist, and it correctly focuses on the tool's purpose rather than attempting to describe non-existent inputs.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('run a best practices audit') and target ('on the current page'), providing a specific verb+resource combination. However, it doesn't differentiate this tool from sibling audit tools like 'runAccessibilityAudit', 'runPerformanceAudit', or 'runSEOAudit', which would require specifying what type of best practices it covers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal guidance - it only indicates the tool should be used on 'the current page'. There's no explicit guidance about when to use this tool versus other audit tools (like accessibility or performance audits), no prerequisites mentioned, and no exclusions or alternatives provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
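Taken together, the review's gaps suggest a fuller registration string. The following is one hypothetical rewrite (the wording is ours, not the project's), folding in behavior, prerequisites, output shape, and sibling-tool guidance:

```typescript
// Hypothetical replacement for the tool's description string; wording is illustrative.
const improvedDescription =
  "Run a Lighthouse best-practices audit (security, trust, user experience, " +
  "browser compatibility) on the current page. Read-only: it does not modify " +
  "page state, but it launches a headless Chrome/Edge instance and may take " +
  "several seconds. Returns a JSON report with a 0-100 score, categorized " +
  "issues, and prioritized recommendations. For accessibility, performance, " +
  "or SEO checks, use the corresponding audit tools instead.";
```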
