Glama

performance_audit

Analyze web page performance by measuring Core Web Vitals, load times, resource usage, and JavaScript heap to identify optimization opportunities.

Instructions

Measure Core Web Vitals and performance metrics: FCP, LCP, CLS, TBT, load time, resource count, DOM size, and JS heap usage.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| `url` | Yes | URL of the page to measure | (none) |
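
A minimal call payload satisfying this schema (the URL is a placeholder):

```json
{
  "url": "https://example.com"
}
```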

Implementation Reference

  • The `measurePerformance` function is the core handler for the `performance_audit` tool. It initializes a browser page, injects performance observers for Core Web Vitals (LCP, CLS, TBT), navigates to the target URL, and calculates page load metrics including DOM node count and JS heap size.
    export async function measurePerformance(
      url: string
    ): Promise<PerformanceMetrics> {
      const page = await createPage(1440, 900);
    
      try {
        // Set up performance observers before navigation
        await page.evaluateOnNewDocument(() => {
          window.__perfMetrics = {
            lcp: null,
            cls: 0,
            tbt: 0,
            fid: null,
          };
    
          // Largest Contentful Paint
          const lcpObserver = new PerformanceObserver((list) => {
            const entries = list.getEntries();
            const lastEntry = entries[entries.length - 1];
            if (lastEntry) {
              window.__perfMetrics.lcp = lastEntry.startTime;
            }
          });
          lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });
    
          // Cumulative Layout Shift
          const clsObserver = new PerformanceObserver((list) => {
            for (const entry of list.getEntries()) {
              // @ts-expect-error LayoutShift API
              if (!entry.hadRecentInput) {
                // @ts-expect-error LayoutShift API
                window.__perfMetrics.cls += entry.value;
              }
            }
          });
          clsObserver.observe({ type: "layout-shift", buffered: true });
    
          // Long Tasks (for TBT approximation)
          const longTaskObserver = new PerformanceObserver((list) => {
            for (const entry of list.getEntries()) {
              const blockingTime = entry.duration - 50;
              if (blockingTime > 0) {
                window.__perfMetrics.tbt += blockingTime;
              }
            }
          });
          longTaskObserver.observe({ type: "longtask", buffered: true });
        });
    
        // Navigate
        const startTime = Date.now();
        await page.goto(url, {
          waitUntil: "networkidle2",
          timeout: 30000,
        });
    
        // Wait a bit for metrics to settle
        await new Promise((resolve) => setTimeout(resolve, 3000));
    
        // Collect all metrics
        const metrics = await page.evaluate(() => {
          const navigation = performance.getEntriesByType(
            "navigation"
          )[0] as PerformanceNavigationTiming;
    
          const paintEntries = performance.getEntriesByType("paint");
          const fpEntry = paintEntries.find((e) => e.name === "first-paint");
          const fcpEntry = paintEntries.find(
            (e) => e.name === "first-contentful-paint"
          );
    
          // Resource metrics
          const resources = performance.getEntriesByType(
            "resource"
          ) as PerformanceResourceTiming[];
          const totalResourceSize = resources.reduce(
            (sum, r) => sum + (r.transferSize || 0),
            0
          );
    
          // DOM metrics
          const domNodes = document.querySelectorAll("*").length;
    
          // Memory (Chrome only)
          // @ts-expect-error Chrome-specific API
          const memInfo = performance.memory;
    
          return {
            loadTime: navigation
              ? navigation.loadEventEnd - navigation.startTime
              : 0,
            domContentLoaded: navigation
              ? navigation.domContentLoadedEventEnd - navigation.startTime
              : 0,
            firstPaint: fpEntry?.startTime ?? null,
            firstContentfulPaint: fcpEntry?.startTime ?? null,
            largestContentfulPaint: window.__perfMetrics?.lcp ?? null,
            cumulativeLayoutShift: window.__perfMetrics?.cls ?? null,
            totalBlockingTime: window.__perfMetrics?.tbt ?? null,
            domNodes,
            resourceCount: resources.length,
            totalResourceSize,
            jsHeapSize: memInfo?.usedJSHeapSize ?? null,
          };
        });
    
        return {
          url,
          timestamp: new Date().toISOString(),
          ...metrics,
        };
      } finally {
        await closePage(page);
      }
    }
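
The long-task observer above approximates Total Blocking Time by summing each task's duration beyond the 50 ms threshold. A standalone sketch of that calculation (the function name is hypothetical; the 50 ms cutoff comes from the code above):

```typescript
// Approximate TBT from a list of long-task durations (in ms):
// each task longer than 50 ms contributes its excess over 50 ms.
function approximateTbt(taskDurations: number[]): number {
  return taskDurations.reduce(
    (tbt, duration) => tbt + Math.max(0, duration - 50),
    0
  );
}

// Tasks of 120 ms, 40 ms, and 70 ms block for 70 + 0 + 20 = 90 ms total.
const tbt = approximateTbt([120, 40, 70]);
```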
  • src/server.ts:529-556 (registration)
    The `performance_audit` tool is registered in `src/server.ts` using `server.tool`. It takes a `url` as input and calls `measurePerformance` from `src/tools/performance.ts`.
      "performance_audit",
      "Measure Core Web Vitals and performance metrics: FCP, LCP, CLS, TBT, load time, resource count, DOM size, and JS heap usage.",
      {
        url: z.string().url().describe("URL of the page to measure"),
      },
      async ({ url }) => {
        try {
          const result = await measurePerformance(url);
          const report = formatPerformanceReport(result);
    
          return {
            content: [
              { type: "text" as const, text: report },
              {
                type: "text" as const,
                text: `\n\n<raw_data>\n${JSON.stringify(result, null, 2)}\n</raw_data>`,
              },
            ],
          };
        } catch (error) {
          const message = error instanceof Error ? error.message : String(error);
          return {
            content: [{ type: "text" as const, text: `Performance audit failed: ${message}` }],
            isError: true,
          };
        }
      }
    );
  • The `formatPerformanceReport` function in `src/tools/performance.ts` is a helper utility that processes the raw `PerformanceMetrics` object into a readable markdown report used by the `performance_audit` tool.
    export function formatPerformanceReport(metrics: PerformanceMetrics): string {
      const rating = (
        value: number | null,
        good: number,
        poor: number
      ): string => {
        if (value === null) return "N/A";
        if (value <= good) return `${value.toFixed(0)}ms (Good)`;
        if (value <= poor) return `${value.toFixed(0)}ms (Needs Improvement)`;
        return `${value.toFixed(0)}ms (Poor)`;
      };
    
      const clsRating = (value: number | null): string => {
        if (value === null) return "N/A";
        if (value <= 0.1) return `${value.toFixed(3)} (Good)`;
        if (value <= 0.25) return `${value.toFixed(3)} (Needs Improvement)`;
        return `${value.toFixed(3)} (Poor)`;
      };
    
      const formatBytes = (bytes: number): string => {
        if (bytes < 1024) return `${bytes} B`;
        if (bytes < 1048576) return `${(bytes / 1024).toFixed(1)} KB`;
        return `${(bytes / 1048576).toFixed(1)} MB`;
      };
    
      return [
        `## Performance Metrics`,
        ``,
        `**URL:** ${metrics.url}`,
        `**Measured:** ${metrics.timestamp}`,
        ``,
        `### Core Web Vitals`,
        `| Metric | Value |`,
        `|--------|-------|`,
        `| First Contentful Paint (FCP) | ${rating(metrics.firstContentfulPaint, 1800, 3000)} |`,
        `| Largest Contentful Paint (LCP) | ${rating(metrics.largestContentfulPaint, 2500, 4000)} |`,
        `| Cumulative Layout Shift (CLS) | ${clsRating(metrics.cumulativeLayoutShift)} |`,
        `| Total Blocking Time (TBT) | ${rating(metrics.totalBlockingTime, 200, 600)} |`,
        ``,
        `### Page Metrics`,
        `- **Load Time:** ${metrics.loadTime.toFixed(0)}ms`,
        `- **DOM Content Loaded:** ${metrics.domContentLoaded.toFixed(0)}ms`,
        `- **DOM Nodes:** ${metrics.domNodes}`,
        `- **Resources:** ${metrics.resourceCount}`,
        `- **Total Transfer Size:** ${formatBytes(metrics.totalResourceSize)}`,
        metrics.jsHeapSize
          ? `- **JS Heap Size:** ${formatBytes(metrics.jsHeapSize)}`
          : "",
      ]
        .filter(Boolean)
        .join("\n");
    }
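
The `rating` and `clsRating` helpers above bucket each metric against the standard Core Web Vitals thresholds. A standalone sketch of the same bucketing logic (the `classify` name is hypothetical; the thresholds are copied from the code above):

```typescript
type Rating = "Good" | "Needs Improvement" | "Poor";

// Bucket a metric value against its (good, poor) thresholds,
// mirroring the rating()/clsRating() helpers above.
function classify(value: number, good: number, poor: number): Rating {
  if (value <= good) return "Good";
  if (value <= poor) return "Needs Improvement";
  return "Poor";
}

// Thresholds used in formatPerformanceReport:
// FCP 1800/3000 ms, LCP 2500/4000 ms, CLS 0.1/0.25 (unitless), TBT 200/600 ms.
const lcpRating = classify(3000, 2500, 4000); // "Needs Improvement"
```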
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses what gets measured (the specific metrics list) but does not clarify the safety profile (read-only vs. destructive), expected execution time, or the fact that it launches a headless browser. 'Measure' implies read-only, but explicit confirmation is absent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence construction with zero waste. Front-loaded with the action verb 'Measure' followed by the specific metric categories and a complete enumeration of measured values. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with no output schema, the description adequately compensates by listing the specific metrics returned (FCP, LCP, etc.). Missing output schema disclosure and safety annotations prevent a perfect score, but it is sufficient for agent selection and invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage for the single 'url' parameter. The description focuses entirely on output metrics rather than input semantics, but since the schema fully documents the parameter, the baseline score of 3 is appropriate without penalty.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (Measure) and target (Core Web Vitals and performance metrics). It distinguishes from siblings like lighthouse_audit and accessibility_audit by enumerating specific technical metrics (FCP, LCP, CLS, TBT, etc.), signaling this is a focused performance measurement tool rather than a general audit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage guidance through the specificity of metrics listed (Core Web Vitals), suggesting use when these specific performance indicators are needed. However, it lacks explicit guidance on when to choose this over the sibling lighthouse_audit tool, or prerequisites like requiring a public URL.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
