validate_performance_webpagetest

Analyze website performance using WebPageTest to measure loading speed, identify bottlenecks, and optimize user experience through automated testing.

Instructions

Analyze website performance using WebPageTest via browser automation. Free 300 tests/month. Returns test ID immediately or waits for full results.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| url | Yes | The URL to analyze | |
| location | No | Test location (e.g., Dulles:Chrome) | |
| runs | No | Number of test runs | 1 |
| waitForResults | No | Wait for the test to complete; when false, returns the test ID immediately | false |
| timeout | No | Timeout in milliseconds | 300000 (5 minutes) |
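
Only url is required. An illustrative arguments object exercising every field (all values here are examples, not recommendations):

    {
      "url": "https://example.com",
      "location": "Dulles:Chrome",
      "runs": 3,
      "waitForResults": true,
      "timeout": 600000
    }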

Implementation Reference

  • Primary handler function that executes the WebPageTest tool logic: it submits the test via Playwright automation and, when requested, waits for completion and extracts performance metrics and grades (see the usage sketch after this list).
    export async function analyzeWebPageTest(
      url: string,
      options: WebPageTestOptions = {}
    ): Promise<WebPageTestResult> {
      try {
        // Submit test
        const { testId, resultsUrl } = await submitWebPageTest(url, options);
    
        // If not waiting for results, return immediately with test ID
        if (!options.waitForResults) {
          return {
            tool: 'webpagetest',
            success: true,
            url,
            test_id: testId,
            results_url: resultsUrl,
            status: 'pending',
          };
        }
    
        // Wait for results
        const metrics = await waitForWebPageTestResults(testId, options.timeout);
        const grades = await getWebPageTestGrades(testId);
    
        return {
          tool: 'webpagetest',
          success: true,
          url,
          test_id: testId,
          results_url: resultsUrl,
          summary: metrics,
          performance_grade: grades.performance,
          security_grade: grades.security,
          status: 'complete',
        };
      } catch (error) {
        return {
          tool: 'webpagetest',
          success: false,
          url,
          status: 'error',
          error: error instanceof Error ? error.message : String(error),
        };
      }
    }
  • Zod input validation schema for the validate_performance_webpagetest tool arguments (a validation example appears after this list).
    const WebPageTestArgsSchema = z.object({
      url: z.string().url(),
      location: z.string().optional(),
      runs: z.number().optional(),
      waitForResults: z.boolean().optional(),
      timeout: z.number().optional(),
    });
  • MCP tool registration (index.ts:158-172) including name, description, and JSON schema for inputs.
    {
      name: 'validate_performance_webpagetest',
      description: 'Analyze website performance using WebPageTest via browser automation. Free 300 tests/month. Returns test ID immediately or waits for full results.',
      inputSchema: {
        type: 'object',
        properties: {
          url: { type: 'string', description: 'The URL to analyze' },
          location: { type: 'string', description: 'Test location (e.g., Dulles:Chrome)' },
          runs: { type: 'number', description: 'Number of test runs (default: 1)' },
          waitForResults: { type: 'boolean', description: 'Wait for test to complete (default: false, returns test ID immediately)' },
          timeout: { type: 'number', description: 'Timeout in milliseconds (default: 300000 = 5 minutes)' },
        },
        required: ['url'],
      },
    },
  • Server-side tool dispatcher case that validates arguments and calls the analyzeWebPageTest implementation (an example of the serialized pending response appears after this list).
    case 'validate_performance_webpagetest': {
      const validatedArgs = WebPageTestArgsSchema.parse(args);
      const result = await analyzeWebPageTest(validatedArgs.url, {
        location: validatedArgs.location,
        runs: validatedArgs.runs,
        waitForResults: validatedArgs.waitForResults,
        timeout: validatedArgs.timeout,
      });
      return { content: [{ type: 'text', text: JSON.stringify(result, null, 2) }] };
    }
  • Helper function that uses Playwright to submit a performance test on the WebPageTest.org website and extract the test ID from the results URL (a hedged sketch of the unshown results-polling helper follows this list).
    async function submitWebPageTest(
      url: string,
      options: WebPageTestOptions = {}
    ): Promise<{ testId: string; resultsUrl: string }> {
      const browserManager = await getBrowserManager();
      const page = await browserManager.newPage();
    
      try {
        const timeout = options.timeout || 300000; // 5 minutes default
    
        // Navigate to WebPageTest
        await page.goto('https://www.webpagetest.org/', { timeout });
    
        // Enter URL in the test input
        await page.fill('input[name="url"]', url);
    
        // Select location if provided (default is usually Dulles:Chrome)
        if (options.location) {
          await page.selectOption('select[name="location"]', options.location);
        }
    
        // Set number of runs if provided
        if (options.runs) {
          await page.fill('input[name="runs"]', options.runs.toString());
        }
    
        // Submit the test
        await Promise.all([
          page.waitForNavigation({ timeout }),
          page.click('input[type="submit"], button[type="submit"]'),
        ]);
    
        // Wait for redirect to results page
        await page.waitForURL(/.*\/result\/.*/, { timeout });
    
        const resultsUrl = page.url();
        const testIdMatch = resultsUrl.match(/\/result\/([^\/]+)/);
    
        if (!testIdMatch) {
          throw new Error('Could not extract test ID from results URL');
        }
    
        const testId = testIdMatch[1];
    
        return { testId, resultsUrl };
      } finally {
        await page.close();
      }
    }
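
A usage sketch for the analyzeWebPageTest handler above; the target URL and option values are illustrative:

    // Fire-and-forget: returns a pending result with a test ID to check later.
    const pending = await analyzeWebPageTest('https://example.com');
    console.log(pending.test_id, pending.results_url); // status: 'pending'

    // Blocking: waits (up to `timeout`) for full metrics and grades.
    const full = await analyzeWebPageTest('https://example.com', {
      runs: 3,
      waitForResults: true,
      timeout: 600000, // 10 minutes
    });
    console.log(full.performance_grade, full.summary); // status: 'complete'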
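
Because the Zod schema uses z.string().url(), malformed input fails before any browser work starts; a quick illustration with hypothetical values:

    // Valid arguments parse (and are typed) as-is.
    const ok = WebPageTestArgsSchema.parse({ url: 'https://example.com', runs: 3 });

    // A malformed URL throws a ZodError before any browser is launched.
    WebPageTestArgsSchema.parse({ url: 'not-a-url' }); // throws ZodError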
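
Given the dispatcher's JSON.stringify call and the handler's pending branch, the serialized text content for a fire-and-forget call takes this shape (the test ID and URLs are illustrative):

    {
      "tool": "webpagetest",
      "success": true,
      "url": "https://example.com",
      "test_id": "250101_ABC123",
      "results_url": "https://www.webpagetest.org/result/250101_ABC123/",
      "status": "pending"
    }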
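
The helpers waitForWebPageTestResults and getWebPageTestGrades are called by the handler but not shown on this page. Below is a minimal polling sketch of the first, assuming the same page-scraping approach; the selectors, metric keys, and poll interval are assumptions, not the actual implementation:

    // Hypothetical sketch: reload the results page until metrics render, then
    // scrape a few headline numbers. All selectors below are assumed.
    async function waitForWebPageTestResults(
      testId: string,
      timeout = 300000
    ): Promise<Record<string, string>> {
      const browserManager = await getBrowserManager();
      const page = await browserManager.newPage();
      try {
        const deadline = Date.now() + timeout;
        const resultsUrl = `https://www.webpagetest.org/result/${testId}/`;
        while (Date.now() < deadline) {
          await page.goto(resultsUrl, { timeout: 30000 });
          // Assumed marker that the metrics table has rendered.
          if (await page.locator('#tableResults').count()) {
            return {
              // Metric keys and selectors are illustrative placeholders.
              loadTime: (await page.textContent('#LoadTime')) ?? '',
              firstByte: (await page.textContent('#TTFB')) ?? '',
            };
          }
          await page.waitForTimeout(10000); // poll every 10 seconds
        }
        throw new Error(`WebPageTest ${testId} did not finish within ${timeout}ms`);
      } finally {
        await page.close();
      }
    }
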
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the free tier limit ('Free 300 tests/month'), the immediate vs. waiting behavior ('Returns test ID immediately or waits for full results'), and the automation method ('via browser automation'). It doesn't cover error handling, rate limits beyond the monthly cap, or authentication needs, but provides substantial operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise and well-structured in three sentences. The first sentence states the core purpose, the second provides important constraints (free tier), and the third explains the key behavioral choice. Every sentence earns its place with zero waste, and information is front-loaded appropriately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 5 parameters, no annotations, and no output schema, the description provides adequate but incomplete context. It covers the purpose, constraints, and key behavior but doesn't explain what the output looks like (test ID format, result structure) or potential error conditions. Given the complexity and lack of structured output documentation, there are clear gaps in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 5 parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema. It mentions the waitForResults behavior generally but doesn't elaborate on parameter semantics. The baseline score of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Analyze website performance using WebPageTest via browser automation.' It specifies the verb (analyze), resource (website performance), and method (WebPageTest via browser automation). However, it doesn't explicitly distinguish this tool from its sibling performance tools like 'validate_performance_gtmetrix' or 'validate_performance_pagespeed' beyond mentioning WebPageTest specifically.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some usage context with 'Free 300 tests/month' and mentions the waitForResults behavior, but it doesn't explicitly state when to use this tool versus alternatives like validate_performance_gtmetrix or validate_performance_pagespeed. The guidance is implied rather than explicit, lacking clear when/when-not instructions or named alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
