
pilot_perf

Measure page load performance metrics including DNS, TCP, SSL, TTFB, and DOM parsing to diagnose slow pages and identify bottlenecks.

Instructions

Measure page load performance metrics from the Navigation Timing API. Use when the user wants to diagnose slow page loads, benchmark performance, or identify bottlenecks in DNS lookup, connection, server response, or DOM parsing.

Parameters: (none)

Returns: Table of timing metrics in milliseconds — dns, tcp, ssl, ttfb (time to first byte), download, domParse, domReady, and load.
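Note that the returned metrics mix two kinds of values: most are durations of successive phases, while domReady and load are cumulative from navigation start. A minimal sketch of that relationship, using hypothetical sample values:

```typescript
// Hypothetical sample of the returned metrics (all values in ms).
const sample = {
  dns: 20, tcp: 35, ssl: 15, ttfb: 120,
  download: 40, domParse: 80, domReady: 350, load: 600,
};

// dns, tcp, ttfb, download, and domParse are per-phase durations
// (ssl is a sub-span of tcp), so their sum fits inside the cumulative load total.
const phaseSum = sample.dns + sample.tcp + sample.ttfb + sample.download + sample.domParse;
console.log(phaseSum, phaseSum <= sample.load); // 295 true
```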

Errors:

  • "No navigation timing data available": The page has not completed a navigation or was loaded via non-standard means. Navigate to the page first with pilot_navigate, then reload.
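On the client side, the error string and the padded metrics table can be told apart with a small parser; a sketch, assuming the `key value ms` line layout the handler produces (parsePerfOutput is a hypothetical helper, not part of the tool):

```typescript
// Sketch: turn the tool's text output into a metrics record,
// returning null when the documented error string comes back instead.
function parsePerfOutput(text: string): Record<string, number> | null {
  if (text.startsWith('No navigation timing data available')) return null;
  const metrics: Record<string, number> = {};
  for (const line of text.split('\n')) {
    const m = line.match(/^(\w+)\s+(-?\d+)ms$/);
    if (m) metrics[m[1]] = Number(m[2]);
  }
  return metrics;
}

console.log(parsePerfOutput('No navigation timing data available.')); // null
console.log(parsePerfOutput('dns          20ms\nttfb         120ms')); // { dns: 20, ttfb: 120 }
```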

Input Schema


No arguments

Implementation Reference

  • The handler function for the 'pilot_perf' tool. It uses the Navigation Timing API to measure page load performance metrics (dns, tcp, ssl, ttfb, download, domParse, domReady, load) and returns them as a formatted table.
      server.tool(
        'pilot_perf',
        `Measure page load performance metrics from the Navigation Timing API.
    Use when the user wants to diagnose slow page loads, benchmark performance, or identify bottlenecks in DNS lookup, connection, server response, or DOM parsing.
    
    Parameters: (none)
    
    Returns: Table of timing metrics in milliseconds — dns, tcp, ssl, ttfb (time to first byte), download, domParse, domReady, and load.
    
    Errors:
    - "No navigation timing data available": The page has not completed a navigation or was loaded via non-standard means. Navigate to the page first with pilot_navigate, then reload.`,
        {},
        async () => {
          await bm.ensureBrowser();
          try {
            const timings = await bm.getPage().evaluate(() => {
              const nav = performance.getEntriesByType('navigation')[0] as PerformanceNavigationTiming;
              if (!nav) return 'No navigation timing data available.';
              return {
                dns: Math.round(nav.domainLookupEnd - nav.domainLookupStart),
                tcp: Math.round(nav.connectEnd - nav.connectStart),
                ssl: Math.round(nav.secureConnectionStart > 0 ? nav.connectEnd - nav.secureConnectionStart : 0),
                ttfb: Math.round(nav.responseStart - nav.requestStart),
                download: Math.round(nav.responseEnd - nav.responseStart),
                domParse: Math.round(nav.domInteractive - nav.responseEnd),
                domReady: Math.round(nav.domContentLoadedEventEnd - nav.startTime),
                load: Math.round(nav.loadEventEnd - nav.startTime),
              };
            });
            if (typeof timings === 'string') return { content: [{ type: 'text' as const, text: timings }] };
            const text = Object.entries(timings).map(([k, v]) => `${k.padEnd(12)} ${v}ms`).join('\n');
            return { content: [{ type: 'text' as const, text }] };
          } catch (err) {
            return { content: [{ type: 'text' as const, text: wrapError(err) }], isError: true };
          }
        }
      );
  • Registration call in registerAllTools: registerInspectionTools(effectiveServer, bm), which registers the pilot_perf tool among the other inspection tools.
    registerInspectionTools(effectiveServer, bm);
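Once the metrics are back, an agent might rank the independent phase durations to surface the likely bottleneck; a hedged sketch with hypothetical values (domReady and load are cumulative, and ssl overlaps tcp, so they are excluded from the comparison):

```typescript
// Hypothetical per-phase timings in ms; ttfb is deliberately the outlier here.
const phases: Record<string, number> = {
  dns: 20, tcp: 35, ttfb: 450, download: 40, domParse: 80,
};

// Sort descending by duration and take the top entry.
const [slowest, duration] = Object.entries(phases)
  .sort(([, a], [, b]) => b - a)[0];

console.log(`${slowest} dominates at ${duration}ms`); // ttfb dominates at 450ms
```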
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full behavioral disclosure. It describes return values and an error condition, but does not explicitly state that the tool is read-only and has no side effects. It implies a need for prior navigation; overall, disclosure is adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured into clear paragraphs: purpose/use, parameters (none), returns, and errors. It is concise, front-loaded with the main purpose, and every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no parameters and no output schema, the description covers the return metrics and a common error scenario. It is complete enough for an agent to invoke correctly; additional guidance on interpreting the metrics would be welcome but is not essential.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, and the description correctly states 'Parameters: (none)'. With no parameters, the baseline score is 4, and the description adds clarity by listing the output and error cases, which suffices.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states that the tool measures page load performance metrics from the Navigation Timing API, with clear use cases like diagnosing slow page loads and identifying bottlenecks. It lists the specific metrics returned, leaving no ambiguity about what the tool does.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description tells users when to use the tool (to diagnose performance issues) and provides a critical prerequisite: first navigate and reload the page. It includes an error message guiding users to that prerequisite. However, it does not explicitly mention when not to use it or suggest alternative tools, though the sibling list has no direct competitors.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
