pilot_perf

Measure page load performance timings including DNS, TCP, TTFB, DOM parse, and load metrics for web performance analysis.

Instructions

Get page load performance timings (DNS, TCP, TTFB, DOM parse, load).

Input Schema


No arguments

Implementation Reference

  • The implementation and registration of the 'pilot_perf' MCP tool, which collects and returns performance timings from the browser.
    server.tool(
      'pilot_perf',
      'Get page load performance timings (DNS, TCP, TTFB, DOM parse, load).',
      {},
      async () => {
        await bm.ensureBrowser();
        try {
          const timings = await bm.getPage().evaluate(() => {
            const nav = performance.getEntriesByType('navigation')[0] as PerformanceNavigationTiming;
            if (!nav) return 'No navigation timing data available.';
            return {
              dns: Math.round(nav.domainLookupEnd - nav.domainLookupStart),
              tcp: Math.round(nav.connectEnd - nav.connectStart),
              ssl: Math.round(nav.secureConnectionStart > 0 ? nav.connectEnd - nav.secureConnectionStart : 0),
              ttfb: Math.round(nav.responseStart - nav.requestStart),
              download: Math.round(nav.responseEnd - nav.responseStart),
              domParse: Math.round(nav.domInteractive - nav.responseEnd),
              domReady: Math.round(nav.domContentLoadedEventEnd - nav.startTime),
              load: Math.round(nav.loadEventEnd - nav.startTime),
            };
          });
          if (typeof timings === 'string') return { content: [{ type: 'text' as const, text: timings }] };
          const text = Object.entries(timings).map(([k, v]) => `${k.padEnd(12)} ${v}ms`).join('\n');
          return { content: [{ type: 'text' as const, text }] };
        } catch (err) {
          // The error-handling tail is truncated in the source listing; a plausible
          // completion returns the error message as text content.
          return { content: [{ type: 'text' as const, text: `Error: ${err instanceof Error ? err.message : String(err)}` }] };
        }
      },
    );

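The handler's per-line formatting step can be isolated as a small pure function. A sketch (the `formatTimings` name is mine, not from the source):

```typescript
// Sketch of the handler's formatting step: each metric key padded to 12
// characters, followed by its value in milliseconds, one metric per line.
function formatTimings(timings: Record<string, number>): string {
  return Object.entries(timings)
    .map(([k, v]) => `${k.padEnd(12)} ${v}ms`)
    .join('\n');
}

console.log(formatTimings({ dns: 12, ttfb: 87 }));
```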
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full burden. It successfully enumerates which metrics are captured (DNS, TTFB, etc.), but fails to disclose operational behavior: whether the tool reads current page state or triggers a reload, its safety profile (read-only vs. destructive), or the structure of the returned data, given that no output schema exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
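The missing disclosure could be supplied via the MCP spec's tool annotations. A hedged sketch of what annotations for this tool might look like, as a plain object (the field names follow the spec's ToolAnnotations; the values are my inference from the implementation, not from the source):

```typescript
// Hypothetical annotations for pilot_perf, using the MCP ToolAnnotations fields.
// Values inferred from the handler, which only reads existing navigation timing.
const pilotPerfAnnotations = {
  title: 'Page Load Performance',
  readOnlyHint: true,      // reads timing entries; does not mutate the page
  destructiveHint: false,  // no state is modified or destroyed
  idempotentHint: true,    // repeated calls report the same load's timings
  openWorldHint: false,    // operates only on the already-open page
};

console.log(JSON.stringify(pilotPerfAnnotations));
```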

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single efficient sentence with action-front-loaded structure. Parenthetical metric list adds specificity without verbosity. No redundant or wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a zero-input tool; the specific metric enumeration partially compensates for the missing output schema. However, a brief mention of the return format (object vs. array) or of prerequisite page state would make it fully complete, given that no structured output definition exists.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
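Since the tool declares no output schema, a client agent has to infer the return shape from the handler. A sketch of that shape as a TypeScript interface, plus a runtime guard distinguishing the timings object from the handler's fallback string (the `PerfTimings` and `isPerfTimings` names are mine, not from the source):

```typescript
// Hypothetical shape of the timings object the handler returns.
interface PerfTimings {
  dns: number; tcp: number; ssl: number; ttfb: number;
  download: number; domParse: number; domReady: number; load: number;
}

// Runtime guard: the handler may instead return a plain string
// ('No navigation timing data available.') when no entry exists.
function isPerfTimings(v: unknown): v is PerfTimings {
  return typeof v === 'object' && v !== null &&
    ['dns', 'tcp', 'ssl', 'ttfb', 'download', 'domParse', 'domReady', 'load']
      .every((k) => typeof (v as Record<string, unknown>)[k] === 'number');
}
```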

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Baseline score of 4 for zero-parameter tools. The schema is an empty object with (trivially) 100% coverage, and the description appropriately reflects that no configuration is needed to 'Get' these timings.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specific verb 'Get' + resource 'page load performance timings', with parenthetical enumeration of specific metrics (DNS, TCP, TTFB, DOM parse, load) that clearly distinguishes this from sibling interaction tools like pilot_click or pilot_screenshot.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Usage is implied (retrieve performance data), but the description lacks explicit guidance on sequencing: e.g., whether a page must already be loaded, whether the tool triggers a fresh navigation, or how it differs from pilot_navigate, which might also load a page.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
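One way to close this gap is to fold the sequencing guidance into the description string itself. An illustrative rewrite (only the first sentence is from the source; the read-only and sequencing claims are my inference from the handler, which reads existing timing entries without navigating):

```typescript
// Hypothetical expanded description embedding the usage guidance the review asks for.
const improvedDescription =
  'Get page load performance timings (DNS, TCP, TTFB, DOM parse, load). ' +
  'Read-only: reports timings for the page already loaded in the browser and ' +
  'does not trigger a navigation. Call pilot_navigate first, then this tool ' +
  'to measure that load.';

console.log(improvedDescription);
```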

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/TacosyHorchata/Pilot'

If you have feedback or need assistance with the MCP directory API, please join our Discord server