Glama

get_performance_metrics

Capture runtime performance metrics like JS heap size and DOM node count from Chrome tabs to monitor memory usage, detect leaks, and profile performance.

Instructions

Captures runtime performance metrics including JS heap size, DOM node count, and layout timing. Side effects: none (read-only snapshot). Prerequisites: requires an active Chrome tab. Returns: JSON object mapping metric names to numeric values (e.g., JSHeapUsedSize, LayoutCount). Use this to monitor memory usage, detect memory leaks, or profile performance. Alternatives: 'profile_page_performance' for detailed tracing, browser DevTools Performance tab.
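The returned JSON can be consumed directly by an agent. A minimal sketch of parsing such a snapshot, assuming metric names like JSHeapUsedSize, Nodes, and LayoutCount as exposed by the Chrome DevTools Protocol's Performance.getMetrics (the values below are illustrative, not real output):

```python
import json

# Hypothetical example of the JSON snapshot this tool returns.
# Metric names follow the Chrome DevTools Protocol; values are made up.
raw = '''
{
  "JSHeapUsedSize": 12582912,
  "JSHeapTotalSize": 33554432,
  "Nodes": 1450,
  "LayoutCount": 12
}
'''

metrics = json.loads(raw)

# Heap sizes are reported in bytes; convert to MB for readability.
heap_used_mb = metrics["JSHeapUsedSize"] / (1024 * 1024)
print(f"JS heap used: {heap_used_mb:.1f} MB across {metrics['Nodes']} DOM nodes")
```

Because the tool is a read-only snapshot, repeated calls are safe and cheap compared to a full performance trace.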

Input Schema


No arguments

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: 'read-only snapshot' indicates non-destructive behavior, 'requires an active Chrome tab' specifies prerequisites, and 'Returns: JSON object mapping metric names to numeric values' outlines the output format. However, it lacks details on potential rate limits or error conditions, which could be relevant for runtime tools.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by structured details (side effects, prerequisites, returns, usage, alternatives). Every sentence adds value without redundancy, making it efficient and well-organized for quick comprehension by an AI agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (performance monitoring with no parameters) and the lack of annotations or an output schema, the description is largely complete: it covers purpose, behavior, prerequisites, return format, and alternatives. However, it could also note limitations (e.g., Chrome-only support, point-in-time snapshot semantics) and error handling, which slightly reduces completeness for a tool in a browser automation context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so the baseline is 4. The description adds no parameter-specific information, which is appropriate since there are no parameters to document. It focuses on the tool's function and output instead, which is sufficient given the parameterless design.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('captures runtime performance metrics') and resources ('JS heap size, DOM node count, layout timing'), distinguishing it from siblings like 'profile_page_performance' for detailed tracing. It explicitly identifies what metrics are captured, making the purpose unambiguous and distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('to monitor memory usage, detect memory leaks, or profile performance') and when to use alternatives ('profile_page_performance' for detailed tracing, browser DevTools Performance tab). It also specifies prerequisites ('requires an active Chrome tab'), offering clear context for usage decisions.
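The memory-leak use case quoted above amounts to comparing two snapshots taken around a suspected action. A minimal sketch of that pattern, where the two dicts stand in for real results from consecutive get_performance_metrics calls (the data here is hypothetical):

```python
def heap_growth(before: dict, after: dict) -> int:
    """Return JS heap growth in bytes between two metric snapshots."""
    return after["JSHeapUsedSize"] - before["JSHeapUsedSize"]

# Stand-ins for two snapshots captured before and after a suspected action.
before = {"JSHeapUsedSize": 10_000_000, "Nodes": 1200}
after = {"JSHeapUsedSize": 14_500_000, "Nodes": 3400}

growth = heap_growth(before, after)
if growth > 1_000_000 and after["Nodes"] > before["Nodes"]:
    print(f"Possible leak: heap grew by {growth} bytes, "
          f"DOM nodes grew from {before['Nodes']} to {after['Nodes']}")
```

Sustained growth in both JSHeapUsedSize and the DOM node count across repeated snapshots is a common leak signal; a single pair of readings can be noisy because garbage collection timing varies.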

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/raultov/chrome-debug-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.