chrome_main_thread_hotspots

List Chrome main-thread tasks sorted by wall duration to find responsiveness bottlenecks. Compare CPU and wall times to pinpoint hot tasks during scroll or load.

Instructions

Top Chrome main-thread tasks by wall duration: id, name, task_type, thread_name, process_name, dur_ms, cpu_pct (thread_dur/dur), thread_dur_ms. Uses chrome.tasks and thread.is_main_thread = 1 (tid == pid per Linux convention).

Use when: investigating main-thread responsiveness, finding hot tasks during scroll/load, comparing CPU vs wall time, scoping to one renderer in multi-renderer traces.

Don't use for: non-Chrome traces (will error). For background (non-main) thread tasks, drop to execute_sql against chrome.tasks directly.
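For that background-thread case, a query along these lines could serve as a starting point. This is a sketch, not the tool's actual SQL: it assumes Perfetto's `chrome.tasks` standard-library module exposes a `chrome_tasks` table with the columns named above and a `utid` joinable to `thread`, which should be verified against your trace_processor version.

```sql
-- Sketch: background-thread Chrome tasks by wall duration.
-- Durations are in nanoseconds, hence / 1e6 to get milliseconds.
INCLUDE PERFETTO MODULE chrome.tasks;

SELECT
  ct.id,
  ct.name,
  ct.task_type,
  ct.thread_name,
  ct.process_name,
  ct.dur / 1e6 AS dur_ms,
  ct.thread_dur / 1e6 AS thread_dur_ms
FROM chrome_tasks AS ct
JOIN thread USING (utid)
WHERE thread.is_main_thread = 0       -- background threads only
  AND ct.dur >= 16 * 1000 * 1000      -- reuse the tool's 16 ms default
ORDER BY ct.dur DESC
LIMIT 100;
```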

Parameters (all optional):

  • process_name / pid / upid: scope to one process or process type. process_name='Renderer' shows all renderers together; pid is the OS pid (visible in Task Manager but can be recycled mid-trace); upid is the trace-internal unique pid (always precise — prefer over pid for multi-renderer traces). Look up both via list_processes. All AND when set; redundant pairings (e.g. matching upid + pid) are harmless.

  • min_dur_ms: minimum task duration. Defaults to 16 (one 60 Hz frame). Pass 0 for ALL tasks; raise to 33 (30 Hz) or 100 to focus on bigger stutters.

  • limit: max rows (default 100, capped at 5000). Must be > 0 if set.

Empty result: either no main-thread tasks exceeded min_dur_ms (good performance at that threshold), or thread metadata is incomplete (is_main_thread is NULL). If the latter is suspected, retry with execute_sql filtering on thread_name IN ('CrBrowserMain', 'CrRendererMain') to bypass the is_main_thread filter.
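That fallback can be expressed directly. A sketch, assuming the same `chrome_tasks` table as above, with the thread-name filter standing in for the `is_main_thread` check:

```sql
-- Sketch: main-thread hotspots when thread.is_main_thread is NULL.
-- Matches the well-known Chrome main-thread names instead.
INCLUDE PERFETTO MODULE chrome.tasks;

SELECT
  id, name, task_type, thread_name, process_name,
  dur / 1e6 AS dur_ms,
  thread_dur / 1e6 AS thread_dur_ms
FROM chrome_tasks
WHERE thread_name IN ('CrBrowserMain', 'CrRendererMain')
  AND dur >= 16 * 1000 * 1000   -- 16 ms default threshold
ORDER BY dur DESC
LIMIT 100;
```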

Input Schema

  • limit (optional): Max rows to return. Defaults to 100 and is capped at 5000 to match `execute_sql`. Lower values keep responses short; higher values surface long tails of mid-duration tasks. Accepts both numbers and numeric strings.

  • min_dur_ms (optional): Minimum task duration in milliseconds. Defaults to 16 ms (one 60 Hz frame budget). Pass 0 to see ALL main-thread tasks; raise to e.g. 33 (30 Hz) or 100 to focus on the worst stutters. Must be a finite non-negative number. Accepts both numbers and numeric strings.

  • pid (optional): OS-level process ID filter (visible in Task Manager). Get pid from `list_processes`. ANDs with the other filters when set. Note: pids can be recycled within a long trace; prefer `upid` when precision matters. Accepts both numbers and numeric strings.

  • process_name (optional): Process-name filter (e.g. "Renderer", "Browser", "GPU Process"). Useful to scope to one process type without picking a specific instance.

  • upid (optional): Trace-internal Unique Process ID assigned by trace_processor (also from `list_processes`). Always uniquely identifies one process within a trace, even if the OS recycled its pid. Use this to disambiguate same-named or pid-recycled processes; ANDs with the other filters when set. Accepts both numbers and numeric strings.
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully discloses behavioral traits: it explains default thresholds (16 ms), cap (5000 rows), nuances of pid vs upid, and how to interpret empty results. It also warns about pid recycling and metadata incompleteness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-organized with clear sections (output, usage, parameters). Every sentence adds unique value; no fluff. It is appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 5 optional parameters and no output schema, the description is exceptionally complete. It covers output columns, use cases, parameter details, error conditions, and fallback strategies. The agent has enough information to use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but the description adds substantial value beyond the schema. For example, it explains the default and practical thresholds for `min_dur_ms`, when to prefer `upid` over `pid`, and the default/cap for `limit`. This helps the agent choose parameters wisely.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool returns top Chrome main-thread tasks by wall duration, listing the specific output columns. It distinguishes itself from sibling tools such as `execute_sql` and `list_processes`, and explicitly names use cases such as investigating responsiveness during scroll/load.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance: 'Use when...' and 'Don't use for...', including when to fall back to `execute_sql`. It also addresses empty results and suggests alternative approaches for incomplete metadata.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
