Glama

Server Details

RUM platform for web performance analytics, Core Web Vitals, and third-party script monitoring.

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

10 tools
get_cwv_element_breakdown (Grade: B)

Gets a breakdown of the biggest LCP, INP, or CLS elements for a specific page, ranked by popularity or performance severity.

Parameters (JSON Schema)

- domain (required): Domain to get the element breakdown for.
- pageUrl (required): The page path to analyze (e.g., /checkout).
- metricType (required): The metric type to analyze: LCP, INP, or CLS.
- daysTo (optional): Date to look to; defaults to yesterday's date, in dd-mm-yyyy format.
- rankBy (optional): The criteria for ranking. Options are 'Popularity' (sorts by sessions affected) or 'Severity' (sorts by metric score). Defaults to 'Severity'.
- country (optional): Country to filter on, in 2-letter ISO code format (e.g., US, GB). Defaults to All.
- grouped (optional): Group data by base URL without query string or anchor.
- daysBack (optional): Date to look from; defaults to 8 days ago, in dd-mm-yyyy format.
- pageSize (optional): Number of elements to return. Defaults to 10.
- direction (optional): The sorting direction. Options are 'Worst' or 'Best'. Defaults to 'Worst'.
- pageGroup (optional): Filter by page group/category name. Defaults to All.
- deviceType (optional): Filter by device type: All, Desktop, Mobile, or Other. Defaults to All.
- pageNumber (optional): Page number for pagination. Defaults to 1.
- outputMaxLength (optional): Maximum number of characters for the output. Defaults to 5000.
- statisticMeasure (optional): Statistical measure to display and sort by when ranking by severity: P50, P75, P90, or Average. Defaults to P75.
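As a concrete illustration, a minimal MCP `tools/call` request for this tool might look like the following sketch. The domain and page path are placeholder values, and only the three required arguments plus two ranking options are shown; everything else falls back to the documented defaults.

```python
import json

# Hypothetical tools/call payload for get_cwv_element_breakdown.
# "example.com" and "/checkout" are placeholders; domain, pageUrl,
# and metricType are the only required arguments.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_cwv_element_breakdown",
        "arguments": {
            "domain": "example.com",
            "pageUrl": "/checkout",
            "metricType": "LCP",
            "rankBy": "Severity",       # sort by metric score, not traffic
            "statisticMeasure": "P75",  # the default percentile
        },
    },
}
print(json.dumps(request, indent=2))
```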
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It clarifies the ranking options (popularity vs severity) and implies this is a read operation ('Gets'), but omits critical operational details such as rate limits, data freshness/caching, required permissions, or what constitutes an 'element' in the output.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, dense sentence that efficiently packs in the resource type (element breakdown), specific metrics (LCP/INP/CLS), and ranking options. No words are wasted, though the lack of a second sentence for usage context or output description slightly limits its utility.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 15 parameters, no output schema, and no annotations, the description is insufficient. It fails to describe what the output contains (e.g., element selectors, scores, impact values), how pagination behaves with pageSize/pageNumber, or what the 'grouped' parameter does to results. For this complexity level, the single-sentence description leaves significant gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description adds minimal semantic value beyond the schema, essentially summarizing the metricType and rankBy parameters without clarifying syntax details, format constraints, or advanced filtering interactions between the 15 available parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves element-level breakdowns for specific Core Web Vitals metrics (LCP, INP, CLS) and mentions ranking criteria. It distinguishes from sibling tools like get_page_performance_breakdown by focusing on specific DOM elements rather than general page metrics, though it could more explicitly contrast with these alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives (e.g., when to use get_page_performance_breakdown instead), no prerequisites (e.g., domain verification requirements), and no warning that it requires a specific pageUrl and metricType to function.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_domain_performance_overview (Grade: C)

Gets domain performance overview with formatted output for Core Web Vitals metrics.

Parameters (JSON Schema)

- domain (required): Domain to get the performance overview for.
- page (optional): Page path to filter by (e.g., /checkout). Defaults to 'origin' (all pages).
- daysTo (optional): Date to look to; defaults to yesterday's date, in dd-mm-yyyy or dd-mm-yyyy hh:mm format.
- metric (optional): Metric to look at: All, LCP, INP, CLS, or TTFB. Defaults to All.
- country (optional): Country to filter on, in 2-letter ISO code format. Defaults to All.
- daysBack (optional): Date to look from; defaults to 8 days ago, in dd-mm-yyyy or dd-mm-yyyy hh:mm format.
- pageGroup (optional): Filter by page group/category name. Defaults to All.
- deviceType (optional): Filter by device type: All, Desktop, Mobile, or Other. Defaults to All.
- granularity (optional): Granularity for grouping data. Options: 1m, 5m, 15m, 30m, 1h, 3h, 6h, 12h, 24h. Defaults to 24h.
- outputMaxLength (optional): Maximum number of characters for the output. Defaults to 5000.
- statisticMeasure (optional): Statistical measure: P50, P75, P90, or Average. Defaults to P75.
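A hedged sketch of calling this tool with an explicit time window and hourly buckets; the domain and dates are placeholder values, and daysBack/daysTo accept either dd-mm-yyyy or dd-mm-yyyy hh:mm.

```python
import json

# Hypothetical tools/call payload for get_domain_performance_overview.
# Narrows the window to one week and buckets the series hourly instead
# of the 24h default. All values are placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_domain_performance_overview",
        "arguments": {
            "domain": "example.com",
            "metric": "LCP",
            "daysBack": "01-03-2025",  # start of the window
            "daysTo": "07-03-2025",    # end of the window
            "granularity": "1h",       # hourly buckets
        },
    },
}
print(json.dumps(request, indent=2))
```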
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. Only adds 'formatted output' beyond the tool name. Fails to explain time-series aggregation behavior implied by granularity/daysBack parameters, whether data is cached/real-time, or output structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, 11 words with no waste. Front-loaded with verb and object. Efficient structure, though arguably too brief given the tool's 11-parameter complexity and lack of annotations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Inadequate for tool complexity: 11 parameters with time-filtering, device/country segmentation, and statistical measures. No output schema or annotations to compensate. Description should explain the overview vs. breakdown distinction and typical analytics workflows.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage, establishing baseline meaning for all 11 parameters. Description mentions 'Core Web Vitals' which maps to metric parameter options, but does not add syntax details or explain relationships between date parameters beyond schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb ('Gets'), resource ('domain performance overview'), and scope ('Core Web Vitals metrics') are clearly stated. However, it lacks explicit differentiation from sibling tools like get_page_performance_breakdown.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this domain-level overview versus page-level breakdowns (get_page_performance_breakdown) or element-level analysis (get_cwv_element_breakdown). No prerequisites or alternatives mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_domains_list (Grade: B)

Gets a list of all domains registered for the organization.

Parameters (JSON Schema)

No parameters.
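Because this tool takes no arguments, it works as a discovery entry point: a sketch of the call, whose output would supply the `domain` value the other tools require. The client wrapper and placeholder id are assumptions.

```python
import json

# Hypothetical tools/call payload for get_domains_list. No arguments
# are accepted; the returned domain list feeds the "domain" parameter
# of the analysis tools on this server.
list_request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {"name": "get_domains_list", "arguments": {}},
}
print(json.dumps(list_request, indent=2))
```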

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden of behavioral disclosure. While it specifies scope ('all domains registered for the organization'), it lacks critical behavioral context such as pagination behavior, caching policies, data freshness, or what constitutes a 'registered' domain. It does not describe the return structure when no domains exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence of nine words with clear subject-verb-object structure. Information is front-loaded with the action 'Gets' immediately followed by the resource. No redundant or filler text present.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Tool has zero parameters and no output schema. Description adequately identifies the resource retrieved, but lacks guidance on how the output relates to sibling tools (e.g., that domain identifiers from this output are required for get_domain_performance_overview). Given the simplicity, it meets minimum viability but leaves gaps regarding output structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema contains zero parameters. Per scoring rules, zero-parameter tools receive a baseline score of 4. The description appropriately confirms the parameter-less nature by specifying 'all domains' without qualification, implying no filtering parameters are needed or accepted.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Gets') + resource ('domains') + scope qualifier ('all...registered for the organization'). However, it does not explicitly differentiate from sibling get_domain_performance_overview, which also interacts with domains but presumably requires a specific domain parameter rather than listing all available ones.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use guidance, prerequisites, or alternative tools mentioned. While the zero-parameter nature implicitly suggests this is an entry-point/discovery tool to be called before domain-specific analysis tools, this relationship is not stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_page_groups (Grade: C)

Gets page groups (page categories) configured for a domain.

Parameters (JSON Schema)

- domain (required): Domain to retrieve page groups for.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure, yet it only states the basic retrieval action. It omits whether results are cached, typical response size, authentication requirements, or whether this operation counts against API quotas—critical context for a data retrieval tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is appropriately front-loaded with the verb, and the parenthetical clarification is efficient. However, given the lack of annotations and output schema, the extreme brevity feels like under-specification rather than optimal conciseness, as critical behavioral context is missing.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with only one required parameter and simple schema, the description adequately covers the input side. However, with no output schema provided, the description should at least hint at the return structure (e.g., 'returns a list of configured group names') to be considered complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage ('Domain to retrieve page groups for'), establishing a baseline of 3. The description mentions 'for a domain' but adds no semantic context beyond the schema—no format requirements, constraints, or examples for the domain parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Gets'), resource ('page groups'), and scope ('configured for a domain'). The parenthetical clarification '(page categories)' effectively distinguishes this tool from performance-oriented siblings like get_page_performance_breakdown. However, it lacks explicit differentiation regarding when to choose this over get_domain_performance_overview.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. Given siblings include multiple page and domain analysis tools (get_page_performance_breakdown, get_domain_performance_overview), explicit criteria for selecting this categorical retrieval over performance metrics would be necessary for a higher score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_page_performance_breakdown (Grade: C)

Gets a page-by-page performance breakdown with formatted output for Core Web Vitals metrics. Returns a human-readable list of pages ranked by performance or popularity.

Parameters (JSON Schema)

- domain (required): Domain to get the performance overview for.
- page (optional): Page path to filter by (e.g., /checkout). Defaults to 'origin' (all pages).
- daysTo (optional): Date to look to; defaults to yesterday's date, in dd-mm-yyyy or dd-mm-yyyy hh:mm format.
- metric (optional): Metric to look at: All, LCP, INP, CLS, or TTFB. This also determines the metric for 'severity' ranking. Defaults to All.
- rankBy (optional): The criteria for ranking pages. Options are 'Popularity' or 'Severity'. Defaults to 'Popularity'.
- country (optional): Country to filter on: All, or one country in 2-letter ISO code format (e.g., US, GB, FR). Defaults to All.
- grouped (optional): Group data by base URL without query string or anchor.
- daysBack (optional): Date to look from; defaults to 8 days ago, in dd-mm-yyyy or dd-mm-yyyy hh:mm format.
- pageSize (optional): The number of pages to include in the result set. Defaults to 10.
- direction (optional): The sorting direction. Options are 'Worst' or 'Best'. Defaults to 'Worst'.
- pageGroup (optional): Filter by page group/category name. Defaults to All.
- deviceType (optional): Filter by device type: All, Desktop, Mobile, or Other. Defaults to All.
- pageNumber (optional): The page number of the result set to retrieve. Defaults to 1.
- outputMaxLength (optional): Maximum number of characters for the output. Defaults to 5000.
- statisticMeasure (optional): Statistical measure: P50, P75, P90, or Average. Defaults to P75.
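Because this tool paginates with pageSize/pageNumber (distinct from the `page` path filter), a sketch of walking the result set may help; the domain, metric choice, and helper function are assumptions.

```python
import json

# Hypothetical paginated tools/call for get_page_performance_breakdown.
# pageSize/pageNumber page through the ranked list of site pages, while
# "page" (not used here) filters by URL path. Values are placeholders.
def breakdown_request(page_number: int) -> dict:
    return {
        "jsonrpc": "2.0",
        "id": page_number,
        "method": "tools/call",
        "params": {
            "name": "get_page_performance_breakdown",
            "arguments": {
                "domain": "example.com",
                "metric": "INP",
                "rankBy": "Severity",
                "pageSize": 25,            # 25 entries per result page
                "pageNumber": page_number,
            },
        },
    }

# Fetch the first two result pages (50 worst pages by INP).
for n in (1, 2):
    print(json.dumps(breakdown_request(n)))
```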
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It mentions 'formatted output' and 'human-readable' (useful for display expectations), but omits critical operational details: data freshness/retention (dates default to 8 days ago), pagination behavior, error handling for invalid domains, or whether results are cached. For a 15-parameter tool with complex filtering capabilities, this is insufficient behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero redundancy. First sentence defines the operation and output format; second describes the ranking behavior. However, given the tool's complexity (15 parameters), the description may be overly terse—sacrificing completeness for brevity. No filler words, but could front-load more salient differentiators from sibling tools.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 15 parameters spanning pagination (pageNumber, pageSize), date ranges (daysBack, daysTo), multi-dimensional filtering (country, deviceType, pageGroup), and statistical measures, the description is inadequate. It makes no mention of temporal data aggregation, pagination limits, or the fact that it supports filtering across multiple dimensions simultaneously. Without an output schema, the description should have described the return structure or at least acknowledged the filtering capabilities.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema adequately documents all 15 parameters including date formats, metric options (LCP, INP, etc.), and ranking criteria. The description adds semantic context by linking these to 'Core Web Vitals metrics' and mentions 'ranked by performance or popularity,' but doesn't expand on parameter interactions (e.g., how 'page' filtering interacts with 'grouped') or provide syntax examples beyond the schema. Baseline 3 is appropriate given schema completeness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Gets') and resource ('page-by-page performance breakdown'). Mentions 'Core Web Vitals metrics' and 'ranked by performance or popularity' which adds specificity. However, it doesn't distinguish from siblings like 'get_page_performance_breakdown_by_browser' or 'get_page_performance_breakdown_by_country'—leaving ambiguity about when to use this aggregate version versus the dimension-specific ones.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like the domain overview or browser/country-specific breakdowns. While it mentions that results can be ranked by performance or popularity, it doesn't clarify prerequisites (e.g., needing a valid domain) or suggest which metric/filter combinations are most useful for common use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_page_performance_breakdown_by_browser (Grade: C)

Gets an overview of performance metrics broken down by browser.

Parameters (JSON Schema)

- domain (required): Domain to get the performance overview for.
- page (optional): Page path to filter by (e.g., /checkout). Defaults to 'origin' (all pages).
- daysTo (optional): Date to look to; defaults to yesterday's date, in dd-mm-yyyy or dd-mm-yyyy hh:mm format.
- metric (optional): Metric to look at: All, LCP, INP, CLS, or TTFB. This also determines the metric for 'severity' ranking. Defaults to All.
- rankBy (optional): The criteria for ranking. Options are 'Popularity' or 'Severity'. Defaults to 'Popularity'.
- daysBack (optional): Date to look from; defaults to 8 days ago, in dd-mm-yyyy or dd-mm-yyyy hh:mm format.
- pageSize (optional): The number of items to include in the result set. Defaults to 10.
- direction (optional): The sorting direction. Options are 'Worst' or 'Best'. Defaults to 'Worst'.
- pageGroup (optional): Filter by page group/category name. Defaults to All.
- deviceType (optional): Filter by device type: All, Desktop, Mobile, or Other. Defaults to All.
- pageNumber (optional): The page number of the result set to retrieve. Defaults to 1.
- outputMaxLength (optional): Maximum number of characters for the output. Defaults to 5000.
- statisticMeasure (optional): Statistical measure: P50, P75, P90, or Average. Defaults to P75.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden but provides minimal behavioral context. It doesn't indicate whether this is read-only (implied by 'gets' but not explicit), what the output structure looks like, or how the 'severity' ranking (referenced in the metric parameter) is calculated.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single-sentence description is front-loaded with a clear verb and contains no wasted words. However, at only 9 words for a 13-parameter tool with complex filtering capabilities (date ranges, device types, pagination), it borders on underspecification rather than optimal conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the rich parameter set (13 params including time ranges, device filters, page groups, and statistical measures), the description is incomplete. It mentions none of these filtering capabilities and provides no hint about output format since no output schema exists.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing a baseline of 3. The description adds no parameter-specific context beyond the implicit mention of 'browser' as a dimension, nor does it explain parameter interactions (e.g., how daysBack/daysTo work with the default '8 days ago' behavior).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves performance metrics with a browser-based breakdown, distinguishing it from siblings like 'get_page_performance_breakdown_by_country'. However, it uses the generic verb 'Gets' and doesn't specify that these are Core Web Vitals (LCP, INP, CLS, etc.) despite the schema revealing these specific metric types.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to prefer this browser-specific breakdown versus the generic breakdown or country-based breakdown. No mention of prerequisites (beyond the required domain) or common use cases for browser-level analysis.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_page_performance_breakdown_by_country (Grade: C)

Gets an overview of performance metrics broken down by country.

Parameters (JSON Schema)

- domain (required): Domain to get the performance overview for.
- page (optional): Page path to filter by (e.g., /checkout). Defaults to 'origin' (all pages).
- daysTo (optional): Date to look to; defaults to yesterday's date, in dd-mm-yyyy or dd-mm-yyyy hh:mm format.
- metric (optional): Metric to look at: All, LCP, INP, CLS, or TTFB. This also determines the metric for 'severity' ranking. Defaults to All.
- rankBy (optional): The criteria for ranking. Options are 'Popularity' or 'Severity'. Defaults to 'Popularity'.
- daysBack (optional): Date to look from; defaults to 8 days ago, in dd-mm-yyyy or dd-mm-yyyy hh:mm format.
- pageSize (optional): The number of items to include in the result set. Defaults to 10.
- direction (optional): The sorting direction. Options are 'Worst' or 'Best'. Defaults to 'Worst'.
- pageGroup (optional): Filter by page group/category name. Defaults to All.
- deviceType (optional): Filter by device type: All, Desktop, Mobile, or Other. Defaults to All.
- pageNumber (optional): The page number of the result set to retrieve. Defaults to 1.
- outputMaxLength (optional): Maximum number of characters for the output. Defaults to 5000.
- statisticMeasure (optional): Statistical measure: P50, P75, P90, or Average. Defaults to P75.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden of behavioral disclosure. While 'Gets' implies read-only access, the description omits critical behavioral context: the ranking/severity system (despite rankBy/direction parameters), pagination behavior, what constitutes 'performance metrics' (Web Vitals), and output format constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is efficient without redundant phrasing, but is undersized for a tool with 13 parameters and complex filtering capabilities. The material is front-loaded (verb first), yet lacks supporting sentences warranted by the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Insufficient for a feature-rich tool with date ranges, pagination, statistical measures (P75, P90), device filtering, and severity ranking. Missing explanation of the 'severity' concept, the relationship between metric and ranking, and how results are aggregated.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage, meeting baseline expectations. The description adds no supplemental parameter guidance (e.g., date format conventions, metric abbreviations like INP/CLS), relying entirely on schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action ('Gets') and scope ('performance metrics broken down by country'), identifying the geographic dimension. However, it does not explicitly differentiate from sibling tools like 'get_page_performance_breakdown' or 'get_page_performance_breakdown_by_browser'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use the country-specific breakdown versus other breakdown tools (e.g., by browser or aggregate), nor does it mention prerequisites like domain validation requirements.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_script_performance_report (Grade: B)

Gets a report on script performance, including average load times and impact on pages.

Parameters (JSON Schema)

daysTo (optional): Date to look to, in dd-mm-yyyy or dd-mm-yyyy hh:mm format. Defaults to yesterday's date.
domain (required): Domain to get the script report for.
rankBy (optional): The criteria for ranking pages. Options are 'Popularity' or 'Severity'. Defaults to 'Popularity'.
country (optional): Country to filter on, in 2-letter ISO code format (e.g., US, GB, FR). Defaults to All.
daysBack (optional): Date to look from, in dd-mm-yyyy or dd-mm-yyyy hh:mm format. Defaults to 8 days ago.
pageSize (optional): Number of scripts to return. Defaults to 10.
direction (optional): The sorting direction. Options are 'Worst' or 'Best'. Defaults to 'Worst'.
pageGroup (optional): Filter by page group/category name. Defaults to All.
deviceType (optional): Filter by device type: All, Desktop, Mobile, or Other. Defaults to All.
pageNumber (optional): Page number for pagination. Defaults to 1.
scriptType (optional): Filter by script type: All, Internal, or ThirdParty. Defaults to All.
searchText (optional): Optional text to search for in script URL or vendor name.
outputMaxLength (optional): The maximum number of characters for the output. Defaults to 5000.
statisticMeasure (optional): Statistical measure: P50, P75, P90, or Average. Defaults to P75.
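To make the parameter surface concrete, the sketch below shows how the documented defaults compose into a request payload. Only `domain` is required; everything else falls back to the stated defaults. The builder function is illustrative, not the server's client API, and the domain value is an assumption.

```python
# Documented defaults from the parameter table above.
DEFAULTS = {
    "rankBy": "Popularity",
    "country": "All",
    "pageSize": 10,
    "direction": "Worst",
    "pageGroup": "All",
    "deviceType": "All",
    "pageNumber": 1,
    "scriptType": "All",
    "outputMaxLength": 5000,
    "statisticMeasure": "P75",
}

def build_request(domain, **overrides):
    """Merge caller overrides onto the documented defaults.
    'domain' is the only required parameter."""
    payload = {"domain": domain, **DEFAULTS}
    payload.update(overrides)
    return payload

req = build_request("example.com", scriptType="ThirdParty", country="GB")
# Unspecified fields keep their documented defaults, e.g. statisticMeasure stays "P75".
```

A description that walked through even one such call would close several of the gaps the dimensions below identify.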
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It partially discloses behavioral traits by specifying report contents ('average load times and impact on pages'), hinting at the output structure. However, it fails to state the read-only safety profile, pagination behavior implied by the parameters, or what 'impact on pages' specifically measures.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single-sentence description is efficiently worded with no redundancy, placing the core action ('Gets') at the front. However, given the high complexity (14 parameters) and lack of output schema, the extreme brevity leaves significant gaps rather than demonstrating disciplined omission.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 14 filtering parameters and no output schema, the description minimally satisfies requirements by hinting at return values ('load times', 'impact'). However, it lacks crucial context for a complex analytics tool: no explanation of the report format, whether results are real-time or cached, or how 'impact' is calculated.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description does not add parameter-specific semantics (e.g., explaining that 'statisticMeasure' controls the aggregation method for the 'average load times' mentioned, or that 'direction' sorts by performance severity). It relies entirely on the schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
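The point about `statisticMeasure` is easier to see with numbers: P50, P75, and Average can summarize the same load-time samples very differently. A minimal sketch, using nearest-rank percentiles and made-up sample data; the server's exact aggregation method is not documented:

```python
def percentile(samples, p):
    """Nearest-rank percentile of a list of numeric samples."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Ten hypothetical script load times in ms, with one slow outlier.
load_times = [120, 130, 140, 150, 160, 170, 180, 190, 200, 2000]

p50 = percentile(load_times, 50)         # 160: the typical session
p75 = percentile(load_times, 75)         # 190: the slower quarter's floor
avg = sum(load_times) / len(load_times)  # 344.0: skewed by the outlier
```

Explaining this relationship between `statisticMeasure` and the reported 'average load times' is exactly the supplemental semantics the description currently omits.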

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Gets') and resource ('report on script performance'), and clarifies report contents ('average load times and impact on pages'). However, it does not explicitly differentiate from sibling tool 'get_script_tag', which likely retrieves individual script metadata rather than aggregated performance reports.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus siblings like 'get_page_performance_breakdown' or 'get_domain_performance_overview'. No mention of prerequisites (e.g., requiring domain verification) or data freshness considerations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_script_tag (Grade C)

Gets the script tag for the organization and domain.

Parameters (JSON Schema)

domain (required): Domain to get the script tag for.
sampleRate (optional): Sample rate percentage (1-100). Set below 100 to collect data from fewer sessions. Defaults to 100.
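The effect of `sampleRate` (collecting data from only a percentage of sessions) can be sketched as a per-session coin flip. This is a plausible mechanism for illustration only, not the server's confirmed implementation, and the helper name is invented:

```python
import random

def should_collect(sample_rate, rng=random.random):
    """Decide whether a session reports data, given a 1-100
    sample rate. At 100, every session is collected."""
    return rng() * 100 < sample_rate

# Injecting a fixed rng makes the decision deterministic for testing.
assert should_collect(100, rng=lambda: 0.999)       # always collected
assert not should_collect(50, rng=lambda: 0.6)      # 60th-percentile draw skipped
```

Stating this data-collection consequence in the main description, not just the parameter text, is what the Behavior dimension below asks for.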
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. While 'gets' implies read-only, it omits crucial behavioral details: return format (HTML vs. JS vs. URL), whether the tag is generated dynamically, caching implications, or that sampleRate affects data collection granularity (only mentioned in parameter description).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence is appropriately sized and front-loaded with the core action. While minimal, it contains no waste words or redundant clauses.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Inadequate for a tool returning installation code (implied by 'script tag' and sampleRate param). With no output schema and no annotations, the description should specify the return content (JS snippet, HTML tag, or URL) and confirm this is for performance monitoring/analytics (suggested by siblings but not stated).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for both domain and sampleRate. Main description mentions 'organization and domain' which aligns with the domain parameter and implied organizational context, but adds no syntax or format details beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

States the basic action (gets) and resource (script tag) but uses ambiguous terminology ('script tag' could imply HTML, JS snippet, or metadata) and fails to distinguish from sibling tool 'get_script_performance_report' or clarify this is likely a RUM/monitoring installation snippet.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains no guidance on when to use this versus alternatives like 'get_script_performance_report' (which analyzes scripts vs. this tool that likely retrieves one). No mention of prerequisites or typical workflow context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_users_list (Grade B)

Gets a list of all users belonging to the organization.

Parameters (JSON Schema)

No parameters.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. States the operation is a read ('Gets') but omits critical behavioral details: pagination behavior, permission requirements, whether results are cached, what user attributes are returned, or if this includes service accounts/external users.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no redundancy. Main information front-loaded. Appropriate length for a zero-parameter utility function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Minimum viable for a simple list getter. Adequate given low complexity (0 params, no nested objects), but gaps remain: no output schema referenced, no description of user object fields, and no behavioral constraints documented despite lack of annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters per input schema. Per evaluation rules, 0 parameters warrants a baseline score of 4. The description confirms no filtering is possible ('all users'), which aligns with the empty parameter schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly identifies the resource (users) and scope (all users belonging to the organization) with a specific verb. Implicitly distinguishes from performance-focused siblings (get_cwv_element_breakdown, get_domain_performance_overview, etc.) by targeting user directory data rather than metrics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives, prerequisites such as admin permissions, or whether this retrieves active users only versus deactivated/archived users. No mention of pagination or filtering limitations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
