
Clamp Analytics MCP Server

Official

pages.engagement

Read-only

Retrieve per-page engagement metrics including average engagement seconds and bounce rate to identify pages that hold attention or drive bounces.

Instructions

Per-page metrics with a selectable detail level. The view parameter chooses what comes back:

  • view="summary" (default): pathname, pageviews, visitors. Cheap; use as the standard "top pages" call.

  • view="engagement": adds avg_engagement_seconds (active tab time from the SDK's pageview_end beacon) and bounce_rate (% of single-page sessions that started on this path). Use to answer "which pages hold attention" or "which pages bounce".

  • view="sections": returns per-section view counts for the specified pathname. Requires pathname to be set. Each section is a data-clamp-section element on that page, counted once per session when at least 40% scrolls into view. Use to answer "which parts of /pricing get seen" or "is the FAQ being read".

Examples:

  • "top pages this week" → view omitted (or "summary"), period="7d"

  • "which pages bounce hardest" → view="engagement", then sort by bounce_rate

  • "how far down /pricing do people scroll" → view="sections", pathname="/pricing"

Limitations:

  • avg_engagement_seconds is null for pages without pageview_end data (SDK <0.3, or pages closed during navigation).

  • view="sections" requires the section-views SDK extension to be installed (see /docs/sdk/extensions/section-views) and only counts elements with the data-clamp-section attribute; pages with no instrumented sections return [].

  • view="sections" without pathname returns 400.
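The three example queries above can be sketched as argument payloads, together with a client-side check that mirrors the documented sections/pathname dependency. This is a minimal sketch: the argument names come from the input schema on this page, but the call mechanism (an MCP client session) is assumed and not shown, and `validate_args` is an illustrative helper, not part of the server.

```python
def validate_args(args: dict) -> dict:
    """Reject payloads the server is documented to refuse.

    Illustrative helper: mirrors two documented rules so a bad call
    fails locally instead of round-tripping to the server.
    """
    if args.get("view") == "sections" and "pathname" not in args:
        # Documented behavior: view="sections" without pathname returns 400.
        raise ValueError('view="sections" requires pathname')
    pathname = args.get("pathname")
    if pathname is not None and not pathname.startswith("/"):
        raise ValueError("pathname must start with /")
    return args

# "top pages this week": summary is the default view, so view is omitted.
top_pages = validate_args({"period": "7d"})

# "which pages bounce hardest": engagement view, then sort by bounce_rate.
bounciest = validate_args({"view": "engagement", "period": "30d"})

# "how far down /pricing do people scroll": sections view needs pathname.
pricing_sections = validate_args({"view": "sections", "pathname": "/pricing"})
```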

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| project_id | No | Target project ID (e.g. "proj_abc123"). Required when the credential has access to multiple projects; if omitted and only one project is accessible, that project is used automatically. Call `projects.list` to discover available project IDs. | |
| period | No | Time period. Use "today", "yesterday", "7d", "30d", "90d", or a custom range as "YYYY-MM-DD:YYYY-MM-DD" (e.g. "2026-01-01:2026-03-31"). | "30d" |
| limit | No | Max rows to return (1-50). | 10 |
| view | No | Detail level. "summary" returns pathname/pageviews/visitors only; "engagement" adds avg_engagement_seconds and bounce_rate; "sections" returns per-section view counts for the pathname (requires pathname). | "summary" |
| pathname | No | Filter to a specific page path (e.g. "/pricing", "/blog/my-post"). Must start with /. | |
| utm_source | No | Filter by UTM source (e.g. "google", "twitter", "newsletter"). Case-sensitive; must match the value in the tracking URL. | |
| utm_medium | No | Filter by UTM medium (e.g. "cpc", "email", "social"). Case-sensitive. | |
| utm_campaign | No | Filter by UTM campaign name (e.g. "spring-launch", "product-hunt"). Case-sensitive. | |
| utm_content | No | Filter by UTM content (e.g. "hero-cta", "sidebar-banner"). Case-sensitive. | |
| utm_term | No | Filter by UTM term (e.g. "running+shoes"). Case-sensitive. | |
| referrer_host | No | Filter by referrer hostname (e.g. "news.ycombinator.com", "twitter.com", "github.com"). Use this to see what traffic from a specific source did. Must exactly match the value returned by `traffic.breakdown(dimension="referrer_host")` (lowercase, no protocol or path). | |
| country | No | ISO 3166-1 alpha-2 country code, uppercase (e.g. "US", "GB", "DE", "NL", "JP"). Filters results to visitors from this country. | |
| device_type | No | Device category. One of: "desktop", "mobile", "tablet". | |
| channel | No | Traffic channel. One of: "direct", "organic_search", "organic_social", "paid", "email", "referral". | |
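The filters compose: a question like "what did Hacker News traffic read last quarter" becomes a single call combining view, a custom date range, and referrer_host. The sketch below builds such a payload and validates the documented period formats locally; `is_valid_period` is a hypothetical helper, not part of the server's API.

```python
import re

# The period formats documented in the schema above.
_PERIOD_KEYWORDS = {"today", "yesterday", "7d", "30d", "90d"}
_PERIOD_RANGE = re.compile(r"^\d{4}-\d{2}-\d{2}:\d{4}-\d{2}-\d{2}$")

def is_valid_period(period: str) -> bool:
    """True if period is a keyword or a YYYY-MM-DD:YYYY-MM-DD range."""
    return period in _PERIOD_KEYWORDS or bool(_PERIOD_RANGE.match(period))

# Engagement metrics for Hacker News referrals in Q1 2026.
hn_engagement_args = {
    "view": "engagement",
    "period": "2026-01-01:2026-03-31",
    "referrer_host": "news.ycombinator.com",  # lowercase host, no protocol/path
    "limit": 20,
}
```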

Output Schema

| Name | Required |
|------|----------|
| pages | No |
| sections | No |
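Post-processing an engagement-view response follows from the field names on this page. The exact response shape is an assumption here (pages as a list of objects with pathname, pageviews, visitors, avg_engagement_seconds, bounce_rate); note that avg_engagement_seconds may be null per the limitations above, so the sketch sorts only on bounce_rate.

```python
def bounciest_pages(response: dict, min_pageviews: int = 0) -> list[dict]:
    """Rank pages by bounce_rate, highest first.

    Illustrative helper over an assumed response shape; min_pageviews
    filters out low-traffic pages whose bounce rate is noisy.
    """
    pages = response.get("pages", [])
    eligible = [p for p in pages if p["pageviews"] >= min_pageviews]
    return sorted(eligible, key=lambda p: p["bounce_rate"], reverse=True)

# Assumed sample response; avg_engagement_seconds can be null (None).
sample = {
    "pages": [
        {"pathname": "/pricing", "pageviews": 900, "visitors": 700,
         "avg_engagement_seconds": 42.5, "bounce_rate": 31.0},
        {"pathname": "/blog/launch", "pageviews": 400, "visitors": 350,
         "avg_engagement_seconds": None, "bounce_rate": 68.0},
    ]
}
ranked = bounciest_pages(sample, min_pageviews=100)
```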
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses all behavioral traits beyond annotations: default view='summary', default period='30d', limitations on avg_engagement_seconds (null when no pageview_end data), sections returning empty array if no instrumented elements, and 400 error if sections without pathname. Annotations (readOnlyHint=true) are consistent with read-only nature.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections, bullet points for view details, and inline examples. Information is front-loaded with purpose, then detail. Every sentence adds value; no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (14 parameters, multiple views, output schema present), the description covers all needed aspects: view behaviors, limitations, parameter dependencies, and usage examples. No gaps; output schema handles return value documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, but the description adds significant value by explaining parameter interactions (e.g., view values and their effects, pathname requirement for sections, default values). Provides examples of parameter usage beyond the schema's basic descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns per-page metrics with selectable detail levels, distinguishing three views (summary, engagement, sections) with specific purposes and outputs. It uses specific verbs and resources, and differentiates from sibling tools implicitly through its focused functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use guidance for each view (e.g., 'top pages' → summary, 'which pages bounce' → engagement, 'how far down' → sections) with examples. Also includes when-not-to-use hints (sections requires pathname, limitations like null engagement data). Does not directly compare to siblings but is self-contained.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
