
Sealmetrics MCP Server

by sealmetrics

get_pages_performance

Retrieve page performance metrics like views and entry pages from Sealmetrics analytics. Filter by content groups, date ranges, traffic sources, and countries to analyze specific website sections.

Instructions

Get page performance metrics including views and entry pages. Can filter by content groups to analyze specific sections of your site (e.g., 'Blog Content', 'Product Catalog', 'Support Pages')

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| account_id | No | Sealmetrics account ID (optional if SEALMETRICS_ACCOUNT_ID is set) | |
| date_range | Yes | Date range: 'yesterday', 'today', 'last_7_days', 'last_30_days', 'this_month', 'last_month', or 'YYYYMMDD,YYYYMMDD' | |
| content_grouping | No | Filter by content group name (e.g., 'Blog Content', 'Product Catalog', 'Support Pages', 'Purchase Flow') | |
| utm_source | No | Filter by traffic source (e.g., 'google', 'facebook') | |
| utm_medium | No | Filter by medium (e.g., 'organic', 'cpc') | |
| country | No | Filter by country code (e.g., 'us', 'es') | |
| show_utms | No | Include UTM breakdown in results | |
| limit | No | Maximum number of results to return (max: 1000) | 100 |
| skip | No | Number of results to skip for pagination | 0 |
| auto_paginate | No | Automatically fetch all results across multiple pages | false |
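
To make the schema concrete, here is a minimal sketch of the arguments an agent might send when calling get_pages_performance. The parameter names and the listed values ('last_30_days', 'Blog Content', 'organic') come from the table above; the specific combination shown is purely illustrative.

```json
{
  "date_range": "last_30_days",
  "content_grouping": "Blog Content",
  "utm_medium": "organic",
  "limit": 100,
  "skip": 0
}
```

Since auto_paginate defaults to false, a result set larger than limit would presumably need a follow-up call with skip raised to 100, or auto_paginate set to true so the tool fetches the remaining pages itself; the 'YYYYMMDD,YYYYMMDD' form of date_range covers ranges the named presets don't.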
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions filtering capabilities but doesn't describe key behavioral traits: whether this is a read-only operation, potential rate limits, authentication requirements (implied by account_id but never stated), or what the output looks like (e.g., pagination details beyond what the schema conveys). For a tool with 10 parameters and no annotations, this leaves significant gaps in understanding how it behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, stating the core purpose in the first sentence and adding filtering context in the second. Both sentences earn their place by clarifying scope and usage. It could be slightly more structured by explicitly separating the purpose from the filtering details, but it remains efficient, with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (10 parameters, no output schema, no annotations), the description is incomplete. It doesn't address behavioral aspects like authentication, rate limits, or output format, which are crucial for a tool with many filtering options. Without annotations or an output schema, the description should do more to compensate, but it falls short, leaving the agent with insufficient context for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, providing detailed documentation for all 10 parameters. The description adds little beyond the schema: the content group examples it mentions (e.g., 'Blog Content', 'Product Catalog') are already covered in the schema's description for 'content_grouping'. It doesn't explain parameter interactions or provide additional context, so the baseline score of 3 is appropriate given that the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get page performance metrics including views and entry pages.' It specifies the verb ('Get') and resource ('page performance metrics'), and mentions key metrics like views and entry pages. However, it doesn't explicitly differentiate this tool from sibling tools like 'get_traffic_data' or 'get_funnel_data', which might also involve performance metrics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some implied usage context by mentioning filtering by content groups (e.g., 'Blog Content', 'Product Catalog'), which suggests it's useful for analyzing specific site sections. However, it doesn't explicitly state when to use this tool versus alternatives like 'get_traffic_data' or 'get_funnel_data', nor does it provide exclusions or prerequisites. The guidance is present but not comprehensive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
