
Measure.events Analytics

Server Details

Privacy-first web analytics. Query pageviews, referrers, trends, and AI insights.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: Turbo-Puffin/measure-mcp-server
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client -> Glama -> MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.


Available Tools

6 tools
create_site (Grade B)

Create a new site in Measure.events. Returns the site_key needed for tracking.

Parameters:
- name (required): Human-readable site name
- domain (required): The site's domain (e.g. example.com)
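As a sketch of how an agent might assemble the arguments for this call, the two required fields map directly onto the parameter table above. The site name and domain used here are made-up example values, and the helper function is purely illustrative, not part of the server's API:

```python
def build_create_site_args(name: str, domain: str) -> dict:
    """Assemble the arguments payload for a create_site tool call.

    Both parameters are required per the tool's schema; the returned
    dict is what an MCP client would pass as the call's arguments.
    """
    if not name or not domain:
        raise ValueError("create_site requires both 'name' and 'domain'")
    return {"name": name, "domain": domain}

# Hypothetical example values:
args = build_create_site_args("My Blog", "example.com")
print(args)
```

The server's response includes the site_key, which the sibling tools (track_event, get_site_analytics, etc.) all require.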
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description carries the full burden. It adds valuable behavioral context by disclosing the return value (site_key) and its purpose ('needed for tracking'), which is not in the schema. However, it lacks mutation details: no mention of idempotency (what happens if the domain already exists?), required permissions, or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences with zero waste. Front-loaded with the action verb. Both sentences earn their place: first declares the operation, second explains the critical return value for downstream usage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for a simple two-parameter creation tool. Mentions the return value (site_key), which compensates for the missing output schema. Could improve by explicitly stating that this is a prerequisite step for the tracking/analytics siblings, but it covers the essential functional contract.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (both 'name' and 'domain' are fully documented in the schema). The description mentions neither parameter specifically and adds no validation rules, format constraints, or examples beyond what the schema provides. The baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Create') and resource ('site') with domain context ('Measure.events'). Distinguishes from sibling read/analytics tools (get_insights, get_site_analytics, etc.) by being the only mutation operation. Could be 5 if it explicitly contrasted with 'updating' or 'deleting' sites, but clearly identifies the creation function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives, and no prerequisites or workflow order. While it implies this is for initial setup, it should explicitly state that it must be called before track_event (a sibling tool), since it returns the 'site_key needed for tracking'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_insights (Grade A)

Get proactive insights and anomaly detection for a site — traffic spikes, drops, new referrers.

Parameters:
- site_key (required): The site's tracking ID
Behavior: 3/5

Without annotations, the description carries the full burden. It lists specific detection targets (spikes, drops, referrers), providing behavioral context, but omits operational details: check frequency, lookback window, whether anomalies are detected in real time, required permissions, and return format.

Conciseness: 5/5

A single sentence with zero waste. The main clause front-loads the purpose ('Get proactive insights'), and the clause after the dash efficiently enumerates the specific detection categories. Every word earns its place.

Completeness: 4/5

Adequate for a single-parameter read operation. Clearly defines the tool's scope (anomaly detection for traffic and referrers). The absence of an output schema is acceptable given the concrete examples in the description, though a brief mention of the return structure would strengthen completeness.

Parameters: 3/5

Schema coverage is 100% (site_key described as a 'tracking ID'), establishing the baseline of 3. The description contextualizes the parameter by stating that insights are 'for a site', linking site_key to the target resource, but adds no constraints, format examples, or validation rules beyond the schema.

Purpose: 4/5

Clear verb ('Get') with specific resource types ('insights', 'anomaly detection') and concrete examples (traffic spikes, drops, new referrers). It implicitly distinguishes itself from the sibling get_site_analytics by emphasizing proactive anomaly detection over standard reporting, though it lacks an explicit sibling comparison.

Usage Guidelines: 3/5

The term 'proactive insights' implies a monitoring/diagnostic use case, suggesting when to use the tool (for anomaly detection). However, there is no explicit guidance on when to prefer it over get_site_analytics or get_site_summary, and no prerequisites or conditions are mentioned.

get_site_analytics (Grade B)

Get analytics data for a Measure.events site. Returns pageviews, top pages, and referrers for a given period.

Parameters:
- site_key (required): The site's tracking ID (site_key)
- period (optional): Time period (default: month)
- from (optional): Start date YYYY-MM-DD (for custom range)
- to (optional): End date YYYY-MM-DD (for custom range)
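A minimal sketch of assembling arguments for this tool, based only on the parameter table above. Treating 'period' and the 'from'/'to' custom range as mutually exclusive is an assumption the tool description does not confirm, and the site key and dates below are made-up examples:

```python
import re

def build_analytics_args(site_key, period=None, date_from=None, date_to=None):
    """Assemble arguments for a get_site_analytics call.

    'from'/'to' (YYYY-MM-DD) define a custom range; otherwise 'period'
    is passed through (the server defaults it to month when omitted).
    """
    args = {"site_key": site_key}
    if date_from or date_to:
        # Custom range: validate the documented YYYY-MM-DD format.
        for label, value in (("from", date_from), ("to", date_to)):
            if value is not None:
                if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", value):
                    raise ValueError(f"'{label}' must be YYYY-MM-DD, got {value!r}")
                args[label] = value
    elif period is not None:
        args["period"] = period
    return args

# Hypothetical example: a custom January range.
print(build_analytics_args("mk_abc123", date_from="2024-01-01", date_to="2024-01-31"))
```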
Behavior: 3/5

With no annotations provided, the description carries the full burden. It discloses return values (pageviews, top pages, referrers), which substitutes for the missing output schema, but omits operational details: authentication requirements, rate limits, data retention limits, pagination behavior for large datasets, and error cases (e.g. an invalid site_key).

Conciseness: 5/5

Two efficiently structured sentences with zero waste: the first establishes the operation and domain, the second specifies the return payload. Information is front-loaded and appropriately sized for the tool's complexity.

Completeness: 3/5

Given 100% schema coverage and no output schema, the description adequately compensates by listing the returned fields. However, for a read-only analytics tool with multiple sibling query tools, it lacks guidance on choosing between analytics, summary, and insights, and omits the operational constraints that would complete the context.

Parameters: 3/5

Schema coverage is 100%, establishing the baseline of 3. The description adds minimal semantic context ('for a given period'), loosely referencing the date parameters, but doesn't elaborate on parameter relationships (e.g. that from/to implies a custom range rather than a period) beyond the schema property descriptions.

Purpose: 4/5

The description uses the specific verb 'Get' with a clear resource ('analytics data') and names the entity (a Measure.events site). Listing the return content (pageviews, top pages, referrers) implicitly differentiates it from siblings like get_insights and get_site_summary, though an explicit sibling comparison is absent.

Usage Guidelines: 2/5

No explicit guidance on when to use this tool versus its siblings (get_insights, get_site_summary) or on prerequisites such as authentication. The description only implies usage through return-value documentation and lacks explicit when/when-not directives.

get_site_summary (Grade B)

Get a natural language summary of site analytics for the last 7 days, including week-over-week trends.

Parameters:
- site_key (required): The site's tracking ID
Behavior: 3/5

With no annotations provided, the description carries the full disclosure burden. It successfully conveys the time-window constraint (7 days) and the content format (natural language, trends), but omits operational details such as whether the tool is read-only, caching behavior, and error conditions for invalid site keys.

Conciseness: 5/5

The single sentence is efficiently structured with the action verb first, followed by specific deliverables (natural-language summary, timeframe, trends). Zero wasted words: every phrase conveys essential behavioral or scope information.

Completeness: 3/5

For a single-parameter tool with complete schema coverage, the description adequately covers the core functionality. However, it lacks differentiation from sibling analytics tools and omits output format details, error handling, and timezone considerations for the 7-day window.

Parameters: 3/5

The input schema has 100% coverage, with site_key fully documented as 'The site's tracking ID'. The description adds no parameter-specific guidance beyond the schema, warranting the baseline score of 3 for high-coverage schemas.

Purpose: 4/5

The description clearly identifies the action ('Get'), the resource ('natural language summary of site analytics'), and the scope ('last 7 days', 'week-over-week trends'). The 'natural language' framing implies differentiation from siblings (likely contrasting with raw-data tools), though it could explicitly name the alternatives.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus siblings like get_site_analytics or get_insights. It does not specify prerequisites (e.g. the site must first exist via create_site) or when users might prefer the raw analytics endpoint instead.

log_checkin (Grade B)

Log agent activity to a site's analytics. Use this to record deployments, content updates, or any agent action on the site.

Parameters:
- site_key (required): The site's tracking ID
- action (required): What the agent did (e.g. 'published article')
- agent (optional): Agent name
- note (optional): Additional context
- timestamp (optional): ISO8601 timestamp (defaults to now)
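A sketch of assembling a log_checkin payload from the table above. Filling the timestamp client-side is a choice made here for illustration (the schema says the server defaults it to now anyway), and the site key and agent name are made-up examples:

```python
from datetime import datetime, timezone

def build_checkin_args(site_key, action, agent=None, note=None, timestamp=None):
    """Assemble arguments for a log_checkin call.

    'site_key' and 'action' are required; optional fields are omitted
    when unset rather than sent as nulls. 'timestamp' is ISO8601 and
    falls back to the current UTC time.
    """
    args = {"site_key": site_key, "action": action}
    if agent is not None:
        args["agent"] = agent
    if note is not None:
        args["note"] = note
    args["timestamp"] = timestamp or datetime.now(timezone.utc).isoformat()
    return args

# Hypothetical example: a deployment check-in.
print(build_checkin_args("mk_abc123", "deployed v2.1", agent="deploy-bot"))
```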
Behavior: 2/5

No annotations provided, so the description carries the full burden. It states the write destination ('analytics') but omits write-behavior details: no mention of idempotency, success/failure signals, rate limits, or whether logged data can be modified later.

Conciseness: 5/5

Two well-structured sentences. The first establishes the purpose, the second provides usage examples. No redundancy or filler; every word earns its place.

Completeness: 3/5

Adequate for a logging tool with 100% schema coverage, but gaps remain: no output-schema explanation, no mention of side effects or confirmation behavior, and a null title leaves context missing.

Parameters: 3/5

The schema has 100% description coverage, establishing the baseline of 3. The description reinforces the semantics by giving examples of 'action' values (deployments, content updates) and contextualizing 'agent', but adds no formatting details beyond the schema.

Purpose: 4/5

A clear, specific verb ('Log') and resource ('agent activity to a site's analytics'). The examples ('deployments, content updates') help scope the tool, though it doesn't explicitly distinguish itself from the sibling track_event.

Usage Guidelines: 3/5

Provides positive guidance ('Use this to record...') with concrete examples of agent actions, but lacks explicit when-not-to-use guidance and any comparison to the track_event alternative.

track_event (Grade A)

Track a pageview or event on a site. Use this to instrument agent-driven page visits or actions.

Parameters:
- site_key (required): The site's tracking ID
- path (required): Page path (e.g. /about)
- agent (optional): Agent name for attribution
- referrer (optional): Referring URL
- utm_source (optional)
- utm_medium (optional)
- utm_campaign (optional)
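A sketch of assembling a track_event payload from the table above. The UTM fields carry no schema descriptions, so treating them as standard campaign-tagging strings is an assumption, as is the leading-slash check on 'path' (inferred from the '/about' example). All values below are made up:

```python
def build_track_event_args(site_key, path, agent=None, referrer=None,
                           utm_source=None, utm_medium=None, utm_campaign=None):
    """Assemble arguments for a track_event call.

    Only 'site_key' and 'path' are required; unset optional fields are
    omitted rather than sent as nulls.
    """
    if not path.startswith("/"):
        # Assumption: paths follow the schema's '/about' example.
        raise ValueError(f"'path' should be a page path like '/about', got {path!r}")
    args = {"site_key": site_key, "path": path}
    optional = {"agent": agent, "referrer": referrer, "utm_source": utm_source,
                "utm_medium": utm_medium, "utm_campaign": utm_campaign}
    args.update({k: v for k, v in optional.items() if v is not None})
    return args

# Hypothetical example: an agent-driven visit attributed to a newsletter.
print(build_track_event_args("mk_abc123", "/about", utm_source="newsletter"))
```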
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure, but it fails to specify whether the operation is synchronous or idempotent, whether specific auth scopes are required, or what happens to the tracked data (e.g. stored, or forwarded to a third party). Only the basic action is stated.

Conciseness: 5/5

The description is optimally concise, with two front-loaded sentences: the first establishes the core function, the second provides usage context. No redundancy or wasted words.

Completeness: 3/5

Adequate for basic use but incomplete given the complexity: it doesn't compensate for the three undocumented UTM parameters, lacks the behavioral details expected of a data-collection tool without annotations, and gives no indication of success/failure behavior despite the missing output schema.

Parameters: 3/5

With 57% schema coverage (4 of 7 parameters documented), the baseline score is 3. The description provides no additional semantic information about the three undescribed UTM parameters and no format requirements beyond what the schema already states.

Purpose: 4/5

The description provides a clear verb ('Track') and resource ('pageview or event on a site'), effectively distinguishing this write/tracking operation from sibling read operations like get_insights and get_site_analytics. However, it doesn't explicitly differentiate itself from log_checkin, which also involves logging events.

Usage Guidelines: 4/5

The second sentence provides explicit positive guidance ('Use this to instrument agent-driven page visits or actions'), clarifying the specific usage context. It lacks explicit when-not-to-use guidance and named alternatives, falling short of a perfect score.
