
Server Details

Read-only analytics for Convex apps, queryable via MCP from Claude, Cursor, and other clients.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: Dan-Cleary/convalytics
GitHub Stars: 6

Tool Descriptions: A

Average 4.4/5 across 15 of 15 tools scored. Lowest: 3.6/5.

Server Coherence: A
Disambiguation: 5/5

Each tool targets a distinct resource and action (e.g., funnel CRUD, event/pageview queries, composite snapshots). No two tools have overlapping purposes; even similar-sounding ones like events_count and pageviews_count are clearly differentiated by description.

Naming Consistency: 5/5

Tool names consistently follow a verb_noun pattern (compute_funnel, create_funnel, list_funnels, top_pages, etc.) with only minor, clear deviations like recent_events. Naming conventions are uniform and predictable.

Tool Count: 5/5

15 tools is well-scoped for an analytics server. Each tool has a clear purpose and contributes to a comprehensive but focused set, avoiding bloat or insufficiency.

Completeness: 5/5

The tool surface covers full funnel lifecycle (CRUD + compute), event/pageview analytics (counts, top items, recent events), composite views (user_activity, weekly_digest), and metadata (usage, projects). No obvious gaps; the domain is well-covered.

Available Tools

15 tools
compute_funnel: A
Read-only, Idempotent

Run a funnel over a time window and return per-step visitor count, conversion from previous step, conversion from start, and average time to convert between steps. Returns truncated flags if the scan cap was hit.

Parameters
- since (optional): Start of window as unix milliseconds. Defaults to 7 days ago.
- until (optional): End of window as unix milliseconds. Defaults to now.
- funnelId (required)
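A minimal sketch of building the arguments for a compute_funnel call, mirroring the defaults stated above. The helper name and the funnel id "fn_123" are illustrative, not part of the server's API; the server itself applies the seven-day default when since/until are omitted.

```python
import time

def compute_funnel_args(funnel_id, since=None, until=None):
    """Build an arguments dict for a compute_funnel tool call.

    since/until are unix milliseconds; when omitted, we mirror the
    documented defaults (7 days ago to now) for clarity.
    """
    now_ms = int(time.time() * 1000)
    return {
        "funnelId": funnel_id,  # required
        "since": since if since is not None else now_ms - 7 * 86_400_000,
        "until": until if until is not None else now_ms,
    }

args = compute_funnel_args("fn_123")  # "fn_123" is a made-up id
```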
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, idempotentHint=true, so the tool's safety is clear. The description adds valuable behavioral detail: it returns 'truncated' flags if the scan cap is hit, which is critical for understanding result completeness. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no fluff. The first sentence states purpose and outputs; the second adds a behavioral exception (scan cap). Perfectly front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so the description must cover return values. It enumerates the returned metrics (visitor count, conversions, average time, truncated flags). It does not specify format (e.g., array of objects) but is otherwise complete for a moderate-complexity tool. The description distinguishes the tool from siblings like get_funnel.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 67% (since and until have descriptions), meeting the high coverage baseline. The description adds context about what the tool computes (metrics) but does not further explain the funnelId parameter beyond the schema. The description's contribution to parameter understanding is limited.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Run a funnel over a time window' and specifies the exact metrics returned (per-step visitor count, conversions, average time). This differentiates it from sibling tools like get_funnel (which likely retrieves funnel config) and events_count (a simpler count).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for computing funnel metrics over a time window but does not explicitly state when to use this tool over alternatives (e.g., get_funnel for retrieving config, create_funnel for creating). No when-not-to-use guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_funnel: A

Create a new funnel on a project. Steps are 2–10 ordered events or pageview paths. conversionWindowMs caps how long a visitor has between consecutive steps (default 7 days); this is the step-to-step limit, without which a funnel is just event co-occurrence. Returns { id } on success.

Parameters
- name (required): Human-readable name.
- steps (required): Ordered list of 2–10 steps. Visitors must hit them in order.
- project (required): Project name (case-insensitive, e.g. 'slopbench') or project id from list_projects.
- description (optional): Optional longer description.
- conversionWindowMs (optional): Max ms between consecutive steps. Default 7 days (604800000). Bounds: 60000 (1 min) to 7776000000 (90 days).
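A sketch of locally validating create_funnel arguments before sending them, assuming the bounds quoted above. The helper name and the example funnel are hypothetical; the server enforces these constraints regardless.

```python
def create_funnel_args(name, steps, project, description=None,
                       conversion_window_ms=604_800_000):
    """Build and validate an arguments dict for a create_funnel call.

    Mirrors the documented schema: 2-10 ordered steps, and a
    conversion window between 60000 ms (1 min) and 7776000000 ms
    (90 days), defaulting to 7 days (604800000 ms).
    """
    if not 2 <= len(steps) <= 10:
        raise ValueError("steps must contain 2-10 entries")
    if not 60_000 <= conversion_window_ms <= 7_776_000_000:
        raise ValueError("conversionWindowMs out of bounds")
    args = {
        "name": name,
        "steps": steps,
        "project": project,
        "conversionWindowMs": conversion_window_ms,
    }
    if description is not None:
        args["description"] = description
    return args
```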
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations show it's a write operation (non-read-only) but not destructive. The description adds return value { id } and explains conversionWindowMs semantics. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, no wasted words. Front-loaded with the main action. Every sentence contributes essential detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 5 parameters (all described in schema plus extra context), no output schema needed beyond { id }, and the presence of sibling tools, the description is complete and actionable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, yet the description adds critical meaning: steps must be ordered, conversionWindowMs is step-to-step limit with default 7 days, and without it a funnel is just co-occurrence. This significantly aids an agent.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool creates a new funnel on a project, details the nature of steps (ordered events/pageview paths), and mentions conversionWindowMs. This easily distinguishes it from siblings like compute_funnel or delete_funnel.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description covers what the tool does and key parameters, but does not explicitly specify when to use or alternatives. The purpose is clear enough for its role as a creation tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

delete_funnel: A
Destructive, Idempotent

Soft delete a funnel. The row is retained with status='deleted' and excluded from list/get/compute. Idempotent — calling twice is a no-op. Use if the funnel is obsolete; the record is kept for audit and cannot be undone from MCP.

Parameters
- funnelId (required)
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses soft delete behavior: row retained with status='deleted', excluded from list/get/compute, idempotent, cannot be undone. This adds significant detail beyond annotations (idempotentHint, destructiveHint) and explains the consequences and constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences: action, effect, usage guidance. Every sentence adds value, no redundancy or fluff. Front-loaded with the key point.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers behavioral details, idempotency, audit trail, and undo limitation. It does not mention error conditions or prerequisites (e.g., permissions), but for a soft delete with a single parameter and no output schema, it is reasonably complete. It could briefly note whether the record remains accessible elsewhere, but that is not required.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Only one parameter (funnelId) with no description in schema (0% coverage). The tool description does not explain the parameter meaning beyond the name; it is implicit from context. Adequate for such a simple parameter, but could add 'ID of the funnel to delete' for clarity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Soft delete a funnel' with verb and resource. Distinguishes from siblings like compute_funnel, get_funnel, list_funnels which are read or compute operations, and update_funnel which is a mutation but not deletion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use if the funnel is obsolete' and notes it 'cannot be undone from MCP', providing when-to-use and a limitation. Does not explicitly mention alternatives, but the context (siblings) and nature of soft delete are clear enough.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

events_count: A
Read-only, Idempotent

Count CUSTOM PRODUCT events for a specific project in a time window, optionally filtered to one event name and/or one user. Custom events are emitted by explicit analytics.track() calls in app code (signup_completed, payment_succeeded, etc.). This does NOT count page views — use pageviews_count or weekly_digest for those. Returns count, unique visitors, and a truncated flag if the scan hit the maximum scan size.

Parameters
- name (optional): Optional event name to filter by (e.g. 'signup_completed'). If omitted, counts all custom events in the window. Do NOT pass 'page_view' here — page views are in a separate table.
- user (optional): Filter to one visitor/user. Accepts userEmail (case-insensitive) or visitorId (exact). For the full per-user snapshot prefer user_activity.
- since (optional): Start of window as unix milliseconds. Defaults to 7 days ago.
- until (optional): End of window as unix milliseconds. Defaults to now.
- project (required): Project name (case-insensitive, e.g. 'slopbench') or project id from list_projects.
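A sketch of assembling events_count arguments that enforces the "do NOT pass 'page_view'" rule client-side. The helper name is made up; only non-omitted optional fields are included, matching the documented defaults.

```python
def events_count_args(project, name=None, user=None, since=None, until=None):
    """Build an arguments dict for an events_count call.

    Rejects name='page_view' up front, since page views live in a
    separate table and should go through pageviews_count instead.
    Omitted optionals are left out so the server applies its defaults.
    """
    if name == "page_view":
        raise ValueError("use pageviews_count for page views")
    args = {"project": project}
    for key, value in (("name", name), ("user", user),
                       ("since", since), ("until", until)):
        if value is not None:
            args[key] = value
    return args
```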
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint, so the tool's safety profile is clear. The description adds value by disclosing the 'truncated' flag behavior if scan size limits are hit, and mentions that the tool uses a separate table for page views (implying separate resource access). No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is four sentences, each adding essential information: purpose, differentiation from page views, optional filtering, and return value details. No fluff, perfectly front-loaded with the core action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that there is no output schema, the description explicitly lists return values (count, unique visitors, truncated flag). The tool has 5 parameters (one required), and the description covers the main use case, exclusions, and behavioral edge cases. Annotations are rich, and context signals indicate no nested objects or enums, so the description is complete for an agent to invoke correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema provides descriptions for all 5 parameters (100% coverage). The description adds extra semantics: it explains that the 'name' parameter should not be 'page_view' (because page views are in a separate table), and clarifies that 'user' accepts both email or visitor ID. This adds meaningful usage context beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Count') and a clear resource ('CUSTOM PRODUCT events') and explicitly distinguishes what it does not count ('page views'). It also relates the resource to custom analytics.track() calls, making the purpose unmistakable and differentiated from siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly tells when to use this tool ('Count custom product events') and when not to use it ('does NOT count page views — use pageviews_count or weekly_digest for those'). It also provides guidance on filtering by event name and user, and mentions alternatives for per-user snapshots (user_activity).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_funnel: A
Read-only, Idempotent

Return the full definition of one funnel by id: name, description, ordered steps, and conversion window.

Parameters
- funnelId (required): Funnel id from list_funnels or create_funnel.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds value by specifying the exact return fields (name, description, ordered steps, conversion window), going beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence of 20 words, front-loaded with the core purpose. Every word contributes meaning, with no filler or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple input (one parameter), rich annotations, and no output schema, the description sufficiently covers what the tool does and returns. It is complete for the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for funnelId. The description does not add new information about the parameter beyond what the schema already provides, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the specific verb 'return' and resource 'full definition of one funnel', listing included fields (name, description, ordered steps, conversion window). This clearly distinguishes from siblings like list_funnels (list all) and create_funnel (create).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly notes the id requirement from list_funnels or create_funnel, but does not explicitly state when to use this tool over alternatives like compute_funnel or events_count. No when-not-to-use guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_usage: A
Read-only, Idempotent

Return the current month's custom-event usage, monthly quota, retention days, and plan name for the team.

Parameters

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, destructiveHint=false. The description adds value by specifying the exact data returned (usage, quota, retention, plan), which is not in annotations. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that covers all key aspects without unnecessary words. It front-loads the action and resource, making it easy to scan.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero parameters, rich annotations, and no output schema, the description is mostly complete. The read-only nature and current-month scope are already covered. Minor gap: it does not specify whether the data is live or cached.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are zero parameters, so the schema coverage is effectively 100%. The description does not need to add parameter info, and it clearly states what the tool returns, compensating for the lack of output schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies exactly what the tool returns: current month's custom-event usage, monthly quota, retention days, and plan name. It clearly identifies the resource (usage) and the action (return/get), and it distinguishes itself from sibling tools which focus on different resources like events or pages.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when needing team usage and plan information, but it does not explicitly state when to use this tool versus alternatives like events_count or list_projects. No guidance on prerequisites or limitations is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_funnels: A
Read-only, Idempotent

List active funnels defined on a project. A funnel is a saved ordered sequence of steps (events or pageview paths) that Convalytics computes step-by-step conversion for. Soft-deleted funnels are excluded.

Parameters
- project (required): Project name (case-insensitive, e.g. 'slopbench') or project id from list_projects.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint, idempotentHint, destructiveHint. The description adds that soft-deleted funnels are excluded, which is an important behavioral detail beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with no fluff. The action verb 'List' is front-loaded, and every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given low complexity (one param, no output schema) and rich annotations, the description covers purpose and a key exclusion. It implies the return is a list, which is sufficient for a list tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with parameter description for 'project'. The description does not add extra meaning beyond the schema, so baseline 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists active funnels and defines what a funnel is. It distinguishes from siblings like get_funnel (single) and compute_funnel (computation) by specifying 'list' and 'active'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly guides when to use (to see all active funnels) and mentions exclusion of soft-deleted funnels. While it doesn't explicitly name alternatives, the sibling context makes it clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_projects: A
Read-only, Idempotent

List all Convalytics projects on the team this token belongs to. Useful when the agent needs to confirm the project it's querying against. No arguments.

Parameters

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide strong behavioral hints (readOnlyHint, idempotentHint, destructiveHint false). The description adds that it requires no arguments, which is consistent with the schema. No contradiction, and the description confirms safe, read-only behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: two sentences, no filler. Front-loaded: first sentence states the purpose clearly. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has zero parameters, no output schema, and a simple purpose, the description is complete. It states what the tool does and when to use it. No additional information is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and there are zero parameters. The description confirms no arguments. The baseline for a no-parameter tool would be higher, but the description adds no semantics beyond 'no arguments' (already implied by the schema), so a 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'List all Convalytics projects on the team this token belongs to', which is a specific verb (list) and resource (projects). It clearly distinguishes from siblings by mentioning it lists all projects the token can access, while other tools like 'recent_events' or 'top_pages' have different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description says 'Useful when the agent needs to confirm the project it's querying against', indicating a specific use case. It implies this is for confirmation rather than filtering, which helps differentiate from queries. However, it lacks explicit when-not-to-use or alternative tool mentions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pageviews_count: A
Read-only, Idempotent

Count page views for a specific project in a time window. Page views are the automatic hits captured by the browser script tag (separate from custom events). Use this for web-traffic questions like 'how many pageviews in the last 24 hours'. Default window is the last 7 days. Pass user to scope to one visitor.

Parameters
- user (optional): Filter to one visitor/user. Accepts userEmail (case-insensitive) or visitorId (exact). For the full per-user snapshot prefer user_activity.
- since (optional): Start of window as unix milliseconds. Defaults to 7 days ago.
- until (optional): End of window as unix milliseconds. Defaults to now.
- project (required): Project name (case-insensitive, e.g. 'slopbench') or project id from list_projects.
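A sketch of the 'how many pageviews in the last 24 hours' case from the description, expressed as a pageviews_count argument payload. The helper name is illustrative; without explicit since/until the server would default to the last 7 days instead.

```python
import time

def last_24h_pageviews_args(project, user=None):
    """Arguments for a pageviews_count call scoped to the last 24 hours.

    since/until are unix milliseconds; passing them explicitly
    overrides the server's 7-day default window.
    """
    now_ms = int(time.time() * 1000)
    args = {
        "project": project,
        "since": now_ms - 86_400_000,  # 24 hours ago
        "until": now_ms,
    }
    if user is not None:
        args["user"] = user
    return args
```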
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the description does not need to repeat safety. The description adds behavioral context: pageviews are distinct from custom events, default window is 7 days, and scoping to user is possible. It doesn't document pagination or return format, but that is acceptable given no output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, each serving a purpose: defines action and scope, clarifies distinction from custom events, gives use-case example, explains default and user option. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple count tool with 4 params (all described in schema) and good annotations, the description covers the key behavioral and usage aspects. Minor omission: no mention of pagination or result format, but as a count tool, the output is likely a single number, so not critical.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds value by explaining the default window (7 days) for since/until, and clarifying that user accepts email or visitorId with case-insensitivity, and directs to user_activity for detailed per-user snapshot. This elevates it above schema alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it counts pageviews for a project within a time window, distinguishes from custom events by specifying 'automatic hits captured by the browser script tag', and provides an example question. It differentiates from sibling tools like events_count, top_pages, and user_activity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says to use for 'web-traffic questions like how many pageviews in the last 24 hours'. It also mentions default window (last 7 days) and suggests using user_activity for a per-user snapshot, providing an explicit alternative.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recent_events: A
Read-only · Idempotent

Return the most recent custom events for a specific project, optionally filtered to one event name and/or one user. PII (userEmail, userName, props) is redacted by default; pass redact: false to include them.

Parameters

| Name | Required | Description |
| --- | --- | --- |
| name | No | Optional event name to filter by (e.g. 'signup_completed'). Omit to return events of any name. |
| user | No | Optional. Filter to one visitor/user. Accepts userEmail (case-insensitive) or visitorId (exact). For the full per-user snapshot prefer user_activity. |
| limit | No | Maximum number of events to return. Default 20, max 100. |
| redact | No | If true (default), userEmail/userName are null and props is {}. Set to false to include them. |
| project | Yes | Project name (case-insensitive, e.g. 'slopbench') or project id from list_projects. |
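As a concrete illustration, here is a minimal Python sketch of the JSON-RPC payload an MCP client might send to call this tool. The argument values are hypothetical; the envelope follows the standard MCP tools/call shape, and the argument names come from the parameters above.

```python
# Hypothetical tools/call request for recent_events. Values are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "recent_events",
        "arguments": {
            "project": "slopbench",      # required: project name or id
            "name": "signup_completed",  # optional event-name filter
            "limit": 20,                 # default 20, max 100
            # "redact" omitted: defaults to true, so PII stays hidden
        },
    },
}
```

Omitting redact keeps the safe default, which matches the description's privacy-first framing.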
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true and destructiveHint=false. The description adds critical detail: PII is redacted by default, and the 'redact' parameter can disable this. This goes beyond annotations by explaining the privacy behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, moderately long sentence that covers purpose, filters, and redaction behavior. It is efficient but could be slightly more structured for readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (5 parameters, optional filters, redaction) and no output schema, the description is thorough. It explains all key behaviors: filtering options, default redaction, the ability to disable redaction, and the tool's scope (recent events vs. user snapshot). No significant gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already describes all parameters thoroughly (100% coverage). The description adds context by explaining the redaction behavior, clarifying that 'user' accepts both email and visitor ID, and mentioning the limit default and max. This adds value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns 'most recent custom events' for a specific project, with optional filters by event name and user. It distinguishes from sibling tools by noting that for a full per-user snapshot, one should use 'user_activity' instead.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description specifies when to use the optional filters (by name, user) and even points to an alternative tool ('user_activity') for comprehensive user snapshots, but does not explicitly mention when not to use this tool or other alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

top_pages: A
Read-only · Idempotent

Return the top pages for a specific project, ranked by views in a time window. Default window is the last 7 days. Use list_projects first if you don't know the project name. Returns path, views, uniqueVisitors, and percentage of total views for each page. Pass user to see pages a specific visitor hit.

Parameters

| Name | Required | Description |
| --- | --- | --- |
| user | No | Optional. Filter to one visitor/user. Accepts userEmail (case-insensitive) or visitorId (exact). For the full per-user snapshot prefer user_activity. |
| limit | No | Maximum number of pages to return. Default 20, max 50. |
| since | No | Start of window as unix milliseconds. Defaults to 7 days ago. |
| until | No | End of window as unix milliseconds. Defaults to now. |
| project | Yes | Project name (case-insensitive, e.g. 'slopbench') or project id from list_projects. |
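The defaults above can be mirrored client-side. This Python sketch computes the window the documentation promises: since = 7 days ago, until = now, both in unix milliseconds. It is an assumption that the server computes the same values when these fields are omitted.

```python
import time

# Client-side mirror of the documented defaults for since/until.
DAY_MS = 24 * 60 * 60 * 1000
now_ms = int(time.time() * 1000)
args = {
    "project": "slopbench",
    "since": now_ms - 7 * DAY_MS,  # default window start: 7 days ago
    "until": now_ms,               # default window end: now
    "limit": 20,                   # schema caps this at 50
}
```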
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false; the description adds value by explaining that results are ranked by views, that it returns paths/views/uniqueVisitors/percentage, and describing the time window behavior. It lacks a warning about potential timeouts on large windows, but is overall transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is five sentences, each adding value: purpose, defaults, prerequisite, return fields, and user-filter guidance. No wasted words; information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has 5 parameters (100% schema coverage), no output schema, and comprehensive annotations. The description adequately covers return values and usage patterns. It could mention that limit is capped at 50, but the schema covers that. An explicit sort order is missing, though descending order is implied by 'ranked by views.' Overall complete given the complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed descriptions for each parameter. The description adds context beyond schema: clarifies that `user` filter is optional and that `user_activity` is preferred for full snapshot. Also explains that `project` accepts name or id. Slight redundancy but helpful.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool returns top pages ranked by views in a time window for a specific project. It distinguishes from sibling tools like `user_activity` (full per-user snapshot) and `list_projects` (prerequisite). The verb 'Return' and resource 'top pages' are specific and actionable.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly guides to use `list_projects` first if project name is unknown. Also advises using `user_activity` for full per-user snapshot when filtering by user, indicating when not to use this tool. The default window of last 7 days is stated, aiding contextual decision.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

top_referrers: A
Read-only · Idempotent

Return the top referring hosts for a specific project, ranked by visit count in a time window. Includes '(direct)' for visits with no referrer. Default window is the last 7 days.

Parameters

| Name | Required | Description |
| --- | --- | --- |
| limit | No | Maximum number of referrers to return. Default 10, max 50. |
| since | No | Start of window as unix milliseconds. Defaults to 7 days ago. |
| until | No | End of window as unix milliseconds. Defaults to now. |
| project | Yes | Project name (case-insensitive, e.g. 'slopbench') or project id from list_projects. |
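The '(direct)' behavior the description mentions can be sketched in a few lines of Python: visits with no referrer are grouped under a synthetic '(direct)' host. The record shape below is illustrative, not the server's actual storage format.

```python
from collections import Counter

# Bucket visits by referring host, folding missing/empty referrers
# into a synthetic "(direct)" entry.
visits = [
    {"referrer": "google.com"},
    {"referrer": None},        # no referrer -> "(direct)"
    {"referrer": "google.com"},
    {"referrer": ""},          # empty referrer also treated as direct
]
counts = Counter(v["referrer"] or "(direct)" for v in visits)
top = counts.most_common(10)   # limit defaults to 10
```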
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the description's main job is to add behavioral context beyond those. The description adds that it includes '(direct)' for no referrer and defaults to the last 7 days, which is useful and consistent with the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences with no wasted words: it states the purpose, notes the '(direct)' special case, and mentions the default window. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and moderate complexity (4 params, 1 required), the description covers the key behavioral aspects (ranking, default window, direct visits). It doesn't discuss pagination or rate limits, but for a read-only analytics tool with idempotent annotations, this is sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already documents all parameters. The description adds no additional parameter details beyond what the schema provides, but this is acceptable because the schema is already comprehensive. The description does mention the default window (last 7 days) and the '(direct)' behavior, which is helpful context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns top referring hosts for a project, ranked by visit count, and mentions the special case of '(direct)'. This differentiates it from siblings like top_pages (which returns top pages) and events_count (which counts events).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use this tool (to get top referrers) but does not explicitly state when not to use it or mention alternatives. However, the sibling tools (events_count, top_pages, etc.) suggest other analytics queries, and the description is clear enough to avoid confusion.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

update_funnel: A
Idempotent

Patch an existing funnel. Any subset of name/description/steps/conversionWindowMs. Refuses updates on deleted funnels.

Parameters

| Name | Required | Description |
| --- | --- | --- |
| name | No | |
| steps | No | |
| funnelId | Yes | |
| description | No | |
| conversionWindowMs | No | |
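Since the schema lacks parameter descriptions, a hedged sketch helps show the patch semantics: only funnelId is required, and any subset of the other fields may be included. The funnelId value below is made up; in practice it would come from list_funnels.

```python
# Hypothetical partial-update payload for update_funnel.
patch = {
    "funnelId": "fn_123",                          # required; illustrative id
    "name": "Signup funnel v2",
    "conversionWindowMs": 7 * 24 * 60 * 60 * 1000,
    # steps and description omitted -> left unchanged on the server
}
UPDATABLE = {"name", "description", "steps", "conversionWindowMs"}
extra = set(patch) - {"funnelId"} - UPDATABLE  # should be empty
```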
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate idempotentHint=true and destructiveHint=false. The description adds valuable behavioral info: it supports partial updates ('Any subset') and refuses updates on deleted funnels. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single sentence that packs purpose, supported fields, and a behavioral constraint. No fluff or redundant information. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description could mention the return value (e.g., the updated funnel object) or error cases. It covers the main points but lacks details on the success response or validation errors. Adequate but not comprehensive for a mutation tool with parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so description must compensate. It lists 'name/description/steps/conversionWindowMs' but omits funnelId (required). It provides no details on parameter constraints, format, or the nested steps structure (kind, match, label). This is insufficient for a 5-parameter tool.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'patch' and resource 'existing funnel', lists the updatable fields 'name/description/steps/conversionWindowMs', and adds a constraint 'Refuses updates on deleted funnels'. This effectively distinguishes it from siblings like create_funnel (creates new) and delete_funnel (deletes).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not explicitly state when to use this tool versus alternatives (e.g., when to update vs create). It implies usage for updating an existing funnel but provides no exclusions or alternative references. The constraint on deleted funnels is a behavior note, not a clear usage guideline.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

user_activity: A
Read-only · Idempotent

Composite snapshot of a specific user's activity on a project. Returns an identity block (visitorId, userEmail, userName, firstSeen, lastSeen), total pageviews, total custom events, session count, top pages this user visited, their most-fired event names, and their 20 most recent events with props. Use this for 'how is dancleary54@gmail.com using my app?' style questions — one call, full picture. For ad-hoc drill-down (just a count, just recent events) pass user to the individual tools instead. Default window is the last 7 days.

Parameters

| Name | Required | Description |
| --- | --- | --- |
| user | Yes | User identifier. Accepts userEmail (case-insensitive, e.g. 'dan@example.com') or visitorId (the exact string passed as userId on the original track() call). |
| since | No | Start of window as unix milliseconds. Defaults to 7 days ago. |
| until | No | End of window as unix milliseconds. Defaults to now. |
| project | Yes | Project name (case-insensitive, e.g. 'slopbench') or project id from list_projects. |
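The two accepted user identifiers follow different matching rules, which a client could mirror as below. This helper is an illustration of the rule stated in the schema (emails case-insensitive, visitor ids exact), not part of the server's API, and the '@' heuristic for telling them apart is an assumption.

```python
def normalize_user(value: str) -> str:
    """Lower-case emails; leave visitor ids untouched."""
    return value.lower() if "@" in value else value
```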
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds that the default window is the last 7 days, but doesn't mention rate limits or what happens if the user is not found. It still adds useful context beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description front-loads its purpose and keeps each of its five sentences purposeful. It could be slightly tighter, but there are no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description details the return structure (identity block, counts, top pages, recent events). For a composite tool, this is complete enough for an agent to understand what it gets.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema covers 100% of parameters, but the description explains what `user` accepts (email or visitorId) and clarifies the defaults for `since` and `until`. It provides meaning beyond the schema for the user parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it returns a 'composite snapshot' of user activity on a project, listing specific data fields. It distinguishes itself from siblings by noting this is a one-call full picture vs individual tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly tells when to use this tool ('how is dancleary54@gmail.com using my app?' questions) and when not to (ad-hoc drill-down), directing to pass `user` to individual tools instead.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

weekly_digest: A
Read-only · Idempotent

Composite snapshot of a project's web analytics over a lookback window. Returns unique visitors, pageviews, sessions, bounce rate, average session duration, top 5 pages, top 5 referrers, total custom events, and top 5 event names. Includes period-over-period comparison against the prior equal-length window unless compare: false. Prefer this over chaining top_pages + top_referrers + events_count when the agent just wants to report on the week.

Parameters

| Name | Required | Description |
| --- | --- | --- |
| days | No | Lookback window in days, 1 to 90. Default 7. |
| compare | No | Include period-over-period comparison against the prior equal-length window. Default true. Set false for faster response when only current numbers matter. |
| project | Yes | Project name (case-insensitive, e.g. 'slopbench') or project id from list_projects. |
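The "prior equal-length window" implied by the compare flag can be sketched as follows: if the current window covers the last `days` days, the comparison window is the `days` days immediately before it. This is assumed behavior based on the description; the actual output shape is not documented.

```python
import time

# Current window and the assumed prior equal-length comparison window,
# both expressed as (start_ms, end_ms) tuples in unix milliseconds.
days = 7
DAY_MS = 24 * 60 * 60 * 1000
now_ms = int(time.time() * 1000)
current = (now_ms - days * DAY_MS, now_ms)
prior = (current[0] - days * DAY_MS, current[0])
```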
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint (all safe). The description adds behavioral details: it includes period-over-period comparison unless compare:false, and mentions faster response when setting compare false. This goes beyond annotations, though no output schema exists to detail return format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is four sentences, each earning its place: the first two state the purpose and list all return fields comprehensively; the last two cover the comparison behavior and usage guidance. No fluff, and critical info is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description lists all returned metrics, which is sufficient for an agent. The parameter descriptions in schema are adequate. Minor gap: does not mention that there is no pagination or limit on top 5 lists, but that is implied. Still very complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds context by explaining the default and effect of 'compare' on response speed, and clarifies 'project' case-insensitivity. This justifies a 4 over baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly frames the action, a composite snapshot of a project's web analytics, and lists the specific metrics returned. It distinguishes itself from siblings like top_pages and top_referrers by stating it is a composite alternative, avoiding confusion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Prefer this over chaining top_pages + top_referrers + events_count when the agent just wants to report on the week,' providing clear guidance on when to use this tool and when not to, with direct alternatives named.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
