
rybbit-mcp

by nks-hub

Metric Breakdown

rybbit_get_metric
Read-only · Idempotent

Analyze website metrics by dimension to track performance across pages, traffic sources, user devices, and geographic locations.

Instructions

Get metric breakdown by dimension. Use parameter='pathname' for top pages, 'browser'/'operating_system'/'device_type' for tech stats, 'country'/'city' for geo, 'utm_source'/'utm_campaign' for marketing, 'referrer'/'channel' for traffic sources, 'entry_page'/'exit_page' for user flow. Returns sorted list with counts, percentages, bounce rate, and session duration.
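As a sketch of what a call might look like, the arguments below follow the input schema in the next section. The site ID, dates, and timezone are placeholder values, and the exact request wrapping depends on your MCP client.

```typescript
// Hypothetical arguments for rybbit_get_metric; all values are illustrative.
const topPagesArgs = {
  siteId: "example.com",     // numeric ID or domain identifier
  parameter: "pathname",     // 'pathname' => top-pages breakdown
  startDate: "2024-01-01",   // ISO YYYY-MM-DD
  endDate: "2024-01-31",
  timeZone: "Europe/Prague", // IANA timezone; UTC if omitted
  page: 1,                   // 1-indexed
  limit: 50,                 // max 200
};
```

Swapping `parameter` for `"country"`, `"utm_source"`, or `"exit_page"` reuses the same shape for geo, marketing, or user-flow breakdowns.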

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| siteId | Yes | Site ID (numeric ID or domain identifier) | — |
| startDate | No | Start date in ISO format (YYYY-MM-DD) | — |
| endDate | No | End date in ISO format (YYYY-MM-DD) | — |
| timeZone | No | IANA timezone (e.g., Europe/Prague) | UTC |
| filters | No | Array of filters. Example: `[{parameter:'browser',type:'equals',value:['Chrome']},{parameter:'country',type:'equals',value:['US','DE']}]` | — |
| pastMinutesStart | No | Alternative to dates: minutes ago at window start (e.g., 60 = last hour) | — |
| pastMinutesEnd | No | Alternative to dates: minutes ago at window end | 0 (now) |
| parameter | Yes | Metric dimension to break down by | — |
| page | No | Page number, 1-indexed | 1 |
| limit | No | Results per page (max 200) | 20–50, depending on endpoint |
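The two time-selection modes and the filters shape from the table can be sketched as plain objects. Values are illustrative, and note that combining multiple filters with AND semantics is an assumption here; the schema only documents the array shape.

```typescript
// Absolute window: explicit ISO dates.
const absoluteWindow = { startDate: "2024-01-01", endDate: "2024-01-31" };

// Relative window: minutes before now; 60 -> 0 covers the last hour.
const relativeWindow = { pastMinutesStart: 60, pastMinutesEnd: 0 };

// Filters follow the example from the schema: Chrome users in the US or
// Germany. A multi-value `value` array matches any listed value.
const filters = [
  { parameter: "browser", type: "equals", value: ["Chrome"] },
  { parameter: "country", type: "equals", value: ["US", "DE"] },
];
```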

Implementation Reference

  • The handler function for the rybbit_get_metric tool, which constructs analytics parameters and fetches metric data from the Rybbit client.

```typescript
async (args) => {
  try {
    const { siteId, parameter, page, limit, ...rest } = args as {
      siteId: string;
      parameter: z.infer<typeof metricParameterSchema>;
      page?: number;
      limit?: number;
      startDate?: string;
      endDate?: string;
      timeZone?: string;
      filters?: Array<{ parameter: string; type: string; value: (string | number)[] }>;
      pastMinutesStart?: number;
      pastMinutesEnd?: number;
    };

    const params = client.buildAnalyticsParams({ ...rest, page, limit });
    params.parameter = parameter;

    const data = await client.get<MetricEntry[]>(
      `/sites/${siteId}/metric`,
      params
    );

    return {
      content: [
        {
          type: "text" as const,
          text: truncateResponse(data),
        },
      ],
    };
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err);
    return {
      content: [{ type: "text" as const, text: `Error: ${message}` }],
      isError: true,
    };
  }
}
```
  • The registration of the rybbit_get_metric tool, including its schema and metadata. The excerpt is truncated in the source; the trailing comment and closing parenthesis below are added for structural completeness.

```typescript
server.registerTool(
  "rybbit_get_metric",
  {
    title: "Metric Breakdown",
    description:
      "Get metric breakdown by dimension. Use parameter='pathname' for top pages, 'browser'/'operating_system'/'device_type' for tech stats, 'country'/'city' for geo, 'utm_source'/'utm_campaign' for marketing, 'referrer'/'channel' for traffic sources, 'entry_page'/'exit_page' for user flow. Returns sorted list with counts, percentages, bounce rate, and session duration.",
    annotations: {
      readOnlyHint: true,
      idempotentHint: true,
      openWorldHint: true,
      destructiveHint: false,
    },
    inputSchema: {
      ...analyticsInputSchema,
      parameter: metricParameterSchema,
      ...paginationSchema,
    },
  },
  // ...the async handler shown above is passed as the third argument
);
```
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish read-only, idempotent, non-destructive traits. The description adds valuable behavioral context by disclosing the return structure (sorted list with counts, percentages, bounce rate, session duration) and implying sorting behavior, which compensates for the missing output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three efficient sentences with zero waste: purpose declaration, parameter usage guide, and return value description. Front-loaded with the core action and densely packed with actionable parameter mappings without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex 10-parameter analytics tool with no output schema, the description adequately covers the return structure and parameter semantics. It could improve by explicitly noting pagination behavior: the page and limit parameters exist in the schema, but how they affect the returned data isn't described.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema coverage, the description adds significant semantic value by mapping abstract parameter values to concrete analytical goals (e.g., 'utm_source' for marketing analysis). This contextual translation from technical enums to business questions is highly useful for agent selection.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves metric breakdowns by dimension, using specific verbs ('Get', 'breakdown') and resource references. It effectively distinguishes from siblings like rybbit_get_overview or rybbit_get_timeseries by emphasizing the dimensional analysis capability and specific return metrics (bounce rate, session duration).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent concrete guidance mapping parameter values to business use cases (e.g., 'pathname' for top pages, 'browser' for tech stats, 'country' for geo). However, it lacks explicit comparison to sibling tools (e.g., when to use this versus rybbit_get_overview or rybbit_get_journeys).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/nks-hub/rybbit-mcp'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.