
Better Google Search Console

by houtini-ai

compare_periods

Compare two date ranges side-by-side across queries, pages, devices, or countries to analyze performance changes with absolute and percentage differences.

Instructions

Compare two arbitrary date ranges side-by-side across any dimension (query, page, device, country). Shows absolute and percentage changes. Useful for before/after analysis, seasonal comparisons, or measuring the impact of changes. Requires synced data — run setup first if needed.

Input Schema

  • siteUrl (required) — GSC property URL.
  • period1Start (required) — Period 1 start date (YYYY-MM-DD).
  • period1End (required) — Period 1 end date (YYYY-MM-DD).
  • period2Start (required) — Period 2 start date (YYYY-MM-DD).
  • period2End (required) — Period 2 end date (YYYY-MM-DD).
  • dimension (optional, default "query") — Dimension to group by.
  • limit (optional, default 50) — Max rows.
  • pageFilter (optional) — Filter by URL path (uses LIKE).
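For orientation, a hypothetical input matching this schema might look like the following. All values are illustrative; only siteUrl and the two date ranges are required.

```typescript
// Illustrative compare_periods input: March 2025 vs. February 2025,
// restricted to blog URLs. Values are examples, not server defaults.
const exampleInput = {
  siteUrl: 'https://example.com/',  // GSC property URL
  period1Start: '2025-02-01',       // baseline period
  period1End: '2025-02-28',
  period2Start: '2025-03-01',       // comparison period
  period2End: '2025-03-31',
  dimension: 'page',                // optional, defaults to 'query'
  limit: 25,                        // optional, defaults to 50
  pageFilter: '/blog/',             // optional, matched with SQL LIKE
};
```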

Implementation Reference

  • Main handler function that implements the compare_periods tool logic. It compares two arbitrary date ranges side-by-side across any dimension (query, page, device, country), calculating summary metrics and dimension breakdowns with absolute and percentage changes.
    export function comparePeriods(params: ComparePeriodParams): any {
      const {
        siteUrl,
        period1Start,
        period1End,
        period2Start,
        period2End,
        dimension = 'query',
        limit = 50,
        pageFilter,
      } = params;
    
      const dbPath = getDbPath(siteUrl);
      if (!existsSync(dbPath)) {
        throw new Error(`No database found for "${siteUrl}". Run sync_gsc_data first.`);
      }
    
      const validDimensions = ['query', 'page', 'device', 'country'];
      if (!validDimensions.includes(dimension)) {
        throw new Error(`Invalid dimension: "${dimension}". Valid: ${validDimensions.join(', ')}`);
      }
    
      const db = new Database(dbPath);
      try {
        let pageFilterClause = '';
        const extraValues: any[] = [];
        if (pageFilter) {
          pageFilterClause = ' AND page LIKE ?';
          extraValues.push(`%${pageFilter}%`);
        }
    
        // Summary totals for each period
        const period1Summary = db.queryOne(`
          SELECT
            COALESCE(SUM(clicks), 0) as clicks,
            COALESCE(SUM(impressions), 0) as impressions,
            ROUND(CAST(SUM(clicks) AS REAL) / NULLIF(SUM(impressions), 0), 4) as ctr,
            ROUND(AVG(position), 1) as avg_position
          FROM search_analytics
          WHERE date BETWEEN ? AND ?${pageFilterClause}
        `, [period1Start, period1End, ...extraValues]);
    
        const period2Summary = db.queryOne(`
          SELECT
            COALESCE(SUM(clicks), 0) as clicks,
            COALESCE(SUM(impressions), 0) as impressions,
            ROUND(CAST(SUM(clicks) AS REAL) / NULLIF(SUM(impressions), 0), 4) as ctr,
            ROUND(AVG(position), 1) as avg_position
          FROM search_analytics
          WHERE date BETWEEN ? AND ?${pageFilterClause}
        `, [period2Start, period2End, ...extraValues]);
    
        // Dimension breakdown
        const dimFilter = dimension === 'query' ? ' AND query IS NOT NULL' :
                          dimension === 'page' ? ' AND page IS NOT NULL' : '';
    
        const rows = db.query(`
          SELECT
            ${dimension} as dimension_value,
            SUM(CASE WHEN date BETWEEN ? AND ? THEN clicks ELSE 0 END) as period1_clicks,
            SUM(CASE WHEN date BETWEEN ? AND ? THEN impressions ELSE 0 END) as period1_impressions,
            ROUND(
              CAST(SUM(CASE WHEN date BETWEEN ? AND ? THEN clicks ELSE 0 END) AS REAL)
              / NULLIF(SUM(CASE WHEN date BETWEEN ? AND ? THEN impressions ELSE 0 END), 0), 4
            ) as period1_ctr,
            SUM(CASE WHEN date BETWEEN ? AND ? THEN clicks ELSE 0 END) as period2_clicks,
            SUM(CASE WHEN date BETWEEN ? AND ? THEN impressions ELSE 0 END) as period2_impressions,
            ROUND(
              CAST(SUM(CASE WHEN date BETWEEN ? AND ? THEN clicks ELSE 0 END) AS REAL)
              / NULLIF(SUM(CASE WHEN date BETWEEN ? AND ? THEN impressions ELSE 0 END), 0), 4
            ) as period2_ctr,
            SUM(CASE WHEN date BETWEEN ? AND ? THEN clicks ELSE 0 END)
              - SUM(CASE WHEN date BETWEEN ? AND ? THEN clicks ELSE 0 END) as click_change,
            ROUND(
              (SUM(CASE WHEN date BETWEEN ? AND ? THEN clicks ELSE 0 END)
               - SUM(CASE WHEN date BETWEEN ? AND ? THEN clicks ELSE 0 END))
              * 100.0
              / NULLIF(SUM(CASE WHEN date BETWEEN ? AND ? THEN clicks ELSE 0 END), 0),
              1
            ) as click_change_pct
          FROM search_analytics
          WHERE (date BETWEEN ? AND ? OR date BETWEEN ? AND ?)${dimFilter}${pageFilterClause}
          GROUP BY ${dimension}
          HAVING period1_clicks > 0 OR period2_clicks > 0
          ORDER BY period2_clicks DESC
          LIMIT ?
        `, [
          // period1 clicks
          period1Start, period1End,
          // period1 impressions
          period1Start, period1End,
          // period1 ctr numerator
          period1Start, period1End,
          // period1 ctr denominator
          period1Start, period1End,
          // period2 clicks
          period2Start, period2End,
          // period2 impressions
          period2Start, period2End,
          // period2 ctr numerator
          period2Start, period2End,
          // period2 ctr denominator
          period2Start, period2End,
          // click_change
          period2Start, period2End,
          period1Start, period1End,
          // click_change_pct numerator
          period2Start, period2End,
          period1Start, period1End,
          // click_change_pct denominator
          period1Start, period1End,
          // WHERE date ranges
          period1Start, period1End,
          period2Start, period2End,
          ...extraValues,
          limit,
        ]);
    
        const pctChange = (curr: number, prev: number): number | null =>
          prev === 0 ? null : Math.round(((curr - prev) / prev) * 1000) / 10;
    
        return {
          dimension,
          period1: { startDate: period1Start, endDate: period1End },
          period2: { startDate: period2Start, endDate: period2End },
          summary: {
            period1: period1Summary,
            period2: period2Summary,
            changes: {
              clicks: period2Summary.clicks - period1Summary.clicks,
              clicksPct: pctChange(period2Summary.clicks, period1Summary.clicks),
              impressions: period2Summary.impressions - period1Summary.impressions,
              impressionsPct: pctChange(period2Summary.impressions, period1Summary.impressions),
            },
          },
          rows,
        };
      } finally {
        db.close();
      }
    }
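The percentage-change helper defined inside the handler is self-contained and can be exercised on its own; note that it returns null rather than dividing by zero, and rounds to one decimal place:

```typescript
// Same logic as the pctChange helper inside comparePeriods: percentage
// change from prev to curr, rounded to one decimal, null when prev is 0.
const pctChange = (curr: number, prev: number): number | null =>
  prev === 0 ? null : Math.round(((curr - prev) / prev) * 1000) / 10;
```

For example, pctChange(150, 100) yields 50, pctChange(90, 120) yields -25, and pctChange(5, 0) yields null instead of a division-by-zero result.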
  • TypeScript interface defining the input parameters for the compare_periods tool, including siteUrl, date ranges for both periods, dimension, limit, and pageFilter.
    export interface ComparePeriodParams {
      siteUrl: string;
      period1Start: string;
      period1End: string;
      period2Start: string;
      period2End: string;
      dimension?: string;
      limit?: number;
      pageFilter?: string;
    }
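The interface types the dates as plain strings, so a malformed value only surfaces when the SQL BETWEEN silently matches nothing. A caller-side pre-flight check (hypothetical, not part of the server) could catch that earlier:

```typescript
// Hypothetical helper: verifies a string is a real calendar date in
// YYYY-MM-DD form before it reaches the SQL queries in comparePeriods.
function isIsoDate(s: string): boolean {
  if (!/^\d{4}-\d{2}-\d{2}$/.test(s)) return false;
  const d = new Date(s + 'T00:00:00Z');
  // Invalid days (e.g. 2025-02-30) produce an Invalid Date or roll over;
  // the round-trip comparison rejects both cases.
  return !Number.isNaN(d.getTime()) && d.toISOString().slice(0, 10) === s;
}
```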
  • src/server.ts:287-308 (registration)
    MCP tool registration that defines the tool name 'compare_periods', its description, input schema using Zod validation, and async handler that calls the comparePeriods function.
    server.tool(
      'compare_periods',
      'Compare two arbitrary date ranges side-by-side across any dimension (query, page, device, country). Shows absolute and percentage changes. Useful for before/after analysis, seasonal comparisons, or measuring the impact of changes. Requires synced data — run setup first if needed.',
      {
        siteUrl: z.string().describe('GSC property URL.'),
        period1Start: z.string().describe('Period 1 start date (YYYY-MM-DD).'),
        period1End: z.string().describe('Period 1 end date (YYYY-MM-DD).'),
        period2Start: z.string().describe('Period 2 start date (YYYY-MM-DD).'),
        period2End: z.string().describe('Period 2 end date (YYYY-MM-DD).'),
        dimension: z.enum(['query', 'page', 'device', 'country']).optional().describe('Dimension to group by. Default: "query".'),
        limit: z.number().optional().describe('Max rows. Default: 50.'),
        pageFilter: z.string().optional().describe('Filter by URL path (uses LIKE).'),
      },
      async (args) => {
        try {
          const result = comparePeriods(args);
          return { content: [{ type: 'text', text: JSON.stringify(result, null, 2) }] };
        } catch (error) {
          return { content: [{ type: 'text', text: JSON.stringify({ error: (error as Error).message }) }], isError: true };
        }
      }
    );
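Both the success and error paths of the handler return MCP text content. The wrapping can be sketched independently of the server (a minimal illustration, not the SDK's actual types):

```typescript
// Minimal sketch of the response wrapping used by the handler above:
// results and errors are both serialised to JSON inside a text content block.
function toTextResult(
  payload: unknown,
  isError = false
): { content: { type: string; text: string }[]; isError?: boolean } {
  const base = {
    content: [{ type: 'text', text: JSON.stringify(payload, null, 2) }],
  };
  return isError ? { ...base, isError: true } : base;
}
```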
  • Helper function getDbPath that constructs the database file path from a site URL, used by the compare_periods handler to locate the GSC data database.
    export function getDbPath(siteUrl: string): string {
      return join(getDataDir(), sanitizeSiteUrl(siteUrl) + '.db');
    }
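getDataDir and sanitizeSiteUrl are referenced here but not shown. A plausible reading (an assumption for illustration, not the project's actual code) is that the site URL is flattened into a filesystem-safe name before '.db' is appended:

```typescript
import { join } from 'path';

// Assumed behaviour, for illustration only: strip the URL scheme and
// replace characters that are unsafe in filenames with underscores.
function sanitizeSiteUrl(siteUrl: string): string {
  return siteUrl.replace(/^https?:\/\//, '').replace(/[^a-zA-Z0-9.-]/g, '_');
}

// Sketch of getDbPath with the data directory passed in explicitly,
// since getDataDir is not shown in the reference.
function getDbPathSketch(siteUrl: string, dataDir: string): string {
  return join(dataDir, sanitizeSiteUrl(siteUrl) + '.db');
}
```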
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context about the tool's output ('Shows absolute and percentage changes') and data requirements ('Requires synced data'), but does not cover other behavioral aspects like error handling, performance characteristics, or authentication needs, leaving gaps for a tool with 8 parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the first sentence stating the core functionality, followed by supporting details about output and prerequisites. Every sentence earns its place by adding distinct value without redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (8 parameters, no output schema, no annotations), the description is reasonably complete for a comparative analysis tool. It covers purpose, usage context, output format, and prerequisites, though it could benefit from more behavioral details or examples to fully compensate for the lack of annotations and output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema by implying the tool's purpose involves date ranges and dimensions, but does not provide additional syntax, format details, or usage examples for parameters beyond what the schema specifies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('compare two arbitrary date ranges side-by-side') and resources ('across any dimension'), distinguishing it from siblings like 'query_gsc_data' or 'get_insights' by focusing on comparative analysis rather than raw data retrieval or general insights.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the tool ('before/after analysis, seasonal comparisons, or measuring the impact of changes') and includes a prerequisite ('Requires synced data — run setup first if needed'). However, it does not explicitly state when not to use it or name specific alternatives among the sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
