
GA4 MCP Server

by Shin-sibainu

get_landing_pages

Retrieve landing page analysis from Google Analytics 4 to identify entry points and understand initial user interactions.

Instructions

Retrieves analysis results for landing pages (the pages users first accessed).

Input Schema

Name | Required | Description | Default
---- | -------- | ----------- | -------
propertyId | No | GA4 property ID | —
period | Yes | Aggregation period | —
limit | No | Number of results to return | 10

Implementation Reference

  • The core handler function that implements the get_landing_pages tool logic: it queries a GA4 report for landing pages, processes the returned dimensions and metrics, and formats each row into a LandingPage object.
    export async function getLandingPages(
      input: GetLandingPagesInput
    ): Promise<GetLandingPagesOutput> {
      const propertyId = getPropertyId(input.propertyId);
      const property = formatPropertyPath(propertyId);
      const dateRange = periodToDateRange(input.period);
      const limit = input.limit || 10;
    
      const response = await executeReport({
        property,
        dateRanges: [dateRange],
        dimensions: [{ name: "landingPage" }],
        metrics: [
          { name: "sessions" },
          { name: "totalUsers" },
          { name: "bounceRate" },
          { name: "averageSessionDuration" },
        ],
        orderBys: [{ metric: { metricName: "sessions" }, desc: true }],
        limit,
      });
    
      const landingPages: LandingPage[] = [];
    
      for (const row of response.rows || []) {
        const dimensionValues = row.dimensionValues || [];
        const metricValues = row.metricValues || [];
    
        const getValue = (index: number): number => {
          const value = metricValues[index]?.value;
          return value ? parseFloat(value) : 0;
        };
    
        landingPages.push({
          pagePath: dimensionValues[0]?.value || "",
          entrances: Math.round(getValue(0)),
          users: Math.round(getValue(1)),
          bounceRate: formatPercentageFromDecimal(getValue(2)),
          avgSessionDuration: formatDuration(getValue(3)),
        });
      }
    
      return { landingPages };
    }
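The handler above delegates date handling to a periodToDateRange helper that is not shown. A plausible sketch of it, assuming the GA4 Data API's relative date strings ("NdaysAgo", "today") and the ShortPeriod union from the type definitions, might look like this (the exact implementation in the repository may differ):

```typescript
// Hypothetical sketch of the periodToDateRange helper the handler relies on.
type ShortPeriod = "7days" | "28days" | "30days";

interface DateRange {
  startDate: string; // GA4 accepts relative dates such as "7daysAgo"
  endDate: string;
}

function periodToDateRange(period: ShortPeriod): DateRange {
  // Map each supported period to a day count, then build a relative range.
  const days: Record<ShortPeriod, number> = {
    "7days": 7,
    "28days": 28,
    "30days": 30,
  };
  return { startDate: `${days[period]}daysAgo`, endDate: "today" };
}

console.log(periodToDateRange("28days"));
// returns { startDate: "28daysAgo", endDate: "today" }
```

Keeping the mapping in a single Record makes the period enum in the input schema and the date logic easy to keep in sync.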
  • TypeScript type definitions for GetLandingPagesInput, LandingPage interface, and GetLandingPagesOutput.
    // get_landing_pages
    export interface GetLandingPagesInput extends PropertyId {
      period: ShortPeriod;
      limit?: number;
    }
    
    export interface LandingPage {
      pagePath: string;
      entrances: number;
      users: number;
      bounceRate: string;
      avgSessionDuration: string;
    }
    
    export interface GetLandingPagesOutput {
      landingPages: LandingPage[];
    }
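The LandingPage interface stores bounceRate and avgSessionDuration as pre-formatted strings, produced by the formatPercentageFromDecimal and formatDuration helpers referenced in the handler. Hypothetical sketches of those helpers, assuming GA4's conventions (bounceRate as a decimal fraction, session duration in seconds), could be:

```typescript
// Hypothetical sketch: GA4 reports bounceRate as a decimal fraction.
function formatPercentageFromDecimal(value: number): string {
  return `${(value * 100).toFixed(1)}%`;
}

// Hypothetical sketch: GA4 reports averageSessionDuration in seconds.
function formatDuration(seconds: number): string {
  const m = Math.floor(seconds / 60);
  const s = Math.round(seconds % 60);
  return `${m}m ${s}s`;
}

console.log(formatPercentageFromDecimal(0.4237)); // returns "42.4%"
console.log(formatDuration(95)); // returns "1m 35s"
```

Formatting at the boundary like this keeps the MCP tool's output human-readable, at the cost of making the values harder for an agent to compute with downstream.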
  • src/server.ts:357-377 (registration)
    MCP tool registration entry defining name 'get_landing_pages', description, and inputSchema for validation.
    {
      name: "get_landing_pages",
      description:
        "ランディングページ(ユーザーが最初にアクセスしたページ)の分析結果を取得します。",
      inputSchema: {
        type: "object" as const,
        properties: {
          propertyId: { type: "string", description: "GA4プロパティID" },
          period: {
            type: "string",
            enum: ["7days", "28days", "30days"],
            description: "集計期間",
          },
          limit: {
            type: "number",
            description: "取得件数(デフォルト: 10)",
          },
        },
        required: ["period"],
      },
    },
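The inputSchema above is what the MCP host uses to validate tool calls before they reach the switch-case router. A hand-rolled sketch of the checks that schema implies (validateArgs is a hypothetical name, not part of the server) shows what a failing call would look like:

```typescript
// Hand-rolled validation mirroring the inputSchema: period is required and
// enum-constrained; propertyId and limit are optional with fixed types.
const PERIODS = ["7days", "28days", "30days"] as const;

function validateArgs(args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  if (
    typeof args.period !== "string" ||
    !(PERIODS as readonly string[]).includes(args.period)
  ) {
    errors.push('"period" is required and must be one of 7days/28days/30days');
  }
  if (args.propertyId !== undefined && typeof args.propertyId !== "string") {
    errors.push('"propertyId" must be a string');
  }
  if (args.limit !== undefined && typeof args.limit !== "number") {
    errors.push('"limit" must be a number');
  }
  return errors;
}

console.log(validateArgs({ period: "7days", limit: 5 })); // returns []
console.log(validateArgs({ limit: 5 }).length); // returns 1
```

In practice the MCP host performs this validation from the declared JSON Schema; the sketch only makes the constraints explicit.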
  • src/server.ts:688-693 (registration)
    Switch case in tool call handler that routes 'get_landing_pages' calls to the getLandingPages function with argument parsing.
    case "get_landing_pages":
      return await getLandingPages({
        propertyId: args.propertyId as string | undefined,
        period: args.period as "7days" | "28days" | "30days",
        limit: args.limit as number | undefined,
      });
  • src/server.ts:26-26 (registration)
    Import of getLandingPages handler from analytics tools index.
    getLandingPages,
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'get analysis results' but doesn't specify what the analysis includes (e.g., metrics like bounce rate, sessions), whether it's read-only, if there are rate limits, or authentication needs. For a tool with no annotations, this is a significant gap, warranting a score of 2.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence in Japanese that states the tool's purpose directly, with no wasted words. It is appropriately sized and front-loaded, making it easy to parse quickly. It could fold in a few more key details, but it is concise enough to earn a 4.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (retrieving analytics data), the absence of annotations, and the lack of an output schema, the description is incomplete. It does not explain the return values or behavioral aspects such as data format or limitations. It states the purpose but lacks depth for a tool in this context, resulting in a score of 3.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with clear documentation for each parameter (propertyId, period, limit). The description adds no additional meaning beyond the schema, such as explaining how 'landing pages' relate to the parameters. Given the high schema coverage, the baseline is 3, as the description doesn't compensate but also doesn't detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'get analysis results for landing pages (pages users first accessed).' It specifies the verb 'get' and resource 'landing pages analysis results,' making it understandable. However, it doesn't explicitly differentiate from sibling tools like 'get_top_pages' or 'get_exit_pages,' which might also retrieve page-related data, so it doesn't reach a score of 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention sibling tools or contexts where this tool is preferred, such as analyzing initial user interactions rather than overall page traffic. Without any usage guidance, it falls to a score of 2.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
