
SEO Analytics MCP

by jafforgehq

analytics_generate_action_items

Generate prioritized SEO and content action items by analyzing merged Google Search Console and Analytics data to identify optimization opportunities.

Instructions

Generate prioritized SEO and content action items from merged data.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| site_url | No | | None |
| property_id | No | | None |
| start_date | No | | None |
| end_date | No | | None |
| include_previous_period | No | | True |
| max_rows | No | | 50000 |
| max_items | No | | None |
| priorities | No | | None |

Output Schema


No arguments

Implementation Reference

  • Main MCP tool handler function that generates prioritized SEO and content action items. It fetches merged page data from GSC and GA4, calls generate_action_items() to analyze pages, filters by optional priority levels, and returns a structured summary with counts and the prioritized action items list.
    @mcp.tool()
    def analytics_generate_action_items(
        site_url: str | None = None,
        property_id: str | None = None,
        start_date: str | None = None,
        end_date: str | None = None,
        include_previous_period: bool = True,
        max_rows: int = 50000,
        max_items: int | None = None,
        priorities: list[str] | None = None,
    ) -> dict[str, Any]:
        """Generate prioritized SEO and content action items from merged data."""
        settings = _get_settings()
        data = _fetch_page_data(
            site_url,
            property_id,
            start_date,
            end_date,
            include_previous_period=include_previous_period,
            max_rows=max_rows,
        )
    
        items = generate_action_items(data["merged_pages"], settings, max_items=max_items)
    
        if priorities:
            allowed = {p.lower().strip() for p in priorities}
            items = [i for i in items if str(i.get("priority", "")).lower() in allowed]
    
        priority_counts = Counter(item["priority"] for item in items)
        category_counts = Counter(item["category"] for item in items)
    
        return {
            "ranges": data["ranges"],
            "site_url": data["site_url"],
            "property_id": data["property_id"],
            "summary": {
                "total_items": len(items),
                "priority_counts": dict(priority_counts),
                "category_counts": dict(category_counts),
                "portfolio": summarize_portfolio(data["merged_pages"]),
            },
            "items": items,
        }
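The handler's optional `priorities` filter normalizes values before matching. A minimal standalone sketch of that behavior, with illustrative item dicts (not real tool output):

```python
# Standalone sketch of the handler's priorities filter: allowed values are
# lowercased and stripped, item priorities are lowercased, so "High" and
# " low " both match "high" and "low".
def filter_by_priority(items, priorities):
    if not priorities:
        return items
    allowed = {p.lower().strip() for p in priorities}
    return [i for i in items if str(i.get("priority", "")).lower() in allowed]

items = [
    {"url": "/a", "priority": "high"},
    {"url": "/b", "priority": "medium"},
    {"url": "/c", "priority": "low"},
]
print(filter_by_priority(items, ["High", " low "]))
```

Passing `None` (or an empty list) returns the items unchanged, mirroring the `if priorities:` guard in the handler.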
  • Core analysis function that iterates through merged page metrics, scores each page using score_page(), builds comprehensive action item objects with scores, priorities, categories, reasons, recommendations, and supporting evidence, then sorts by score/confidence and returns the top N items.
    def generate_action_items(
        merged_pages: list[dict[str, Any]],
        settings: Settings,
        *,
        max_items: int | None = None,
    ) -> list[dict[str, Any]]:
        limit = max_items if max_items is not None else settings.default_max_action_items
        items: list[dict[str, Any]] = []
    
        for page in merged_pages:
            result = score_page(page, settings)
            if result.score <= 0:
                continue
    
            item = {
                "url": page["url"],
                "score": result.score,
                "priority": result.priority,
                "category": result.categories[0] if result.categories else "opportunity",
                "categories": result.categories,
                "expected_impact": result.expected_impact,
                "effort": result.effort,
                "confidence": result.confidence,
                "reasons": result.reasons,
                "recommended_actions": result.recommendations,
                "evidence": {
                    "gsc_impressions": round(float(page.get("gsc_impressions", 0.0)), 2),
                    "gsc_clicks": round(float(page.get("gsc_clicks", 0.0)), 2),
                    "gsc_ctr": round(float(page.get("gsc_ctr", 0.0)), 4),
                    "gsc_position": round(float(page.get("gsc_position", 0.0)), 2),
                    "ga4_sessions": round(float(page.get("ga4_sessions", 0.0)), 2),
                    "ga4_engagement_rate": round(
                        float(page.get("ga4_engagement_rate", 0.0)), 4
                    ),
                    "ga4_conversion_rate": round(
                        float(page.get("ga4_conversion_rate", 0.0)), 4
                    ),
                    "gsc_clicks_delta_pct": page.get("gsc_clicks_delta_pct"),
                    "ga4_sessions_delta_pct": page.get("ga4_sessions_delta_pct"),
                },
            }
            items.append(item)
    
        items.sort(key=lambda i: (i["score"], i["confidence"]), reverse=True)
        return items[: max(1, limit)]
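The sort at the end of `generate_action_items` orders by `(score, confidence)` descending, so confidence acts as a tie-breaker between equally scored pages. A small illustrative check (data made up for the example):

```python
# Illustrative data only: the two 62.5-score items are ordered by confidence.
items = [
    {"url": "/a", "score": 40.0, "confidence": 0.55},
    {"url": "/b", "score": 62.5, "confidence": 0.75},
    {"url": "/c", "score": 62.5, "confidence": 0.90},
]
items.sort(key=lambda i: (i["score"], i["confidence"]), reverse=True)
print([i["url"] for i in items])  # → ['/c', '/b', '/a']
```

The trailing `items[: max(1, limit)]` then truncates to the configured maximum while guaranteeing at least one item is returned whenever any page scored above zero.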
  • Page scoring engine that analyzes GSC (impressions, clicks, CTR, position) and GA4 (sessions, conversion rate, engagement) metrics along with period deltas. Identifies CTR optimization, conversion optimization, content refresh, and scale opportunities. Calculates composite scores, priority levels (high/medium/low), expected impact, effort estimates, and confidence levels.
    def score_page(page: dict[str, Any], settings: Settings) -> ScoreResult:
        score = 0.0
        categories: list[str] = []
        reasons: list[str] = []
        recommendations: list[str] = []
    
        impressions = float(page.get("gsc_impressions", 0.0))
        clicks = float(page.get("gsc_clicks", 0.0))
        ctr = float(page.get("gsc_ctr", 0.0))
        position = float(page.get("gsc_position", 0.0))
    
        sessions = float(page.get("ga4_sessions", 0.0))
        conversion_rate = float(page.get("ga4_conversion_rate", 0.0))
        engagement_rate = float(page.get("ga4_engagement_rate", 0.0))
    
        clicks_delta = page.get("gsc_clicks_delta_pct")
        sessions_delta = page.get("ga4_sessions_delta_pct")
    
        if impressions >= settings.min_impressions_for_ctr_action and ctr < settings.target_ctr:
            ctr_gap = (settings.target_ctr - ctr) / max(settings.target_ctr, 1e-6)
            ctr_score = min(45.0, ctr_gap * 45.0 + _log_scale(impressions, 6.0))
            score += ctr_score
            categories.append("ctr_optimization")
            reasons.append(
                "High impressions with below-target CTR indicate snippet/title opportunity."
            )
            recommendations.extend(
                [
                    "Rewrite title and meta description to better match dominant queries.",
                    "Test stronger value proposition in the first 60 title characters.",
                ]
            )
    
        if sessions >= settings.min_sessions_for_conversion_action and conversion_rate < settings.target_conversion_rate:
            cr_gap = (settings.target_conversion_rate - conversion_rate) / max(
                settings.target_conversion_rate,
                1e-6,
            )
            cr_score = min(45.0, cr_gap * 42.0 + _log_scale(sessions, 6.0))
            score += cr_score
            categories.append("conversion_optimization")
            reasons.append(
                "Strong traffic with weak conversion efficiency suggests on-page UX/content friction."
            )
            recommendations.extend(
                [
                    "Strengthen above-the-fold CTA and internal next-step links.",
                    "Add trust proof and tighten informational-to-commercial transition sections.",
                ]
            )
    
        if isinstance(clicks_delta, (int, float)) and clicks_delta <= -0.2:
            drop_score = min(30.0, abs(clicks_delta) * 60.0)
            score += drop_score
            categories.append("content_refresh")
            reasons.append("Organic clicks are declining versus the previous period.")
            recommendations.append(
                "Refresh outdated sections and compare SERP competitors for intent drift."
            )
    
        if isinstance(sessions_delta, (int, float)) and sessions_delta <= -0.2:
            drop_score = min(30.0, abs(sessions_delta) * 55.0)
            score += drop_score
            categories.append("content_refresh")
            reasons.append("On-site sessions are declining versus the previous period.")
            recommendations.append(
                "Audit UX changes, page speed, and content relevance for recent traffic loss."
            )
    
        if (
            impressions >= settings.min_impressions_for_ctr_action
            and sessions >= settings.min_sessions_for_conversion_action
            and ctr >= settings.target_ctr
            and conversion_rate >= settings.target_conversion_rate
        ):
            scale_score = min(35.0, _log_scale(clicks + sessions, 7.0) + 10.0)
            score += scale_score
            categories.append("scale_winner")
            reasons.append("Page performs well in both acquisition and conversion.")
            recommendations.extend(
                [
                    "Expand topic cluster around this page's highest-performing query themes.",
                    "Promote this page via internal links from adjacent intent pages.",
                ]
            )
    
        if position > 8 and impressions > 0:
            score += min(12.0, (position - 8) * 1.5)
            reasons.append("Average position indicates page may be near page-one threshold.")
    
        if not categories and score <= 0:
            return ScoreResult(
                score=0.0,
                categories=[],
                reasons=[],
                recommendations=[],
                priority="low",
                expected_impact="low",
                effort="low",
                confidence=0.0,
            )
    
        unique_categories = sorted(set(categories))
        unique_reasons = list(dict.fromkeys(reasons))
        unique_recommendations = list(dict.fromkeys(recommendations))
    
        sources = 0
        if impressions > 0:
            sources += 1
        if sessions > 0:
            sources += 1
    
        volume_factor = 0.0
        if impressions > 0:
            volume_factor += min(0.25, math.log10(impressions + 1) / 10)
        if sessions > 0:
            volume_factor += min(0.25, math.log10(sessions + 1) / 10)
    
        confidence = min(1.0, 0.35 + 0.2 * sources + volume_factor)
    
        return ScoreResult(
            score=round(score, 2),
            categories=unique_categories,
            reasons=unique_reasons,
            recommendations=unique_recommendations,
            priority=_priority_from_score(score),
            expected_impact=_expected_impact(score),
            effort=_effort(unique_categories),
            confidence=round(confidence, 2),
        )
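The confidence computation at the end of `score_page` can be isolated into a short sketch: a 0.35 base, plus 0.2 per contributing data source (GSC impressions, GA4 sessions), plus a log-scaled volume bonus capped at 0.25 per source, clamped to 1.0 overall. The helper name below is ours, not the server's:

```python
import math

# Sketch of score_page's confidence formula, extracted for illustration.
def confidence(impressions: float, sessions: float) -> float:
    sources = (impressions > 0) + (sessions > 0)
    volume = 0.0
    if impressions > 0:
        volume += min(0.25, math.log10(impressions + 1) / 10)
    if sessions > 0:
        volume += min(0.25, math.log10(sessions + 1) / 10)
    return round(min(1.0, 0.35 + 0.2 * sources + volume), 2)

print(confidence(100, 0))   # one source, modest volume → 0.75
print(confidence(10_000, 500))  # two high-volume sources saturate at 1.0
```

Note how quickly the value saturates: any page with two active sources and moderate traffic reaches the 1.0 ceiling, so confidence mainly distinguishes thin or single-source pages.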
  • Tool name registered in the capabilities() function list of available tools, which exposes it to MCP clients as a callable tool.
    "analytics_generate_action_items",
  • Settings dataclass defining configuration parameters used by the action item generator, including min_impressions_for_ctr_action, min_sessions_for_conversion_action, target_ctr, target_conversion_rate, and default_max_action_items thresholds that control when action items are generated.
    @dataclass(frozen=True)
    class Settings:
        enable_gsc: bool
        enable_ga4: bool
        require_explicit_gsc_site_url: bool
        default_gsc_site_url: str | None
        default_ga4_property_id: str | None
        default_lookback_days: int
        canonical_base_url: str | None
    
        min_impressions_for_ctr_action: int
        min_sessions_for_conversion_action: int
        target_ctr: float
        target_conversion_rate: float
        default_max_action_items: int
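To make the thresholds concrete, here is a trimmed, hypothetical stand-in for `Settings` that keeps only the scoring-related fields; the numeric values are illustrative assumptions, not the server's real defaults:

```python
from dataclasses import dataclass

# Hypothetical, trimmed stand-in for the Settings dataclass above.
# Field names match the source; the default values are made up.
@dataclass(frozen=True)
class ScoringSettings:
    min_impressions_for_ctr_action: int = 1000
    min_sessions_for_conversion_action: int = 100
    target_ctr: float = 0.03
    target_conversion_rate: float = 0.02
    default_max_action_items: int = 25

settings = ScoringSettings()
# A page with 2,000 impressions and a 1% CTR would clear the CTR-action gate:
print(
    2000 >= settings.min_impressions_for_ctr_action
    and 0.01 < settings.target_ctr
)
```

Because the dataclass is frozen, thresholds are fixed at construction time, which keeps a single scoring run internally consistent.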
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'prioritized' action items but doesn't explain how prioritization works, what format the output takes, whether this is a read-only or write operation, performance characteristics, or any limitations. The description is too brief to provide meaningful behavioral context for an 8-parameter tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise - a single sentence that gets straight to the point with no wasted words. It's front-loaded with the core functionality. While it may be too brief for completeness, it's structurally efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 8 parameters with 0% schema description coverage and no annotations, the description is insufficiently complete. While an output schema exists (which helps with return values), the description doesn't address the purpose of parameters, behavioral characteristics, or usage context needed for a tool of this complexity. The single sentence description leaves too many gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, meaning none of the 8 parameters have descriptions in the schema. The tool description provides no information about any parameters - it doesn't mention site_url, property_id, date ranges, priorities, or any other inputs. This leaves all parameters completely undocumented, which is inadequate for a tool with this many inputs.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states that the tool will 'Generate prioritized SEO and content action items from merged data', which provides a clear verb ('Generate') and resource ('action items'), but it is vague about what 'merged data' refers to and doesn't differentiate this tool from sibling tools like 'analytics_query_page_opportunities' or 'analytics_topic_clusters' that might also generate insights or recommendations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance is provided on when to use this tool versus alternatives. The description mentions 'merged data' but doesn't clarify what data sources are merged or prerequisites for using this tool. There's no mention of when-not-to-use scenarios or comparisons with sibling tools that might handle similar tasks.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
