google_ads_ad_performance_compare

Compare all ads in a single Google Ads ad group to assign WINNER/LOSER/INSUFFICIENT_DATA verdicts. Get scores, rankings, and recommendations from metrics like impressions, clicks, conversions, and cost.

Instructions

Rank ENABLED ads within a single Google Ads ad group and assign WINNER / LOSER / INSUFFICIENT_DATA verdicts. Returns {ad_group_id, period, ads:[{ad_id, impressions, clicks, conversions, cost, ctr, cvr, cpa, score (ctr*cvr, or ctr when conversions=0), rank, verdict, headlines?, descriptions?}], winner, recommendation, insights:[strings]}. Ads with impressions < 100 are flagged INSUFFICIENT_DATA; all ads tied at the top score receive WINNER, the rest LOSER. Read-only — does not pause or rotate ads. For cross-ad-group per-ad reporting use google_ads_ad_performance_report; for RSA asset-level splits use google_ads_rsa_assets_analyze.
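
For orientation, a hypothetical two-ad result is sketched below as the Python dict the server would serialize. Every ID and metric is invented; only the shape follows the description above.

    # Illustrative only: all IDs and metrics are invented, not real account data.
    winner_entry = {
        "ad_id": "111", "impressions": 5400, "clicks": 270, "conversions": 27.0,
        "cost": 540.0, "ctr": 0.05, "cvr": 0.1, "cpa": 20.0,
        "score": 0.005, "rank": 1, "verdict": "WINNER",
    }
    example_result = {
        "ad_group_id": "145680123456",
        "period": "LAST_30_DAYS",
        "ads": [
            winner_entry,
            {"ad_id": "222", "impressions": 4800, "clicks": 96, "conversions": 5.0,
             "cost": 210.0, "ctr": 0.02, "cvr": 0.0521, "cpa": 42.0,
             "score": 0.001042, "rank": 2, "verdict": "LOSER"},
        ],
        "winner": winner_entry,  # full dict of the rank-1 WINNER, or None
        "recommendation": "Ad 111 has the best performance. "
                          "We recommend pausing LOSER ads and testing new variations",
        "insights": ["WINNER (ad 111) has a CTR of 5.00%, outperforming 1 other ads"],
    }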

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| customer_id | No | Google Ads customer ID as a 10-digit string without dashes (e.g. '1234567890'). Optional — falls back to GOOGLE_ADS_CUSTOMER_ID / GOOGLE_ADS_LOGIN_CUSTOMER_ID from the configured credentials when omitted. | (none) |
| ad_group_id | Yes | Ad group ID as a numeric string (e.g. '145680123456'). Comparison is always scoped to one ad group so the ads share targeting. Obtain via google_ads_ad_groups_list. | (none) |
| period | No | Reporting window for the metrics. Use a shorter window (LAST_7_DAYS / LAST_14_DAYS) when diagnosing recent changes; use LAST_90_DAYS for trend baselines. | LAST_30_DAYS |
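
A minimal call needs only ad_group_id. A hypothetical arguments payload (both IDs invented) might look like this:

    # Hypothetical tools/call arguments; both IDs are invented.
    args = {
        "ad_group_id": "145680123456",  # required; from google_ads_ad_groups_list
        "customer_id": "1234567890",    # optional; falls back to configured credentials
        "period": "LAST_7_DAYS",        # optional; defaults to LAST_30_DAYS
    }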

Implementation Reference

  • The async handler function handle_ad_performance_compare that extracts args and calls client.compare_ad_performance().
    @api_error_handler
    async def handle_ad_performance_compare(
        args: dict[str, Any],
    ) -> list[TextContent]:
        # Resolve a Google Ads client from args or configured credentials.
        client = _get_client(args)
        if client is None:
            return _no_google_creds()
        result = await client.compare_ad_performance(
            ad_group_id=_require(args, "ad_group_id"),
            period=_opt(args, "period", "LAST_30_DAYS"),
        )
        return _json_result(result)
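  • The helpers used above (_require, _opt, _json_result) are not shown on this page. The sketch below is a plausible minimal reading of their call sites, not the actual source.
    import json
    from typing import Any

    from mcp.types import TextContent

    def _require(args: dict[str, Any], key: str) -> Any:
        # Fail fast when a required argument is missing.
        if key not in args:
            raise ValueError(f"Missing required argument: {key}")
        return args[key]

    def _opt(args: dict[str, Any], key: str, default: Any) -> Any:
        # Optional argument with a fallback default.
        return args.get(key, default)

    def _json_result(result: dict[str, Any]) -> list[TextContent]:
        # Serialize the result dict into a single JSON text block.
        return [TextContent(type="text", text=json.dumps(result, ensure_ascii=False))]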
  • Tool schema/registration definition with name, description, and inputSchema requiring ad_group_id, with optional customer_id and period.
    Tool(
        name="google_ads_ad_performance_compare",
        description=(
            "Rank ENABLED ads within a single Google Ads ad group and "
            "assign WINNER / LOSER / INSUFFICIENT_DATA verdicts. Returns "
            "{ad_group_id, period, ads:[{ad_id, impressions, clicks, "
            "conversions, cost, ctr, cvr, cpa, score (ctr*cvr, or ctr "
            "when conversions=0), rank, verdict, headlines?, "
            "descriptions?}], winner, recommendation, "
            "insights:[strings]}. Ads with impressions < 100 are flagged "
            "INSUFFICIENT_DATA; all ads tied at the top score receive "
            "WINNER, the rest LOSER. Read-only — does not pause or "
            "rotate ads. For cross-ad-group per-ad reporting use "
            "google_ads_ad_performance_report; for RSA asset-level "
            "splits use google_ads_rsa_assets_analyze."
        ),
        inputSchema={
            "type": "object",
            "properties": {
                "customer_id": _CUSTOMER_ID_PARAM,
                "ad_group_id": {
                    "type": "string",
                    "description": (
                        "Ad group ID as a numeric string "
                        "(e.g. '145680123456'). Required — comparison "
                        "is always scoped to one ad group so the ads "
                        "share targeting. Obtain via "
                        "google_ads_ad_groups_list."
                    ),
                },
                "period": _PERIOD_PARAM,
            },
            "required": ["ad_group_id"],
        },
    )
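  • _CUSTOMER_ID_PARAM and _PERIOD_PARAM are shared schema fragments that are referenced but not shown. Given the input-schema table above, plausible definitions would be:
    # Plausible definitions inferred from the input-schema table; not the actual source.
    _CUSTOMER_ID_PARAM = {
        "type": "string",
        "description": (
            "Google Ads customer ID as a 10-digit string without dashes "
            "(e.g. '1234567890'). Optional; falls back to "
            "GOOGLE_ADS_CUSTOMER_ID / GOOGLE_ADS_LOGIN_CUSTOMER_ID."
        ),
    }
    _PERIOD_PARAM = {
        "type": "string",
        "description": "Reporting window for the metrics. Default 'LAST_30_DAYS'.",
    }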
  • Registration mapping the tool name 'google_ads_ad_performance_compare' to the handler handle_ad_performance_compare in the HANDLERS_ANALYSIS dict.
    "google_ads_ad_performance_compare": handle_ad_performance_compare,
  • Core business logic in compare_ad_performance() — fetches ad performance report, filters ENABLED ads, calculates scores (ctr*cvr), assigns WINNER/LOSER/INSUFFICIENT_DATA verdicts, and returns ranked results with recommendation and insights.
    async def compare_ad_performance(
        self,
        ad_group_id: str,
        period: str = "LAST_30_DAYS",
    ) -> dict[str, Any]:
        """Compare ad performance within an ad group."""
        self._validate_id(ad_group_id, "ad_group_id")
    
        ad_perf = await self.get_ad_performance_report(
            ad_group_id=ad_group_id, period=period
        )
    
        # Only ENABLED ads
        enabled_ads = [a for a in ad_perf if a.get("status") == "ENABLED"]
    
        ads_data: list[dict[str, Any]] = []
        for a in enabled_ads:
            m = a.get("metrics", {})
            impressions = int(m.get("impressions", 0))
            clicks = int(m.get("clicks", 0))
            conversions = float(m.get("conversions", 0))
            cost = float(m.get("cost", 0))
    
            ctr = clicks / impressions if impressions > 0 else 0.0
            cvr = conversions / clicks if clicks > 0 else 0.0
            cpa = cost / conversions if conversions > 0 else None
    
            # Score: ctr*cvr if CV exists, otherwise ctr only
            score = ctr * cvr if conversions > 0 else ctr
    
            entry: dict[str, Any] = {
                "ad_id": a.get("ad_id", ""),
                "impressions": impressions,
                "clicks": clicks,
                "conversions": conversions,
                "cost": cost,
                "ctr": round(ctr, 4),
                "cvr": round(cvr, 4),
                "cpa": round(cpa, 0) if cpa is not None else None,
                "score": round(score, 6),
            }
            # Include RSA information if available
            if "headlines" in a:
                entry["headlines"] = a["headlines"]
            if "descriptions" in a:
                entry["descriptions"] = a["descriptions"]
            ads_data.append(entry)
    
        # Sort by score and assign rank/verdict
        sorted_ads = sorted(ads_data, key=lambda x: x["score"], reverse=True)
        best_score = sorted_ads[0]["score"] if sorted_ads else 0.0
        ranked_ads: list[dict[str, Any]] = []
        for rank, ad in enumerate(sorted_ads, start=1):
            if ad["impressions"] < 100:
                verdict = "INSUFFICIENT_DATA"
            elif ad["score"] == best_score:
                verdict = "WINNER"
            else:
                verdict = "LOSER"
            ranked_ads.append({**ad, "rank": rank, "verdict": verdict})
    
        winner = next((a for a in ranked_ads if a.get("verdict") == "WINNER"), None)
    
        # Recommended action
        if len(ads_data) < 2:
            recommendation = (
                "Not enough ads for comparison. " "Please add more ads for A/B testing"
            )
        elif winner:
            recommendation = (
                f"Ad {winner['ad_id']} has the best performance. "
                "We recommend pausing LOSER ads and testing new variations"
            )
        else:
            recommendation = (
                "Continue testing until sufficient data has been accumulated"
            )
    
        # Insights
        insights: list[str] = []
        insufficient = [
            a for a in ranked_ads if a.get("verdict") == "INSUFFICIENT_DATA"
        ]
        if insufficient:
            insights.append(
                f"{len(insufficient)} ads have insufficient data "
                "(less than 100 impressions)"
            )
        losers = [a for a in ranked_ads if a.get("verdict") == "LOSER"]
        if losers and winner:
            insights.append(
                f"WINNER (ad {winner['ad_id']}) has a CTR of {winner['ctr']:.2%}, "
                f"outperforming {len(losers)} other ads"
            )
    
        return {
            "ad_group_id": ad_group_id,
            "period": period,
            "ads": ranked_ads,
            "winner": winner,
            "recommendation": recommendation,
            "insights": insights,
        }
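  • To make the scoring rule concrete, the snippet below replays the score/verdict math on invented metrics without calling the real client. Note the asymmetry it exposes: a zero-conversion ad scored on raw CTR can outrank converting ads scored on ctr*cvr, because the two scales differ.
    # Invented metrics; demonstrates score = ctr*cvr, or ctr when conversions == 0.
    ads = [
        {"ad_id": "111", "impressions": 5400, "clicks": 270, "conversions": 27.0},
        {"ad_id": "222", "impressions": 4800, "clicks": 96, "conversions": 0.0},
        {"ad_id": "333", "impressions": 80, "clicks": 6, "conversions": 1.0},
    ]
    for a in ads:
        ctr = a["clicks"] / a["impressions"]
        cvr = a["conversions"] / a["clicks"] if a["clicks"] else 0.0
        a["score"] = ctr * cvr if a["conversions"] > 0 else ctr

    ranked = sorted(ads, key=lambda x: x["score"], reverse=True)
    best = ranked[0]["score"]
    for rank, a in enumerate(ranked, start=1):
        if a["impressions"] < 100:
            verdict = "INSUFFICIENT_DATA"   # ad 333: only 80 impressions
        elif a["score"] == best:
            verdict = "WINNER"              # ad 222 wins on raw CTR (0.02)
        else:
            verdict = "LOSER"               # ad 111: ctr*cvr = 0.005
        print(rank, a["ad_id"], round(a["score"], 6), verdict)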

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Details the output structure, the INSUFFICIENT_DATA threshold (impressions < 100), tie-breaking logic, and the scoring formula. Explicitly says read-only. Lacks error-handling details but is comprehensive for a read-only tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single paragraph with front-loaded purpose, followed by output structure, conditions, read-only note, and sibling references. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Describes input, output, and behavior in detail. No output schema, but the description provides the return structure. Sibling tools are referenced. Could be more complete on error scenarios, but sufficient for most use cases.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema descriptions already cover parameters well (100% coverage). Description adds value by explaining period usage context (shorter windows for recent changes, LAST_90_DAYS for trends) and default period.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states it ranks ENABLED ads within a single ad group and assigns verdicts. It clearly distinguishes from siblings like google_ads_ad_performance_report and google_ads_rsa_assets_analyze.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use (rank ads in one ad group) and when-not-to-use (cross-ad-group or RSA assets) by naming alternative tools. Also states it is read-only, indicating no destructive actions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
