
Google Ads MCP Server

by johnoconnor0

google_ads_performance_insights

Analyze Google Ads campaigns, ad groups, keywords, or ads to identify performance issues such as low CTR, low conversion rates, or low impression share, and receive AI-driven recommendations for improvement.

Instructions

Generate AI-powered performance insights for campaigns, ad groups, keywords, or ads.

Analyzes performance metrics and provides actionable recommendations for:

  • Low CTR (below industry benchmarks)

  • Low conversion rates

  • Low impression share

  • Low quality scores

  • High performers worthy of increased budget

Args:

  • customer_id: Google Ads customer ID (10 digits, no hyphens)

  • entity_type: Entity to analyze - CAMPAIGN, AD_GROUP, KEYWORD, or AD

  • entity_id: Optional specific entity ID (if not provided, analyzes all)

  • date_range: Date range (LAST_7_DAYS, LAST_30_DAYS, LAST_90_DAYS, THIS_MONTH, LAST_MONTH)

Returns: Performance insights with AI-generated recommendations

Example:

    google_ads_performance_insights(
        customer_id="1234567890",
        entity_type="CAMPAIGN",
        date_range="LAST_30_DAYS"
    )
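The argument constraints above can also be checked client-side before invoking the tool. The helper below is a hypothetical sketch (its name and structure are not part of the server); the valid values are taken from the parameter list above:

```python
import re

VALID_ENTITY_TYPES = {"CAMPAIGN", "AD_GROUP", "KEYWORD", "AD"}
VALID_DATE_RANGES = {
    "LAST_7_DAYS", "LAST_30_DAYS", "LAST_90_DAYS", "THIS_MONTH", "LAST_MONTH",
}

def build_insights_args(customer_id, entity_type="CAMPAIGN",
                        entity_id=None, date_range="LAST_30_DAYS"):
    """Validate arguments locally before calling google_ads_performance_insights."""
    if not re.fullmatch(r"\d{10}", customer_id):
        raise ValueError("customer_id must be 10 digits with no hyphens")
    if entity_type not in VALID_ENTITY_TYPES:
        raise ValueError(f"entity_type must be one of {sorted(VALID_ENTITY_TYPES)}")
    if date_range not in VALID_DATE_RANGES:
        raise ValueError(f"date_range must be one of {sorted(VALID_DATE_RANGES)}")
    args = {"customer_id": customer_id, "entity_type": entity_type,
            "date_range": date_range}
    if entity_id is not None:  # optional: omit to analyze all entities
        args["entity_id"] = entity_id
    return args

print(build_insights_args("1234567890", entity_type="CAMPAIGN"))
```

Catching a malformed customer_id (e.g. one with hyphens) before the call avoids a round trip to the server.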

Input Schema

Name         Required  Description  Default
customer_id  Yes
entity_type  No                     CAMPAIGN
entity_id    No
date_range   No                     LAST_30_DAYS

Output Schema

Name    Required  Description  Default
result  Yes

Implementation Reference

  • The register_insights_tools function that registers the google_ads_performance_insights tool via the @mcp.tool() decorator on line 31. This is how the tool gets registered into the MCP server.
    def register_insights_tools(mcp):
        """Register all insights and recommendations MCP tools."""
    
        @mcp.tool()
        def google_ads_performance_insights(
            customer_id: str,
            entity_type: str = "CAMPAIGN",
            entity_id: Optional[str] = None,
            date_range: str = "LAST_30_DAYS"
        ) -> str:
            """Generate AI-powered performance insights for campaigns, ad groups, keywords, or ads.
    
            Analyzes performance metrics and provides actionable recommendations for:
            - Low CTR (below industry benchmarks)
            - Low conversion rates
            - Low impression share
            - Low quality scores
            - High performers worthy of increased budget
    
            Args:
                customer_id: Google Ads customer ID (10 digits, no hyphens)
                entity_type: Entity to analyze - CAMPAIGN, AD_GROUP, KEYWORD, or AD
                entity_id: Optional specific entity ID (if not provided, analyzes all)
                date_range: Date range (LAST_7_DAYS, LAST_30_DAYS, LAST_90_DAYS, THIS_MONTH, LAST_MONTH)
    
            Returns:
                Performance insights with AI-generated recommendations
    
            Example:
                google_ads_performance_insights(
                    customer_id="1234567890",
                    entity_type="CAMPAIGN",
                    date_range="LAST_30_DAYS"
                )
            """
            with performance_logger.track_operation('performance_insights', customer_id=customer_id):
                try:
                    client = get_auth_manager().get_client()
                    insights_manager = InsightsManager(client)
    
                    result = insights_manager.get_performance_insights(
                        customer_id=customer_id,
                        entity_type=entity_type,
                        entity_id=entity_id,
                        date_range=date_range
                    )
    
                    if 'error' in result:
                        return f"❌ {result['error']}"
    
                    audit_logger.log_api_call(
                        customer_id=customer_id,
                        operation='get_performance_insights',
                        entity_type=entity_type,
                        status='success'
                    )
    
                    # Format response
                    output = f"# 🔍 Performance Insights Report\n\n"
                    output += f"**Entity Type**: {result['entity_type']}\n"
                    output += f"**Total Analyzed**: {result['total_analyzed']}\n"
                    output += f"**Insights Found**: {result['insights_count']}\n\n"
    
                    if result['insights_count'] == 0:
                        output += "✅ **All entities are performing within expected ranges!**\n\n"
                        output += "No major issues detected. Continue monitoring performance.\n"
                        return output
    
                    output += "---\n\n"
    
                    # Group insights by severity
                    high_severity = [i for i in result['insights'] if any(
                        insight['severity'] == 'HIGH' for insight in i['insights']
                    )]
                    medium_severity = [i for i in result['insights'] if any(
                        insight['severity'] == 'MEDIUM' for insight in i['insights']
                    ) and i not in high_severity]
                    positive = [i for i in result['insights'] if any(
                        insight['severity'] == 'POSITIVE' for insight in i['insights']
                    )]
    
                    # High priority issues
                    if high_severity:
                        output += "## 🚨 High Priority Issues\n\n"
                        for entity in high_severity[:5]:  # Top 5
                            output += f"### {entity['entity_name']}\n"
                            output += f"**Cost**: ${entity['metrics']['cost']:,.2f} | "
                            output += f"**Conversions**: {entity['metrics']['conversions']}\n\n"
    
                            for insight in entity['insights']:
                                if insight['severity'] == 'HIGH':
                                    output += f"**⚠️ {insight['type'].replace('_', ' ').title()}**\n"
                                    output += f"- {insight['message']}\n"
                                    output += f"- 💡 *{insight['recommendation']}*\n\n"
    
                    # Medium priority issues
                    if medium_severity:
                        output += "## ⚡ Medium Priority Opportunities\n\n"
                        for entity in medium_severity[:3]:  # Top 3
                            output += f"### {entity['entity_name']}\n"
                            for insight in entity['insights']:
                                if insight['severity'] == 'MEDIUM':
                                    output += f"- {insight['message']}\n"
                                    output += f"  💡 *{insight['recommendation']}*\n\n"
    
                    # Positive performers
                    if positive:
                        output += "## ✨ Top Performers\n\n"
                        for entity in positive[:3]:  # Top 3
                            output += f"### {entity['entity_name']}\n"
                            output += f"**Cost**: ${entity['metrics']['cost']:,.2f} | "
                            output += f"**CTR**: {entity['metrics']['ctr']:.2%}\n"
                            for insight in entity['insights']:
                                if insight['severity'] == 'POSITIVE':
                                    output += f"- ✅ {insight['message']}\n"
                                    output += f"  💡 *{insight['recommendation']}*\n\n"
    
                    output += "---\n\n"
                    output += "💡 **Next Steps**: Prioritize high-severity issues first, then explore medium-priority opportunities.\n"
    
                    return output
    
                except Exception as e:
                    error_msg = ErrorHandler.handle_error(e, context="performance_insights")
                    return f"❌ Failed to generate performance insights: {error_msg}"
  • The main handler function 'google_ads_performance_insights' (its body is shown in full in the registration listing above) that receives customer_id, entity_type, entity_id, and date_range parameters, delegates to InsightsManager.get_performance_insights(), and formats the response as a markdown string with insights grouped by severity (high/medium/positive).
  • The InsightsManager.get_performance_insights() method that builds GAQL queries based on entity type (CAMPAIGN, AD_GROUP, KEYWORD, AD), fetches performance metrics from Google Ads API, generates insights on CTR, conversion rate, impression share, and quality score compared to benchmarks, and returns structured insights with severity levels.
    def get_performance_insights(
        self,
        customer_id: str,
        entity_type: str = "CAMPAIGN",
        entity_id: Optional[str] = None,
        date_range: str = "LAST_30_DAYS"
    ) -> Dict[str, Any]:
        """Generate AI-powered performance insights.
    
        Args:
            customer_id: Customer ID (without hyphens)
            entity_type: CAMPAIGN, AD_GROUP, KEYWORD, or AD
            entity_id: Optional specific entity ID
            date_range: Date range for analysis
    
        Returns:
            Performance insights with recommendations
        """
        ga_service = self.client.get_service("GoogleAdsService")
    
        # Build query based on entity type
        entity_map = {
            "CAMPAIGN": "campaign",
            "AD_GROUP": "ad_group",
            "KEYWORD": "ad_group_criterion",
            "AD": "ad_group_ad"
        }
    
        entity = entity_map.get(entity_type.upper(), "campaign")
    
        query = f"""
            SELECT
                {entity}.id,
                {entity}.name,
                metrics.impressions,
                metrics.clicks,
                metrics.ctr,
                metrics.cost_micros,
                metrics.conversions,
                metrics.conversions_value,
                metrics.cost_per_conversion,
                metrics.search_impression_share,
                metrics.quality_score
            FROM {entity}
            WHERE segments.date DURING {date_range}
        """
    
        if entity_id:
            query += f" AND {entity}.id = {entity_id}"
    
        query += " ORDER BY metrics.cost_micros DESC LIMIT 100"
    
        # Materialize the iterator so rows can be both analyzed and counted;
        # a single pass would otherwise exhaust ga_service.search(), leaving
        # the total_analyzed count below at zero.
        response = list(ga_service.search(customer_id=customer_id, query=query))

        insights = []
        for row in response:
            entity_obj = getattr(row, entity)
            metrics = row.metrics
    
            # Calculate performance scores
            ctr_benchmark = 0.02  # 2% industry average
            cvr_benchmark = 0.05  # 5% industry average
    
            ctr = metrics.ctr
            cvr = metrics.conversions / metrics.clicks if metrics.clicks > 0 else 0
    
            # Generate insights
            entity_insights = {
                'entity_id': str(entity_obj.id),
                'entity_name': entity_obj.name if hasattr(entity_obj, 'name') else 'N/A',
                'metrics': {
                    'impressions': metrics.impressions,
                    'clicks': metrics.clicks,
                    'ctr': ctr,
                    'cost': metrics.cost_micros / 1_000_000,
                    'conversions': metrics.conversions,
                    'cost_per_conversion': metrics.cost_per_conversion
                },
                'insights': []
            }
    
            # CTR insights
            if ctr < ctr_benchmark * 0.5:
                entity_insights['insights'].append({
                    'type': 'LOW_CTR',
                    'severity': 'HIGH',
                    'message': f'CTR ({ctr:.2%}) is significantly below benchmark ({ctr_benchmark:.2%})',
                    'recommendation': 'Review ad copy and targeting. Consider testing new ad variations.'
                })
            elif ctr > ctr_benchmark * 1.5:
                entity_insights['insights'].append({
                    'type': 'HIGH_CTR',
                    'severity': 'POSITIVE',
                    'message': f'CTR ({ctr:.2%}) is performing well above benchmark',
                    'recommendation': 'Consider increasing budget to capture more traffic.'
                })
    
            # Conversion rate insights
            if cvr < cvr_benchmark * 0.5 and metrics.clicks > 50:
                entity_insights['insights'].append({
                    'type': 'LOW_CONVERSION_RATE',
                    'severity': 'HIGH',
                    'message': f'Conversion rate ({cvr:.2%}) is below expected level',
                    'recommendation': 'Review landing page experience and conversion funnel.'
                })
    
            # Impression share insights
            if hasattr(metrics, 'search_impression_share'):
                is_value = metrics.search_impression_share
                if is_value < 0.5:
                    entity_insights['insights'].append({
                        'type': 'LOW_IMPRESSION_SHARE',
                        'severity': 'MEDIUM',
                        'message': f'Only capturing {is_value:.0%} of available impressions',
                        'recommendation': 'Increase budget or improve ad rank to capture more impressions.'
                    })
    
            # Quality score insights
            if hasattr(metrics, 'quality_score') and metrics.quality_score < 5:
                entity_insights['insights'].append({
                    'type': 'LOW_QUALITY_SCORE',
                    'severity': 'HIGH',
                    'message': f'Quality Score ({metrics.quality_score}/10) needs improvement',
                    'recommendation': 'Improve ad relevance, expected CTR, and landing page experience.'
                })
    
            if entity_insights['insights']:
                insights.append(entity_insights)
    
        return {
            'entity_type': entity_type,
            'total_analyzed': len(list(response)),
            'insights_count': len(insights),
            'insights': insights
        }
  • Entry in the _TOOL_MODULES list that maps the 'insights' label to the register_insights_tools function, which is called during server startup in _register_all_modular_tools() on line 554.
        ("insights",      "tools.insights.mcp_tools_insights",           "register_insights_tools"),
        ("batch",         "tools.batch.mcp_tools_batch",                 "register_batch_tools"),
        ("shopping_pmax", "tools.shopping_pmax.mcp_tools_shopping_pmax", "register_shopping_pmax_tools"),
        ("extensions",    "tools.extensions.mcp_tools_extensions",       "register_extension_tools"),
        ("local_app",     "tools.local_app.mcp_tools_local_app",         "register_local_app_tools"),
    ]
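The body of _register_all_modular_tools is not shown here. Below is a minimal sketch, under the assumption that the (label, module path, register function) table is consumed with importlib; it is demonstrated with a stub module, and every name besides register_insights_tools is hypothetical:

```python
import importlib
import sys
import types

def register_all_modular_tools(mcp, tool_modules):
    """Import each tool module and call its register function on the server."""
    for label, module_path, register_fn_name in tool_modules:
        module = importlib.import_module(module_path)
        register_fn = getattr(module, register_fn_name)
        register_fn(mcp)  # e.g. register_insights_tools(mcp)

# Stub standing in for tools.insights.mcp_tools_insights
stub = types.ModuleType("stub_insights")
stub.register_insights_tools = (
    lambda mcp: mcp.append("google_ads_performance_insights"))
sys.modules["stub_insights"] = stub

registered = []  # stands in for the MCP server object
register_all_modular_tools(
    registered, [("insights", "stub_insights", "register_insights_tools")])
print(registered)  # ['google_ads_performance_insights']
```

Keeping registration table-driven means adding a tool module is a one-line change to the table rather than an edit to the startup code.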
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It states the tool generates insights and recommendations, suggesting a read-only operation. However, it does not explicitly confirm whether mutations occur, nor does it mention permissions or side effects. The behavioral profile is partially transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured: purpose summary, use-case list, parameter details, and example. It is slightly verbose but front-loaded and easy to scan. Every section serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers what the tool does, why to use it, and parameter details. An output schema exists, so return value explanation is unnecessary. It lacks edge-case handling or error conditions, but for a tool of this complexity, it is adequately complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so the description must compensate. It provides detailed explanations for each parameter: customer_id format, entity_type values, optional entity_id, and date_range options. It includes an example call. This adds significant semantic value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool generates AI-powered performance insights and actionable recommendations for specific entity types (campaigns, ad groups, keywords, ads). It lists concrete use cases like low CTR, low conversion rates, etc. This distinguishes it from sibling tools that provide raw performance data or generic recommendations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for analyzing performance metrics and getting recommendations, but it does not explicitly state when to use this tool versus alternatives like google_ads_get_recommendations or google_ads_opportunity_finder. It describes the context but lacks exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

