google_ads_auction_insights_analyze

Analyze campaign impression-share metrics and get human-readable insights about competitive position. Highlights when search impression share is low, rank or budget losses are high, or top-of-page share is limited.

Instructions

Interpret a campaign's impression-share metrics and surface human-readable insights about competitive position. Returns {campaign_id, campaign_name, period, impression_share_metrics:{search_impression_share, search_rank_lost_is, search_budget_lost_is, search_top_is, search_abs_top_is, note}, insights:[strings], note}. Each impression-share value is a percentage (0-100, rounded to 1 decimal) or None. Insights fire when IS < 50/70%, rank-lost > 20%, budget-lost > 20%, or abs-top-IS < 20%. Read-only. Note: Google Ads API v23 removed competitor-level auction_insight (domain overlap, outranking share); only impression-share proxies are returned. For the raw metrics without insights use google_ads_auction_insights_get; full competitor data is only available in the Google Ads UI.
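The threshold logic described above can be sketched as follows. This is a hypothetical reconstruction based only on the stated trigger conditions (IS < 50/70%, rank-lost > 20%, budget-lost > 20%, abs-top-IS < 20%); the exact wording, tiering, and thresholds of the real tool's insights are assumptions, not its actual implementation:

```python
def derive_insights(metrics):
    """Sketch of the documented insight triggers.

    `metrics` mirrors the impression_share_metrics object: percentages
    (0-100) or None when the API returned no value. The two-tier reading
    of "IS < 50/70%" (severe below 50, moderate below 70) is an assumption.
    """
    insights = []

    is_share = metrics.get("search_impression_share")
    if is_share is not None:
        if is_share < 50:
            insights.append(f"Search impression share is low ({is_share}%).")
        elif is_share < 70:
            insights.append(f"Search impression share is moderate ({is_share}%).")

    rank_lost = metrics.get("search_rank_lost_is")
    if rank_lost is not None and rank_lost > 20:
        insights.append(f"High impression share lost to rank ({rank_lost}%).")

    budget_lost = metrics.get("search_budget_lost_is")
    if budget_lost is not None and budget_lost > 20:
        insights.append(f"High impression share lost to budget ({budget_lost}%).")

    abs_top = metrics.get("search_abs_top_is")
    if abs_top is not None and abs_top < 20:
        insights.append(f"Low absolute top-of-page impression share ({abs_top}%).")

    return insights
```

Note how each check skips None values, matching the description's note that any impression-share metric may be absent.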

Input Schema

- customer_id (optional): Google Ads customer ID as a 10-digit string without dashes (e.g. '1234567890'). Falls back to GOOGLE_ADS_CUSTOMER_ID / GOOGLE_ADS_LOGIN_CUSTOMER_ID from the configured credentials when omitted.
- campaign_id (required): Campaign ID as a numeric string without dashes (e.g. '23743184133'). Obtain via google_ads_campaigns_list.
- period (optional, default 'LAST_30_DAYS'): Reporting window for the metrics. Use a shorter window (LAST_7_DAYS / LAST_14_DAYS) when diagnosing recent changes; use LAST_90_DAYS for trend baselines.
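A minimal sketch of assembling and sanity-checking these arguments client-side. The payload shape follows the schema above, but the helper itself and the closed set of period values are assumptions for illustration, not taken from the server:

```python
# Period presets named in the schema guidance; the server may accept more.
VALID_PERIODS = {"LAST_7_DAYS", "LAST_14_DAYS", "LAST_30_DAYS", "LAST_90_DAYS"}

def build_args(campaign_id, customer_id=None, period="LAST_30_DAYS"):
    """Assemble an argument dict for google_ads_auction_insights_analyze."""
    if not campaign_id.isdigit():
        raise ValueError("campaign_id must be a numeric string without dashes")
    if customer_id is not None and not (customer_id.isdigit() and len(customer_id) == 10):
        raise ValueError("customer_id must be a 10-digit string without dashes")
    if period not in VALID_PERIODS:
        raise ValueError(f"unsupported period: {period}")

    args = {"campaign_id": campaign_id, "period": period}
    if customer_id is not None:
        # Omitting customer_id lets the server fall back to the
        # GOOGLE_ADS_CUSTOMER_ID / GOOGLE_ADS_LOGIN_CUSTOMER_ID credentials.
        args["customer_id"] = customer_id
    return args
```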
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It explicitly states 'Read-only' and explains that competitor-level data is no longer available via the API, only impression-share proxies. It details the conditions under which insights fire. While comprehensive, it does not mention rate limits or the specific permissions required, though the read-only nature suggests safety. A score of 4 reflects good transparency with minor gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is relatively lengthy but well-structured and front-loaded with the tool's purpose. Each sentence adds value, including the return structure, insight thresholds, API limitation context, and sibling reference. While dense, it is concise for the complexity of the tool. A score of 4 reflects very good structure and efficiency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description explicitly lists the return structure and notes that impression-share values can be None. It explains the conditions under which insights fire and references the API limitation and sibling tool. All three parameters are documented in the schema, and the description adds meaningful context about the metrics and usage. The tool is well-documented for its complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with clear parameter descriptions. The description adds some usage guidance for the 'period' parameter (e.g., shorter windows for recent changes) but does not elaborate on the other parameters. Since the schema already provides sufficient semantics, the description adds only marginal value beyond that guidance, so the score stays at the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool interprets impression-share metrics and surfaces human-readable insights. It distinguishes from sibling google_ads_auction_insights_get which returns raw metrics without insights. The verb 'Interpret' and resource 'campaign impression-share metrics' are specific, and the differentiation from alternatives is explicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool versus the sibling: 'For the raw metrics without insights use google_ads_auction_insights_get; full competitor data is only available in the Google Ads UI.' It also provides guidance on period selection: 'Use a shorter window (LAST_7_DAYS / LAST_14_DAYS) when diagnosing recent changes; use LAST_90_DAYS for trend baselines.' This offers clear context and alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/logly/mureo'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.