google_ads_auction_insights_get

Retrieve raw impression-share metrics for a Google Ads campaign, including search impression share, rank-lost and budget-lost impression share, and top and absolute top impression share. Returns percentages or error details for auction-insights analysis.

Instructions

Fetch raw impression-share metrics for one Google Ads campaign. Returns a list with a single entry: {campaign_id, campaign_name, search_impression_share, search_rank_lost_is, search_budget_lost_is, search_top_is, search_abs_top_is, note} — every IS field is a percentage (0-100, float, rounded to 1 decimal) or None. On failure returns a single-element list with {error:'auction_insights_unavailable'|'no_data', reason, hint}. Read-only. Note: Google Ads API v23 removed competitor-level auction_insight (domain, overlap, outranking); only impression-share proxies are returned. For a version with human-readable insights layered on top use google_ads_auction_insights_analyze.
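
For illustration, a success payload and a failure payload under this contract might look like the following Python literals; the campaign name, metric values, and error wording below are invented examples, not actual API output.

success = [{
    "campaign_id": "23743184133",
    "campaign_name": "Brand - Search - US",  # hypothetical name
    "search_impression_share": 72.4,         # every IS field: percentage 0-100, 1 decimal, or None
    "search_rank_lost_is": 18.1,
    "search_budget_lost_is": 9.5,
    "search_top_is": 61.0,
    "search_abs_top_is": 34.7,
    "note": None,
}]

failure = [{
    "error": "auction_insights_unavailable",  # or "no_data"
    "reason": "No Search impression-share rows for the selected period.",  # wording is illustrative
    "hint": "Widen the period (e.g. LAST_90_DAYS) or confirm the campaign serves on Search.",  # illustrative
}]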

Input Schema

customer_id (optional): Google Ads customer ID as a 10-digit string without dashes (e.g. '1234567890'). Falls back to GOOGLE_ADS_CUSTOMER_ID / GOOGLE_ADS_LOGIN_CUSTOMER_ID from the configured credentials when omitted.
campaign_id (required): Campaign ID as a numeric string without dashes (e.g. '23743184133'). Obtain via google_ads_campaigns_list.
period (optional, default 'LAST_30_DAYS'): Reporting window for the metrics. Use a shorter window (LAST_7_DAYS / LAST_14_DAYS) when diagnosing recent changes; use LAST_90_DAYS for trend baselines.
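
A minimal sketch of a valid argument set for this schema, as a Python dict; only campaign_id is required, and the customer_id fallback described above happens on the server side, so the commented-out line is optional.

arguments = {
    "campaign_id": "23743184133",    # numeric string, obtained via google_ads_campaigns_list
    "period": "LAST_7_DAYS",         # omit to use the default LAST_30_DAYS
    # "customer_id": "1234567890",   # optional 10-digit string without dashes; when omitted the
    #                                # server falls back to GOOGLE_ADS_CUSTOMER_ID /
    #                                # GOOGLE_ADS_LOGIN_CUSTOMER_ID from the configured credentials
}
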
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no structured annotations to lean on, the description fully covers behavior: it declares the tool read-only, details success and failure return formats (including specific error types), and notes API limitations (v23 removed competitor-level data). This is comprehensive coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a well-structured paragraph, starting with purpose and output, then failure modes, read-only status, API note, and sibling reference. Every sentence adds value, and there is no unnecessary verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool has three parameters, no output schema, and many sibling tools, the description covers purpose, return format (with field names), error handling, the read-only flag, the API version impact, and a pointer to the alternative tool. That is enough for an agent to invoke it correctly on the first attempt.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% parameter description coverage, so the baseline is 3. The tool description does not add meaning to the parameters beyond what the schema already provides; it focuses on the return format and API limitations instead, giving no additional parameter context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Fetch raw impression-share metrics for one Google Ads campaign', naming a concrete verb and resource. It distinguishes itself from the sibling tool google_ads_auction_insights_analyze by noting that this tool returns raw metrics while the sibling layers human-readable insights on top.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides good context on when to use this tool versus the analyze version, explicitly pointing to the sibling for layered insights. It also implies, by emphasizing raw metrics, that it is the right choice when unprocessed data is needed. However, it does not explicitly state when not to use it, which keeps it from a 5.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
