google_ads_performance_report

Retrieve campaign-level performance metrics (impressions, clicks, conversions, cost) from Google Ads over a chosen reporting period. Use this report for aggregated campaign totals; for per-ad or network breakdowns, refer to other tools.

Instructions

Aggregate campaign-level performance metrics for a Google Ads account over a reporting window. Returns one row per campaign shaped as {campaign_id, campaign_name, metrics}, where the metrics object contains impressions, clicks, cost_micros, cost (currency-formatted), conversions, ctr, average_cpc_micros, average_cpc, cost_per_conversion_micros, and cost_per_conversion. Read-only; no mutation. Use this for campaign-level totals. For per-ad breakdowns use google_ads_ad_performance_report; for Google Search vs. Search Partners splits use google_ads_network_performance_report; for query-level detail use google_ads_search_terms_report; for conversion-action slicing use google_ads_conversions_performance.
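
For illustration, one returned campaign row might look like the sketch below. This is a hypothetical Python-style example: the values are invented and the exact formatting of the currency-formatted fields may differ from what the tool actually returns.

    # Hypothetical example of one row from google_ads_performance_report.
    # All values are invented; they are only internally consistent
    # (e.g. ctr = clicks / impressions, cost = cost_micros / 1e6).
    row = {
        "campaign_id": "23743184133",
        "campaign_name": "Brand - Search - US",
        "metrics": {
            "impressions": 125000,
            "clicks": 4300,
            "cost_micros": 5230000000,
            "cost": "$5,230.00",
            "conversions": 210.0,
            "ctr": 0.0344,                           # clicks / impressions
            "average_cpc_micros": 1216279,           # cost_micros / clicks
            "average_cpc": "$1.22",
            "cost_per_conversion_micros": 24904762,  # cost_micros / conversions
            "cost_per_conversion": "$24.90",
        },
    }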

Input Schema

customer_id (optional): Google Ads customer ID as a 10-digit string without dashes (e.g. '1234567890'). Falls back to GOOGLE_ADS_CUSTOMER_ID / GOOGLE_ADS_LOGIN_CUSTOMER_ID from the configured credentials when omitted.

campaign_id (optional): Restrict the report to a single campaign by numeric ID (e.g. '23743184133'). Omit to aggregate across every campaign in the account.

period (optional, default 'LAST_30_DAYS'): Reporting window for the metrics. Use a shorter window (LAST_7_DAYS / LAST_14_DAYS) when diagnosing recent changes; use LAST_90_DAYS for trend baselines.
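
As a rough usage sketch, an MCP client could invoke the tool with these parameters along the following lines. This assumes the Python MCP SDK's ClientSession.call_tool; the argument values are illustrative, and every argument can be omitted as described above.

    # Sketch of calling the tool from an established MCP ClientSession
    # (assumed setup); all arguments shown are optional.
    result = await session.call_tool(
        "google_ads_performance_report",
        arguments={
            "customer_id": "1234567890",   # falls back to env credentials if omitted
            "campaign_id": "23743184133",  # omit to aggregate across all campaigns
            "period": "LAST_7_DAYS",       # defaults to LAST_30_DAYS
        },
    )
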
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It clearly states 'Read-only; no mutation,' which is the key behavioral trait. However, it does not disclose other relevant behaviors such as rate limits, authentication requirements, or performance considerations. While the read-only disclosure is important, additional context (e.g., 'aggregate data may take time to compute') could further aid agent decision-making.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured paragraph that starts with the core purpose, lists the returned metrics, and ends with sibling tool comparisons. Every sentence contributes necessary information without redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (read-only, aggregate report with three optional parameters) and the absence of an output schema, the description sufficiently covers the return format, all metrics, and parameter behavior. It provides all context needed for an agent to use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema coverage is 100% with descriptions for all three parameters. The description adds extra value by providing usage recommendations for the 'period' parameter (e.g., 'Use a shorter window... when diagnosing recent changes; use LAST_90_DAYS for trend baselines') and explaining the fallback behavior for 'customer_id'. This enriches the schema's minimal descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool aggregates campaign-level performance metrics for a Google Ads account and specifies the return shape: campaign_id, campaign_name, and a metrics object containing numerous fields. It explicitly distinguishes itself from sibling tools by naming them and their purposes (e.g., google_ads_ad_performance_report, google_ads_network_performance_report).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes explicit guidance on when to use this tool ('for campaign-level totals') and when not to, providing direct alternatives for per-ad breakdowns, network splits, query-level detail, and conversion-action slicing. This effectively helps an agent choose correctly among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
