Glama · ZLeventer

google-ads-mcp

gads_rsa_asset_performance

Analyze asset-level performance for RSA headlines and descriptions using labels (BEST, GOOD, LOW, PENDING, LEARNING). Determine which assets to keep, test, or replace to improve ad performance.

Instructions

Asset-level performance labels (BEST / GOOD / LOW / PENDING / LEARNING) for RSA headlines and descriptions. Identifies which assets to keep, test, or replace.

Input Schema

Name         Required  Description                                                           Default
customer_id  No        Override GOOGLE_ADS_CUSTOMER_ID for this call                         (none)
ad_group_id  No        Filter to a specific ad group ID                                      (none)
date_range   No        GAQL date range literal, e.g. LAST_30_DAYS, LAST_7_DAYS, LAST_MONTH   LAST_30_DAYS
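
Given the schema above, a tool call might pass arguments like the following (all values are illustrative; every field is optional, and date_range falls back to LAST_30_DAYS when omitted):

```json
{
  "customer_id": "123-456-7890",
  "ad_group_id": "987654321",
  "date_range": "LAST_7_DAYS"
}
```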

Implementation Reference

  • src/index.ts:142-147 (registration)
    Tool registration for 'gads_rsa_asset_performance' using McpServer.tool(), binding the rsaAssetPerformance handler with its schema.
    server.tool(
      "gads_rsa_asset_performance",
      "Asset-level performance labels (BEST / GOOD / LOW / PENDING / LEARNING) for RSA headlines and descriptions. Identifies which assets to keep, test, or replace.",
      rsaAssetPerformanceSchema,
      async (args) => { try { return ok(await rsaAssetPerformance(args)); } catch (e) { return err(e); } }
    );
  • Input schema for rsaAssetPerformance tool: customer_id (optional), ad_group_id (optional), date_range (default LAST_30_DAYS).
    export const rsaAssetPerformanceSchema = {
      customer_id: z.string().optional().describe("Override GOOGLE_ADS_CUSTOMER_ID for this call"),
      ad_group_id: z.string().optional().describe("Filter to a specific ad group ID"),
      date_range: z.string().default("LAST_30_DAYS").describe("GAQL date range literal, e.g. LAST_30_DAYS, LAST_7_DAYS, LAST_MONTH"),
    };
  • Main handler function rsaAssetPerformance. Executes a GAQL query on ad_group_ad_asset_view to get asset-level performance labels (BEST/GOOD/LOW/PENDING/LEARNING) for RSA headlines and descriptions, filtered by ad_group_id and date_range.
    export async function rsaAssetPerformance(args: z.infer<z.ZodObject<typeof rsaAssetPerformanceSchema>>) {
      const customer = getCustomer(args.customer_id);
      const adGroupClause = args.ad_group_id ? `AND ad_group.id = ${args.ad_group_id}` : "";
      const rows = await customer.query(`
        SELECT
          campaign.name,
          ad_group.id,
          ad_group.name,
          ad_group_ad.ad.id,
          asset.type,
          asset.text_asset.text,
          ad_group_ad_asset_view.field_type,
          ad_group_ad_asset_view.performance_label,
          ad_group_ad_asset_view.pinned_field,
          ad_group_ad_asset_view.enabled
        FROM ad_group_ad_asset_view
        WHERE segments.date DURING ${args.date_range}
          AND asset.type = 'TEXT'
          ${adGroupClause}
        ORDER BY ad_group_ad_asset_view.performance_label ASC
        LIMIT 500
      `);
      return { rowCount: rows.length, rows };
    }
  • Imports zod for schema validation and getCustomer helper from client.ts for authenticated Google Ads API access.
    import { z } from "zod";
    import { getCustomer } from "../client.js";
  • getCustomer helper used by rsaAssetPerformance to obtain an authenticated Customer object using environment variables (developer token, client ID/secret, refresh token, customer ID).
    export function getCustomer(override?: string): Customer {
      const refresh_token = process.env.GOOGLE_ADS_REFRESH_TOKEN;
      if (!refresh_token) throw new GoogleAdsError("GOOGLE_ADS_REFRESH_TOKEN is not set");
      const customer_id = (override ?? process.env.GOOGLE_ADS_CUSTOMER_ID ?? "").replace(/-/g, "");
      if (!customer_id) throw new GoogleAdsError("GOOGLE_ADS_CUSTOMER_ID is not set and no customer_id was passed");
      const login_customer_id = process.env.GOOGLE_ADS_LOGIN_CUSTOMER_ID?.replace(/-/g, "") || undefined;
      return getApi().Customer({ customer_id, login_customer_id, refresh_token });
    }
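
The performance_label values returned by the query above map directly onto the "keep, test, or replace" guidance in the description. A minimal client-side sketch of that triage (the row shape is reduced to the fields needed here, and the helper name is hypothetical, not part of the server):

```typescript
// Reduced row shape: only the fields this sketch reads from the query result.
interface AssetRow {
  asset: { text_asset: { text: string } };
  ad_group_ad_asset_view: { performance_label: string; field_type: string };
}

// Suggested triage: keep BEST/GOOD assets, replace LOW ones,
// and leave PENDING/LEARNING assets alone until labels settle.
function triageAssets(
  rows: AssetRow[]
): Record<"keep" | "watch" | "replace", string[]> {
  const buckets: Record<"keep" | "watch" | "replace", string[]> = {
    keep: [],
    watch: [],
    replace: [],
  };
  for (const row of rows) {
    const label = row.ad_group_ad_asset_view.performance_label;
    const text = row.asset.text_asset.text;
    if (label === "BEST" || label === "GOOD") buckets.keep.push(text);
    else if (label === "LOW") buckets.replace.push(text);
    else buckets.watch.push(text); // PENDING / LEARNING
  }
  return buckets;
}

// Example usage with mock rows:
const demo: AssetRow[] = [
  {
    asset: { text_asset: { text: "Free Shipping" } },
    ad_group_ad_asset_view: { performance_label: "BEST", field_type: "HEADLINE" },
  },
  {
    asset: { text_asset: { text: "Shop Now" } },
    ad_group_ad_asset_view: { performance_label: "LOW", field_type: "HEADLINE" },
  },
];
console.log(triageAssets(demo));
```

The cutoffs here (GOOD counts as keep, PENDING/LEARNING as watch) are one reasonable reading of the labels, not something the tool itself enforces.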
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are absent, so the description fully bears the transparency burden. It states the tool outputs performance labels but does not disclose behavioral traits such as data freshness, rate limits, authentication requirements, or whether the operation is read-only. The description is minimal beyond purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, efficiently conveying the core function and value. Every word contributes to understanding, with no extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description does not detail the return format or structure of the asset labels. It provides a high-level interpretation guide ('keep, test, or replace') but lacks specifics on how the data is returned, which may limit agent understanding for programmatic use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so every parameter is already described in the schema. The tool description adds no meaning or context beyond those definitions, so it contributes no incremental parameter guidance.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides asset-level performance labels (BEST/GOOD/LOW/PENDING/LEARNING) for RSA headlines and descriptions, specifying the purpose of identifying which assets to keep, test, or replace. It distinguishes itself from siblings like gads_list_assets by focusing on performance labels rather than listing assets.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when needing RSA asset performance labels, but does not explicitly state when to use this tool versus alternatives like gads_keyword_performance or gads_ad_group_performance. No exclusions or alternative tool references are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ZLeventer/google-ads-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.