
Google Ads MCP Server

by johnoconnor0

google_ads_auto_apply_safe_recommendations

Automatically apply low-risk Google Ads recommendations like keyword match upgrades and responsive search ad suggestions. Use dry_run to preview changes before applying.

Instructions

Auto-apply low-risk, high-impact recommendations.

This tool identifies "safe" recommendations that are unlikely to negatively impact performance and applies them automatically. Safe recommendations include:

  • Keyword match type upgrades (exact → phrase → broad)

  • Responsive search ad suggestions

  • Search partners opt-in

  • Optimize ad rotation

Higher risk recommendations (budget increases, bidding strategy changes) are excluded and should be reviewed manually.
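The safe/risky split described above can be sketched as a simple type filter. A minimal sketch in Python; the type names are illustrative placeholders, not the actual Google Ads API enums:

```python
# Illustrative sketch of the safe-recommendation filter described above.
# Type names are placeholders, not real Google Ads RecommendationType values.
SAFE_TYPES = {
    "KEYWORD_MATCH_TYPE",      # keyword match type upgrades
    "RESPONSIVE_SEARCH_AD",    # responsive search ad suggestions
    "SEARCH_PARTNERS_OPT_IN",  # search partners opt-in
    "OPTIMIZE_AD_ROTATION",    # optimize ad rotation
}

def filter_safe(recommendations):
    """Keep only recommendations whose type is on the safe list;
    everything else (budgets, bidding changes) is left for manual review."""
    return [r for r in recommendations if r["type"] in SAFE_TYPES]

recs = [
    {"id": 1, "type": "KEYWORD_MATCH_TYPE"},
    {"id": 2, "type": "CAMPAIGN_BUDGET"},  # excluded: higher risk
]
print(filter_safe(recs))  # only the match-type upgrade survives
```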

Args:

  • customer_id: Customer ID (without hyphens)

  • dry_run: If True, shows what would be applied without actually applying (default: True)

Returns: List of recommendations that were (or would be) applied

Example:

# Preview what would be applied
google_ads_auto_apply_safe_recommendations(
    customer_id="1234567890",
    dry_run=True
)

# Actually apply the recommendations
google_ads_auto_apply_safe_recommendations(
    customer_id="1234567890",
    dry_run=False
)

Warning: Even "safe" recommendations can impact performance. Use dry_run=True first to review what would be applied.
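The dry-run-first pattern the warning recommends can be wrapped in a small helper. A sketch assuming a generic `call_tool` function as a hypothetical stand-in for however your MCP client invokes tools:

```python
def preview_then_apply(call_tool, customer_id):
    """Preview safe recommendations, then apply only if any were found.
    `call_tool` is a hypothetical stand-in for an MCP client's invoke."""
    preview = call_tool(
        "google_ads_auto_apply_safe_recommendations",
        customer_id=customer_id,
        dry_run=True,
    )
    if not preview:
        return []  # nothing safe to apply
    # Inspect `preview` here (log it, surface it to a human) before committing.
    return call_tool(
        "google_ads_auto_apply_safe_recommendations",
        customer_id=customer_id,
        dry_run=False,
    )
```

The point of the wrapper is that the mutating call (`dry_run=False`) can never run without the preview having happened first.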

Input Schema

Name         Required  Description                                Default
customer_id  Yes       Customer ID (without hyphens)              —
dry_run      No        If True, preview without applying          True

Output Schema

Name    Required  Description                                        Default
result  Yes       Recommendations that were (or would be) applied    —
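Rendered as JSON Schema (the page's alternate view of these tables), the input and output shapes would look roughly like the following sketch; the property types are assumptions, and the actual schemas served by the MCP server may differ:

```python
# Approximate JSON Schemas for the tables above; a sketch, not the
# authoritative schemas served by the server.
INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "customer_id": {
            "type": "string",
            "description": "Customer ID (without hyphens)",
        },
        "dry_run": {"type": "boolean", "default": True},
    },
    "required": ["customer_id"],
}

OUTPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "result": {
            "type": "array",
            "description": "Recommendations that were (or would be) applied",
        },
    },
    "required": ["result"],
}
```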
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It explains that the tool applies recommendations automatically, excludes high-risk ones, and includes a warning about potential performance impact. It also describes the dry_run parameter's behavior. It could be more explicit about the mutation effect, but the description is transparent enough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear purpose statement, bullet-list of safe recommendations, parameter explanations with examples, and a warning. Every sentence is essential and contributes to understanding, with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity (2 parameters, output schema exists), the description covers all necessary aspects: what the tool does, what recommendations are included/excluded, how to use dry_run, and a cautionary note. The examples further clarify usage. It is complete for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so the description must compensate. It provides clear semantics for both parameters: customer_id format (without hyphens) and dry_run (preview vs. apply, default true). The example also demonstrates usage, adding significant value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool auto-applies low-risk, high-impact recommendations, and lists specific safe recommendations (keyword match type upgrades, responsive search ad suggestions, etc.) and explicitly excludes higher-risk ones (budget increases, bidding strategy changes). This differentiates it from other apply tools among siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use the tool (for safe recommendations) and when not to (higher-risk recommendations should be reviewed manually). It also recommends using dry_run=True first. However, it does not explicitly name alternative sibling tools for manual review.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

