
meta_ads_split_tests_create

Create Meta Ads split tests to compare ad sets on performance metrics. Define test cells, set duration and confidence level, and let Meta determine the winning variant.

Instructions

Creates a new Split Test. Returns the new study_id. Mutating, reversible via rollback_apply (rollback ends the test immediately without declaring a winner). Meta runs the test for the configured duration, then compares cells on the chosen objectives (COST_PER_RESULT / CONVERSIONS / REACH / CPC / CPM). Cells must reference pre-existing ad sets; this tool does not create ad sets. For post-conclusion test analysis, use meta_ads_split_tests_get.

Input Schema

account_id (required)
Meta Ads account ID in the format 'act_XXXXXXXXXX' (e.g. 'act_1234567890'). Optional — falls back to META_ADS_ACCOUNT_ID from the configured credentials. The leading 'act_' prefix is required.

name (required)
Test name shown in Experiments. Should describe the hypothesis being tested.

cells (required)
Test cells (2 or more). Each cell has {name, adsets: [ad_set_id, ...]}. Meta splits traffic evenly across cells.

objectives (required)
Metrics Meta will use to rank cells. Each entry is {type: COST_PER_RESULT | CONVERSIONS | REACH | CPC | CPM}. Multiple objectives produce multi-dimensional results.

start_time (required)
Test start in ISO 8601 (e.g. '2026-04-25T00:00:00+0900'). Must be in the future when the test is created.

end_time (required)
Test end in ISO 8601. Meta requires at least 4 days between start_time and end_time for statistical significance.

confidence_level (optional, default 95)
Statistical confidence threshold for declaring a winner. Default 95 (95%). Higher values need more spend / longer duration to conclude.

description (optional)
Free-text description of the hypothesis. Internal — not shown to end users.
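
As a concrete illustration, a minimal arguments payload satisfying these constraints might look like the sketch below. The account ID, ad set IDs, and names are placeholders: the two ad sets must already exist, start_time must be in the future, and the window here spans 5 days, above the 4-day minimum.

{
  "account_id": "act_1234567890",
  "name": "Video vs static creative",
  "cells": [
    {"name": "Static image", "adsets": ["23850000000000001"]},
    {"name": "Video", "adsets": ["23850000000000002"]}
  ],
  "objectives": [
    {"type": "COST_PER_RESULT"}
  ],
  "start_time": "2026-04-25T00:00:00+0900",
  "end_time": "2026-04-30T00:00:00+0900",
  "confidence_level": 95,
  "description": "Hypothesis: video creative lowers cost per result versus static."
}

A successful call returns the study_id of the new study.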
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses that the tool is mutating, reversible via rollback_apply, runs for the configured duration, and compares cells on the specified objectives. It also states that the tool returns a study_id. While it omits specific auth requirements and rate limits, the details provided are sufficient for understanding the core behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is six sentences, each providing distinct value: creation purpose, return value, behavioral notes (mutating/reversible), test mechanism, prerequisite, and a pointer to a sibling tool. No wasted words, and key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema available, the description itself covers the return value. It explains behavioral details and prerequisites and directs to the analysis tool. It lacks error-handling notes, but overall it provides sufficient context for a create tool with well-described parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and each parameter has a description. The tool description adds general context (e.g., that cells must reference existing ad sets and that a study_id is returned) but does not significantly enhance parameter understanding beyond the schema. The baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool creates a new Split Test and returns a study_id, and it distinguishes the tool from siblings like meta_ads_split_tests_get and meta_ads_split_tests_end by specifying that analysis happens post-conclusion. It also clarifies that ad sets are not created, preempting confusion with ad-set creation tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit context: when to create a split test, prerequisites (pre-existing ad sets for cells), reversibility via rollback_apply, and a pointer to the get tool for analysis after conclusion. This effectively guides appropriate usage.
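
As a sketch of that lifecycle (only this tool's input schema is documented on this page, so the argument shapes assumed for the sibling tools below are illustrative, not confirmed):

meta_ads_split_tests_create {...payload as above...}   returns {"study_id": "<id>"}
(wait for end_time to pass)
meta_ads_split_tests_get {"study_id": "<id>"}           per-cell results on each objective
rollback_apply (at any point before conclusion)         ends the test immediately, no winner declared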

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
