meta_ads_split_tests_create
Create Meta Ads split tests to compare ad sets on performance metrics. Define test cells, set duration and confidence level, and let Meta determine the winning variant.
Instructions
Creates a new Split Test and returns the new study_id. Mutating, reversible via rollback_apply (rollback ends the test immediately without declaring a winner). Meta runs the test for the configured duration, then compares cells on the chosen objective (COST_PER_RESULT / CONVERSIONS / REACH / CPC / CPM). Cells must reference pre-existing ad sets; this tool does not create ad sets. After the test concludes, use meta_ads_split_tests_get for analysis.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| account_id | No | Meta Ads account ID in the format 'act_XXXXXXXXXX' (e.g. 'act_1234567890'); the leading 'act_' prefix is required. If omitted, falls back to META_ADS_ACCOUNT_ID from the configured credentials. | |
| name | Yes | Test name shown in Experiments. Should describe the hypothesis being tested. | |
| cells | Yes | Test cells (2 or more). Each cell has {name, adsets: [ad_set_id, ...]}. Meta splits traffic evenly across cells. | |
| objectives | Yes | Metrics Meta will use to rank cells. Each entry is {type: ...} where type is one of COST_PER_RESULT, CONVERSIONS, REACH, CPC, CPM. Multiple objectives produce multi-dimensional results. | |
| start_time | Yes | Test start in ISO 8601 (e.g. '2026-04-25T00:00:00+0900'). Must be in the future when the test is created. | |
| end_time | Yes | Test end in ISO 8601. Meta requires at least 4 days between start_time and end_time for statistical significance. | |
| confidence_level | No | Statistical confidence threshold for declaring a winner. Default 95 (95%). Higher values need more spend / longer duration to conclude. | |
| description | No | Free-text description of the hypothesis. Internal — not shown to end users. | |
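The constraints above (at least 2 cells, the 'act_' prefix, the minimum 4-day window, the default 95% confidence level) can be checked client-side before calling the tool. A minimal sketch, assuming the field names in the schema table; `build_split_test_payload` is a hypothetical helper, and the actual tool invocation is not shown:

```python
from datetime import datetime, timedelta

VALID_OBJECTIVES = {"COST_PER_RESULT", "CONVERSIONS", "REACH", "CPC", "CPM"}

def build_split_test_payload(account_id, name, cells, objectives,
                             start_time, end_time,
                             confidence_level=95, description=None):
    """Validate inputs and assemble a meta_ads_split_tests_create payload.

    Local validation sketch only: field names mirror the schema table;
    the tool call itself is out of scope.
    """
    if not account_id.startswith("act_"):
        raise ValueError("account_id must carry the 'act_' prefix")
    if len(cells) < 2:
        raise ValueError("a split test needs at least 2 cells")
    for obj in objectives:
        if obj["type"] not in VALID_OBJECTIVES:
            raise ValueError(f"unknown objective type: {obj['type']}")
    # Meta requires at least 4 days between start and end for significance.
    start = datetime.fromisoformat(start_time)
    end = datetime.fromisoformat(end_time)
    if end - start < timedelta(days=4):
        raise ValueError("end_time must be at least 4 days after start_time")
    payload = {
        "account_id": account_id,
        "name": name,
        "cells": cells,
        "objectives": objectives,
        "start_time": start_time,
        "end_time": end_time,
        "confidence_level": confidence_level,
    }
    if description is not None:
        payload["description"] = description
    return payload
```

Note that `datetime.fromisoformat` on Python versions before 3.11 expects a colon in the UTC offset (e.g. '+09:00' rather than '+0900'), so normalize timestamps accordingly before validating.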