At the end of August 2017, Google announced a change to the ad rotation options in AdWords campaign experiments. Settings that previously allowed advertisers to select “Optimize for Clicks,” “Optimize for Conversions,” “Rotate Evenly,” or “Rotate Indefinitely” were replaced with the simplified and consolidated “Optimize” or “Rotate Indefinitely”:
This switch came with solid reasoning behind it: Google’s machine learning is growing ever more capable of matching a given ad’s message to the audience that will best receive it (optimizing for clicks).
In addition, it is assumed that post-click conversion behavior has little to do with pre-click messaging and much more to do with a user’s post-click landing page experience. Thus, pushing A.I. to maintain the same effectiveness with an “Optimize for Conversions” rotation as “Optimize for Clicks” is impossible — there are simply too many independent variables to consider. As Mary Hartman found, optimized ad rotation means more ad variety and volume.
However, for seasoned ad testers, this announcement to AdWords experiments also brought a healthy level of concern. Prior to its removal, “Optimize for Conversions” had been a useful tool for advertisers running split post-click landing page tests (using a single version of ad copy with different URLs) or those looking for the best combination of ad copy and post-click landing page experience to drive conversions.
While the loss of “Optimize for Conversions” was an initial blow to the PPC ad testing community, I am here to show how this update is actually moving the industry forward by encouraging advertisers to take advantage of both click-optimized ad messaging and comparative post-click landing page testing.
Have your cake and eat it, too
It feels a little unnatural, as a skeptical PPC-er, to believe that Google taking away one of my favorite AdWords features could be a good thing. So let me make my case to you:
With the previous ad rotation settings, we were forced to choose between:
- Letting AdWords’ machine learning determine the “best” ad and entering that version into auctions most frequently
- Giving all ads an equal chance to enter the auction, regardless of past performance
Since factors like CTR contribute to Ad Rank, the latter option often meant an increase in lost impression share, fewer clicks, and missed conversions: lower-quality ads entered the auction at an equal rate, but their low Ad Rank prevented them from actually showing.
On the other hand, the first option frequently resulted in unnecessarily long testing cycles and frustratingly biased data when reviewing for statistical significance. The favored ad variation was entered into auctions more frequently, giving it greater impression and click volume and more leeway in performance (with more impressions, each lost click mattered less to the ad’s CTR).
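To see why uneven delivery skews a naive CTR comparison, here is a small illustrative calculation (the impression and click counts are hypothetical): the standard error of a CTR estimate shrinks with impression volume, so the favored ad’s CTR looks far more stable than the starved variant’s, even when both ads perform identically.

```python
import math

def ctr_standard_error(clicks: int, impressions: int) -> float:
    """Standard error of a CTR estimate, treating each impression
    as an independent Bernoulli trial."""
    p = clicks / impressions
    return math.sqrt(p * (1 - p) / impressions)

# Hypothetical: both ads have a true 5% CTR, but optimized rotation
# gives the favored ad 100x the impression volume of the other.
favored = ctr_standard_error(clicks=5000, impressions=100_000)
starved = ctr_standard_error(clicks=50, impressions=1_000)

print(f"favored ad SE: {favored:.4f}")  # narrow estimate
print(f"starved ad SE: {starved:.4f}")  # 10x wider estimate
```

With 100x the impressions, the favored ad’s CTR estimate is ten times tighter, which is exactly why comparing the two raw CTRs mid-test is misleading.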
In short, none of the ad rotation settings were truly ideal for a quality ad or post-click landing page test:
Given this, why do I claim that switching to allow only “Optimize” and “Rotate Indefinitely” rotations is a step forward for ad testing? Because Google is now forcing advertisers to start testing the way we all should have been before: with AdWords campaign drafts and experiments for clean delivery and clear results. With Experiments, tests can take advantage of optimized ad delivery (for maximum impression and click traffic), while also identifying user responses to different post-click landing page experiences.
AdWords experiments for new and improved ad testing
If you haven’t worked with AdWords experiments before, this feature allows you to create a duplicate of any Search or Display campaign (called a Draft), tweak the variables/settings you want to test, and run it alongside the original campaign while the two share budget and impression opportunities at a level you decide. Post-click landing page testing is just one of the many AdWords ad experiments you can start running today.
To see Experiments in action, let’s walk through the setup of a basic split test for post-click landing pages in a Search campaign.
Step 1: Create a draft
Because all experiments stem from drafts, we first need to create the modified version of our campaign for testing. In this instance, we’ll select a campaign that we know has multiple ad copy variations that all lead to a single post-click landing page URL. This is the ideal candidate for a split post-click landing page test using Drafts and Experiments.
In the new AdWords UI, navigate to the “Drafts & experiments” tab at the bottom of the left sidebar. If other variations of the campaign have been created previously, you will see those listed upon opening the tab. Otherwise, you will see an empty drafts page with an invitation to create a new draft:
Click the blue “+” to get started and name your draft. The draft name will be different from the name of your experiment, so you can be as general or specific as you like. It is helpful to indicate the variable you will be testing in the draft name, so you can easily reference it later on:
When we hit “SAVE” on this screen, AdWords will automatically open the new draft to make any needed modifications.
Step 2: Modify your draft
To set up our post-click landing page test, we navigate to the “Ads & extensions” tab within the draft campaign, check the box at the top of the page to select all the ads, and, from the blue bar that appears, choose Edit >> Change text ads:
This provides the option to Edit, Find and replace, Add to text, or Change case of the ad copy. For our test, we choose Find and replace, then enter the old post-click landing page URL (the Control) in the “Find” section and choose “in final URL” from the dropdown menu. After inserting the new post-click landing page URL (the Test) in the “replace” section, we can preview and/or apply the changes:
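Conceptually, this find-and-replace step is just a string substitution applied to every ad’s final URL while leaving the ad copy untouched. A rough sketch of the idea (the ad records and URLs below are hypothetical; the real edit happens in the AdWords UI, not through code):

```python
# Hypothetical ad records standing in for the campaign's text ads
ads = [
    {"headline": "Spring Sale - 20% Off", "final_url": "https://example.com/lp-control"},
    {"headline": "Shop the Spring Sale",  "final_url": "https://example.com/lp-control"},
]

CONTROL_URL = "https://example.com/lp-control"  # original landing page
TEST_URL = "https://example.com/lp-test"        # new page variation

# "Find and replace ... in final URL": build the draft's ads by
# swapping only the destination, keeping the copy identical
draft_ads = [
    {**ad, "final_url": ad["final_url"].replace(CONTROL_URL, TEST_URL)}
    for ad in ads
]

for ad in draft_ads:
    print(ad["headline"], "->", ad["final_url"])
```

The key property, mirrored in the AdWords workflow, is that the original campaign’s ads are never modified; only the draft’s copies point at the new URL.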
We now have two versions of our campaign with identical ad copy leading to two different destinations. It’s time to launch the experiment and see which page is the top performer.
Step 3: Set up the experiment
Once the draft is ready to launch, with all test variable modifications in place, we simply hit “APPLY” in the center at the top of the page. This brings up the option to apply changes directly or run an experiment. We opt to run an experiment, and now have a few last decisions to make about the campaign structure:
Naming the experiment is actually much more important than naming the draft. Experiments appear in the Campaign view in the new AdWords UI, so if you want to easily see your experiment campaign in line with its source campaign, it is best to begin the experiment name with the source campaign’s name. It is also useful to indicate, as mentioned, which variable is being tested (with as much specificity as you like).
For example, as we test a new variation of one of the core post-click landing pages in our Branded campaign, we use the name “Branded – 004v2 LP Test.” This indicates both the source campaign and the type of experiment, with added detail about which URL is being tested.
After setting start and end dates for the experiment and determining how budget should be split, the experiment is ready to save and run.
Step 4: Evaluate experiment results
As tests run, you can monitor their progress by visiting the “Ad groups” tab within any experiment campaign. You will see a results module at the top of the page, like the one below.
If your selected date range overlaps the testing period, results show data from the overlapping days only. If there is no overlap between your current date range selection and the testing period, results for the full testing period are shown instead.
You can adjust your selected date range to view results as you choose. The KPIs shown can also be customized using the dropdown above each reported metric. The percentage below each metric indicates the relative lift or drop in Test performance compared to the Control, and the grayed figures and blue stars flag results that have reached statistical significance.
This is a great way to evaluate post-click landing page performance as the test progresses. Furthermore, it gives you the power to extend the testing date range or conclude early based on statistical significance of data collected. Once a test does reach statistical significance, it’s also a breeze to end the experiment (if the Control won) or apply it (if the Test was the winner) using the respective links at the top of the page.
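For conversion-rate comparisons like this, statistical significance is commonly assessed with a two-proportion z-test. AdWords doesn’t document its exact method, so this is a general-purpose sketch using Python’s standard library, with hypothetical conversion counts:

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test comparing two conversion rates.
    Returns (z statistic, p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both pages convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical: Control page converts 40/1000 clicks, Test page 60/1000
z, p = two_proportion_z_test(40, 1000, 60, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 here, significant at 95%
```

Running your own test like this on exported data is a useful sanity check before ending an experiment early based on the in-UI significance markers.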
Closing thoughts on AdWords campaign experiments
With all the changes AdWords rolled out in 2017, it is clear that experiments will soon become the foundation of great PPC testing. Creating tests is quite easy, but the benefits extend far beyond that. Experiments allow advertisers to harness the power of machine learning while still providing visibility into certain elements of account behavior.
One word of caution, however, to those who may be tempted to embrace experiments a bit too readily: only one experiment can run per campaign at any given time. This means that campaigns running ad copy tests cannot simultaneously test smart bidding strategies, bid modifiers, custom audience targeting, etc. It is important to keep that limitation in mind as you plan and schedule various tests to run.
Nonetheless, in the case of post-click landing page testing, experiments are my new favorite AdWords feature. I no longer have to settle for testing a single variation of ad copy to run a split post-click landing page test, nor suffer through multi-variable testing with imperfect ad rotation and impression data. And, possibly most exciting of all, I can finally view the statistical significance of my tests in a quick glance, without any downloads, pivot tables, or statistical formulas required!