In Join or Die, It’s Time To Embrace Google Automation, I outlined the case for trusting Google machine learning and the overarching benefits that are unlocked with new technology. But unfortunately it’s not as simple as clicking a button, switching to automation, and retiring to Costa Rica.
We all need to learn about this technology and how to make it work in our favor. It's a complicated subject, and when used incorrectly, this technology can be disastrous.
We recently inherited an account that had relied on manual bidding. As we went through the account together and found issues with campaign structure and conversion tracking, a colleague of mine observed that they were fortunate to not have been relying on any automation, as the machine would have been optimizing against their goals.
It was humbling to consider the dangerous implications of using this technology incorrectly, either through poor data integrity or negligence. This post is an attempt to help limit the latter.
Much of this information is not publicly available and is a result of countless conversations I've had with Google's product team over the last 18 months or so (I also passed this article along to them to ensure its validity before publishing).
All of this information has been confirmed by our own experience managing smart-bidding campaigns. We've tested all of these strategies at great length, and you'll find an example case study toward the bottom of the article.
We have celebrated wins and learned from losses along the way... our approach to automation is algorithmic in its own right. We continue to run countless tests, confirming or discarding hypotheses of our own and increasing the probability of particular outcomes given existing datasets.
I hope that our learnings serve as a meaningful Bayesian inference on your own educational journey. That is, I hope that this information serves as a jump off point for you to confidently run tests and develop new hypotheses of your own about how to properly use this technology.
Those who are reluctant to embrace automation often say that it either doesn't work or it won't work for their specific account. I disagree. It's likely that they experimented and failed because they did not put in the time and effort to learn how to make it work.
So in the end, if you still choose to be a Luddite, at least be an informed Luddite.
Here’s a summary of what you should understand before experimenting with automated bidding strategies:
Many digital advertisers have carelessly used the word "automation" to describe both algorithmic and statistical models. This is an incomplete and incorrect viewpoint.
AdWords scripts are not algorithms, they are statistical models. Scripts might automate tasks, but you should not think of them as true automation. This label should be reserved for advanced technology that provides more value than a time-saving script.
What separates an algorithm from a statistical model is the ability to learn, without increased human input, and improve over time to reach a goal.
It’s a living, breathing machine that is constantly asking new questions and looking to confirm various hypotheses on an ongoing basis… increasing its own intelligence along the way.
Compare a bidding algorithm to, say, the widely used (and now irrelevant) Bid to Position script. The script is a model that will adjust manual bids up or down with the goal of reaching a desired average ad position.
Think about that for a moment...
...with the goal of reaching a desired average ad position.
That's not a real goal, but more on that later.
The implementation of this script is incredibly manual. The advertiser must determine what the desired average position is for each keyword, and will likely have to manually change that desired position over time.
The script never gets smarter. It runs once an hour and asks a series of questions:
That’s it. The script will not learn from mistakes or victories and it will do nothing to help you improve performance over time unless a human being manually makes changes to its inputs.
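To make that contrast concrete, here's a minimal sketch of Bid to Position-style logic (my own illustration in Python, not the actual AdWords script). Every name and number below is hypothetical; the point is that it's a fixed rule whose inputs only change when a human changes them.

```python
# Hypothetical sketch of Bid to Position logic: a static rule, not an algorithm.
# The target position, step size, and bid cap are all human inputs.

def bid_to_position_step(current_bid, avg_position, target_position,
                         step=0.10, max_bid=5.00):
    """One hourly pass: nudge the bid toward the desired average position."""
    if avg_position > target_position:           # ad is showing lower on the page than desired
        return min(current_bid + step, max_bid)  # raise the bid, capped by a human-set limit
    if avg_position < target_position:           # ad is showing higher than desired
        return max(current_bid - step, 0.01)     # lower the bid
    return current_bid                           # at target: do nothing

# Run this a thousand times and the rule never changes -- all the "learning" is on the human.
print(bid_to_position_step(current_bid=1.00, avg_position=3.4, target_position=2.0))  # 1.10
```

Nothing in that loop ever asks whether the target position itself is the right target, which is exactly the gap between a script and an algorithm.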
Also, it’s worth noting that average position is a vanity metric that only helps you make sense of real performance metrics. Making a desired average position a goal in itself is unhelpful, and even dangerous.
Despite what some may think, an average position of 1 will not actually put money in the bank.
A separate flaw in this model is that you might be achieving an average position of 1, but only on the second page of the search results. In this case, your bids are much too low, but the script is not intelligent enough (and does not have the necessary feedback loop) to correct itself.
These are some of the reasons why Google is sunsetting the average position metric.
Real automation accomplishes two things that scripts do not:
(I also just want to say that I will always love the bid to position script because it saved me a ton of time over the years and provided a ton of value to our agency. But we now have more advanced means of reaching our goals.)
Training an algorithm is like training a puppy. Algorithms respond to positive and negative feedback and care deeply about being rewarded by their owner.
The only difference is that a puppy is rewarded with a treat, whereas the algorithm is rewarded with more budget.
Predicted Conversion Rate is the main variable that drives all Google bidding algorithms. The differences lie in the end goal and the unintended outputs.
Maximize Conversions is the most basic of the Google Smart Bidding algorithms. As its name suggests, this algorithm works to obtain the largest quantity of conversions, and essentially ignores all other outputs (Cost-Per-Click, Cost-Per-Conversion, ROAS, etc.).
It’s useful to imagine conversions as something that you need to purchase. All conversions are not created equal, and some conversions are more expensive than others.
By selecting Maximize Conversions as your bidding strategy, you are allowing Google to buy you all kinds of conversions, regardless of their cost.
The conversions you buy will fall along the following line:
With an unlimited budget, you will continue to purchase more and more expensive conversions and acquire as many conversions as possible in your given market.
Average CPC and CPA will skyrocket, but you’ll capture the greatest amount of conversions and become a market leader.
While certainly appealing, this is not an economically viable scenario for most advertisers.
Lesson #1: Maximize Conversions is best used in campaigns with a limited budget.
If you limit the amount of budget that can be spent in a given day, it forces the algorithm to focus on the least expensive conversions, due to the end goal of acquiring the largest quantity of conversions.
Say for example that your potential conversions will range in cost from $1 to $100 per conversion. If your daily budget is $100, the algorithm will not be doing its job if it spends the entire budget on just one $100 conversion.
Instead, the algo will work to first acquire as many conversions as possible that cost just $1, and then, if budget remains, will gradually increase the price it’s willing to pay for a conversion until the budget is maxed out.
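Here's a toy sketch of that behavior (my own illustration, not Google's implementation): with a hard budget cap, a maximize-conversions objective naturally reaches for the cheapest conversions first.

```python
# Toy model of Maximize Conversions under a limited budget (illustrative only).

def maximize_conversions(available_costs, daily_budget):
    """Greedily buy the cheapest predicted conversions until the budget runs out."""
    bought, spent = [], 0.0
    for cost in sorted(available_costs):   # cheapest conversions first
        if spent + cost > daily_budget:
            break                          # next-cheapest conversion no longer fits
        bought.append(cost)
        spent += cost
    return bought, spent

# Hypothetical conversions on offer range from $1 to $100; daily budget is $100:
conversions, spent = maximize_conversions([100, 1, 5, 2, 40, 60, 3], daily_budget=100)
print(len(conversions), spent)  # buys the cheap conversions first; never the $100 one
```

With no budget cap, the same loop would happily keep going right up the cost curve, which is the "CPA skyrockets" scenario described above.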
This algorithm is nearly a clone of the Maximize Conversions algorithm. The primary difference is that tCPA will consider another variable, Cost-Per-Conversion, as part of its goal.
That is, this algo still works to achieve the greatest quantity of conversions, but will factor the output of Cost-Per-Conversion into its feedback loop.
The important distinction here is that CPA is only known after a conversion is acquired. The resulting output is then factored into further hypotheses and tests that the algorithm will run.
Think of it like this:
This is why your actual CPA will vary over time in these campaigns.
The number that you set as your Target CPA (or ROAS, when using tROAS) should not simply be your end goal for the campaign. On its own, this number only reflects how aggressively you are allowing the algorithm to bid.
For example, your end goal might be to consistently acquire conversions at $50, but if your campaign has been consistently earning conversions at $100, then you should not set your Target CPA at $50 from the onset.
The algorithm will not have strong enough hypotheses at that time and you are reducing the amount of positive feedback that would otherwise help confirm new hypotheses.
You should start with a reasonable goal and adjust over time.
Ultimately, the algorithm will be smart enough to purchase conversions that average out around your target (the window generally considered is about 14 days, though this may vary for accounts with a significant Time Lag to Conversion).
Consider the impact of a change in your Target CPA, in the sense of buying conversions that vary in price:
Compare this to if you had a higher, less constrained Target CPA:
Lesson #2: Your budget for a Target CPA campaign should be at least 10x your target.
This is just a rule of thumb. If you have a massive campaign with 10 or more ad groups, then you might need even more budget to allow the algorithm to work for you.
If you have a smaller campaign with lower-volume keywords, then you might be able to get away with 5x budget.
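Those rules of thumb can be summarized in a few lines. Note that the exact thresholds below, including the larger multiplier for very large campaigns, are my own framing of the guidance above:

```python
# Rule-of-thumb daily budget check for Target CPA campaigns.
# 10x target is the default; ~5x may work for small, low-volume campaigns;
# the 15x figure for 10+ ad groups is my own hypothetical stand-in for "even more."

def min_tcpa_budget(target_cpa, ad_groups):
    """Suggested minimum daily budget as a multiple of the Target CPA."""
    if ad_groups <= 2:
        multiplier = 5    # small, low-volume campaign
    elif ad_groups < 10:
        multiplier = 10   # the general rule of thumb
    else:
        multiplier = 15   # massive campaign: give the algorithm even more room
    return target_cpa * multiplier

print(min_tcpa_budget(target_cpa=50, ad_groups=4))  # -> 500
```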
Target Return on Ad Spend is more complex as it has to factor in the value that is earned from each conversion.
In this case, you are still buying conversions that come with varying costs, but every conversion comes with a different value (and it cannot be assumed that these two variables are positively correlated).
That is, a more expensive conversion does not necessarily guarantee greater conversion value.
In addition to predicted conversion rate, Target ROAS must factor in predicted conversion value.
There are many variables that will be considered when predicting conversion value. The search term or product in the shopping feed is obviously one factor, but there are behavioral signals tied to the user performing the search that are also considered.
Two users might perform the same search for a low priced product, but User A is looking to buy a single item and User B is likely to result in a bulk order. These are factors that should be (and are) considered.
In order to determine your ideal bid, you must multiply the rate at which you believe each user will convert by the value you predict that conversion will carry.
And I’m relieved that we have algorithms to do that for us in real time, for every search that is performed on Google.
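The math from the paragraph above, sketched out (my own illustration of the expected-value idea, not Google's formula): multiply predicted conversion rate by predicted conversion value, then divide by your Target ROAS to get the most you can afford to bid.

```python
# Expected-value bid sketch (illustrative only -- all numbers are hypothetical).

def troas_max_bid(p_convert, predicted_value, target_roas):
    """Most we can afford to bid per click while still hitting the ROAS target."""
    expected_value = p_convert * predicted_value  # predicted conv. rate x predicted conv. value
    return expected_value / target_roas

# Same search, two different users:
# User A is after a single low-priced item; User B looks like a bulk order.
print(troas_max_bid(p_convert=0.05, predicted_value=40, target_roas=4.0))   # User A
print(troas_max_bid(p_convert=0.04, predicted_value=600, target_roas=4.0))  # User B
```

Even with a lower conversion rate, User B justifies a far higher bid, which is why the same search term can clear very different CPCs under Target ROAS.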
The result is an algorithm that buys conversions for you at varying returns, which should average out to your target, with lower-returning (i.e. less profitable) conversions on the far right of the chart:
Raising the Target ROAS, and therefore forcing the algorithm to become more conservative, will have a similar effect to what we saw with Target CPA... fewer possible conversions:
Lesson #3: Target ROAS will only work well when you have a lot of conversion data.
If a campaign does not earn at least 20-30 conversions per day, you should either proceed with caution or use micro-conversions to add more conversion data into the system.
Most accounts that we manage use a mixture of all three strategies. There are many strategies that surround campaign structure, where various bid strategies can be implemented to maximize your goals. But that’s a can of worms that I’ll leave for a future post.
Below is an example of an account of ours that has heavily transitioned toward automation over the last few months. You'll see a lot of these concepts in action:
Takeaway #1 - Search campaign CPC has increased by an astounding 27%. This is not an issue, as Conversion Rate has increased by 41%. As long as your Conversion Rate increases at a rate greater than your CPC, you will be more profitable. This is illustrated by the 18% lift in ROAS (Conv. value / Cost).
Takeaway #2 - Search Impression Share has decreased. As I mentioned earlier, SIS is a relative metric and does not indicate the actual size of the market. Previously, this account had been entering lower-quality ad auctions and winning a larger percentage of impressions. That is no longer the case. We are now entering higher-quality auctions (as evidenced by our increased CPC and Conversion Rate).
Also, among the 65% of auctions that we enter and lose, fewer are being lost due to Ad Rank. This indicates that when a very high-quality search is performed, Google is ensuring that our bid is high enough to win an impression.
Takeaway #3 - Only one of the campaigns shown above is using the Maximize Conversions bid strategy. Note that this campaign is the only one showing a decreased CPC over time. This is a result of the limited-budget strategy outlined above: the campaign works hard to buy the lowest-cost conversions before exhausting its budget.
Takeaway #4 - I mean, is it really necessary to point this out? The client's goal was to return at 4.5x but had been returning at just 4.15x. Proper implementation of automation was one factor that helped bring this return to 4.85x.
Here's the realized value of these improvements:
- The client had a goal of 4.5x, meaning that they would have been willing to invest $626,689.59 in advertising in hopes to return $2,829,103.02 in revenue.
- Total revenue amassed $3,051,521.27, which resulted in $222,418.25 in profits that could be reinvested in new tests to scale and earn additional revenue.
This stuff works, you guys.