
A Guide To Understanding Google's Automated Bidding Strategies

March 17, 2019
|
AdWords Tips

In Join or Die, It’s Time To Embrace Google Automation, I outlined the case for trusting Google machine learning and the overarching benefits that are unlocked with new technology. But unfortunately it’s not as simple as clicking a button, switching to automation, and retiring to Costa Rica.

We all need to learn about this technology and how to make it work in our favor. It's a complicated topic that, when used incorrectly, could be disastrous.

We recently inherited an account that had relied on manual bidding. As we went through the account together and found issues with campaign structure and conversion tracking, a colleague of mine observed that they were fortunate to not have been relying on any automation, as the machine would have been optimizing against their goals.

It was humbling to consider the dangerous implications of using this technology incorrectly, either through poor data integrity or negligence. This post is an attempt to help limit the latter. 

Much of this information is not publicly available and is a result of countless conversations I've had with Google's product team over the last 18 months or so (I also passed this article along to them to ensure its validity before publishing).

All of this information has been confirmed by our own experience managing smart-bidding campaigns. We've tested all of these strategies at great length, and you'll find an example case study toward the bottom of the article.

We have celebrated wins and learned from losses along the way... our approach to automation is algorithmic in its own regard. We continue to run countless tests and confirm or throw out hypotheses of our own, increasing the probability of particular outcomes given existing datasets.

I hope that our learnings serve as a meaningful Bayesian inference on your own educational journey. That is, I hope that this information serves as a jump off point for you to confidently run tests and develop new hypotheses of your own about how to properly use this technology.

Those who are reluctant to embrace automation often say that it either doesn't work or it won't work for their specific account. I disagree. It's likely that they experimented and failed because they did not put in the time and effort to learn how to make it work.

So in the end, if you still choose to be a Luddite, at least be an informed Luddite.

The Basics of Google's Automated Bidding Strategies

Here’s a summary of what you should understand before experimenting with automated bidding strategies:

  1. A Manual CPC strategy is completely reliant on the context of a search term, whereas automation leverages other behavioral signals that are a more effective means of anticipating user behavior. Short-tail or ambiguous keywords become much more profitable when you are able to layer behavioral signals on top of the User's search query.

  2. Manual CPC assumes that all users, given a specific search term, are created equal. If there are 1,000 people searching for Gaming Laptop, it is foolish to assume that each one of those users deserves the same bid.
  3. Manual bid adjustment strategies like Day Parting become less relevant when using automation. Bid adjustments based on time of day were useful when you could find statistically significant trends in conversion rate, but all Users that perform searches at 2 pm on Saturdays are not created equal.

    Google now has the ability to hone in on the specific User at the moment that they are performing the search, and consider variables that you could not dream of.

    Google would know if Users that recently browsed a competitor's catalog are more likely to convert on your site. They'd also know if Users are less likely to download your whitepaper while traveling on mass transit. Both of these searches might be taking place simultaneously, and I'm willing to trust Google to determine which price I should bid for each.

  4. In the past, you may have optimized your campaigns with the goal of improving vanity metrics like Actual Cost-Per-Click, Search Impression Share, Average Position or Click-Through Rate. These metrics carried more significance in a time where we had less access to quality conversion data and machine learning.

    You can now focus your attention on the most important metrics: Conversions and ROAS.

    CPC, for example, mattered a great deal when you had to make vast generalizations about the quality of Users that could be acquired through a given keyword. However, automation now allows you to single-out individual Users and consider the micro-moment in which they are searching.

    Generalizations, and generalized metrics like Average CPC, should not be used to determine failures or successes. Rather, profitability should guide these assessments.
  5. Your Actual Cost-Per-Click will increase, but that's a result of bidding on more premium traffic. You should be OK with this.
  6. Quality Score is not an indicator of success or a factor in what you are paying for each ad click (Quality Score is no longer used to determine your Ad Rank and is simply a diagnostic tool).

    Quality Score and Ad Rank are correlated, but the former has no direct impact on the latter. A complex set of variables including expected CTR, landing page experience, relevance, behavioral signals, expected impact and predicted user conversion rate for an individual advertiser are evaluated in real time to determine Ad Rank.

    A low QS is similar to a check engine light in a car. You wouldn't bring your car to a mechanic and say "Fix my check engine light." Rather, you would say, "Find out why the check engine light went on and fix that." 

    There are also times when the check engine light goes on by mistake. Similarly, there are times when Google's aggregated quality estimates are incorrect.

    Put simply, you should not pause a keyword due to low Quality Score. You should make these evaluations strictly on bottom-line performance.

    For more info on this, I recommend checking out my article on why we should stop talking about Quality Score.
  7. Search Impression Share should not be used to predict market size or your potential to scale. It is a relative metric that indicates the percentage of impressions you earned compared to auctions that you entered.

    You do not enter every auction for keywords that you bid on, even if you have an unlimited budget. Many factors, including Ad Rank Thresholds, will keep you out of many auctions.

    And without properly leveraging automation, you might be entering into the wrong auctions altogether. This is a theme we will revisit throughout the article.
  8. Conversion rate is a metric that should garner a lot of your attention.
  9. You should be coming up with creative ways to add more conversion data into your campaigns to help guide Google’s algorithms.
  10. SKAGs are useless. There was a brief period of time several years ago when this was an effective strategy, and it made for an interesting blog post that everyone wanted to write. If your goal is to create a massive account that looks impressive to your client, then go ahead and build all the SKAGs you want. But you are likely spreading yourself too thin.

    Yes, there is a time and place for an occasional single-keyword-ad-group, such as specific Exact Match keywords that you need to separate out from the crowd. But “SKAG” as a whole is nothing more than an outdated strategy with a neat acronym.
  11. Enhanced CPC is not a recommended bid strategy in any scenario (this has come directly from conversations with Google). Enhanced CPC is a hybrid strategy where you will enter 50% of auctions with a manual bid and 50% of auctions with a smart bid.

    The algorithm will never be able to consistently confirm hypotheses if it only has access to 50% of the data.

    It’s likely that Google will sunset this feature very soon. My guess as to why this has not happened already is because it’s a baby step that many advertisers take on their path to detoxing from manual bidding and embracing automation.
  12. We are often asked which types of technology we layer on top of Google Ads to help optimize campaigns (bid management software, for example). The answer is none.

    Aside from sounding impressive in client pitches, what purpose does bid management software serve? Why would you trust the capabilities of xyz program over Google, when the latter has access to trillions of data points that no other company has? 

    Learn to work within the platform, folks.
  13. If your competitors are leveraging automation and you are not, you will be left to fend for the low quality scraps of traffic that will remain after your competitors scoop up the Users with most conversion-intent. This is especially true if you are not OK with the concept of your CPC rising and still feel that a low CPC is preferable.

    This is a seriously important takeaway.
    More than 50% of Google ad revenue is currently coming from automated bidding, so there’s a greater than 50% chance that your Manual CPC campaigns are currently facing this issue.

    This concept is illustrated in the following graphic. The higher quality traffic will come at a premium cost, and if your competitors are willing to pay that premium and you are not, you will be left to fend for the traffic below the dotted line...

CPC Demand Curve



Automation: Algorithms vs. AdWords Scripts

Many digital advertisers have carelessly used the phrase automation to describe both algorithms and statistical models. This is an incomplete and incorrect viewpoint.

AdWords scripts are not algorithms; they are statistical models. Scripts might automate tasks, but you should not think of them as true automation. This label should be reserved for advanced technology that provides more value than a time-saving script.

What separates an algorithm from a statistical model is the ability to learn, without increased human input, and improve over time to reach a goal.
It’s a living, breathing machine that is constantly asking new questions and looking to confirm various hypotheses on an ongoing basis… increasing its own intelligence along the way.

Compare a bidding algorithm to, say, the widely used (and now irrelevant) Bid to Position script. The script is a model that will adjust manual bids up or down with the goal of reaching a desired average ad position.

Think about that for a moment...

 ...with the goal of reaching a desired average ad position.

That's not a real goal, but more on that later.

The implementation of this script is incredibly manual. The advertiser must determine what the desired average position is for each keyword, and will likely have to manually change that desired position over time.

The script never gets smarter. It runs once an hour and asks a series of questions:

  1. Does actual average position match desired average position? If yes, do nothing. If no, go to question #2.

  2. Is the average position number lower than the desired position (i.e., is the ad showing higher on the page than intended)? If yes, reduce the manual bid. If no, increase the manual bid.

That’s it. The script will not learn from mistakes or victories and it will do nothing to help you improve performance over time unless a human being manually makes changes to its inputs.
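The hourly logic above can be sketched in a few lines. This is an illustrative Python reconstruction (real AdWords Scripts are written in JavaScript against the Google Ads scripting environment); the 5% step size and the `adjust_bid` helper are hypothetical:

```python
# A minimal sketch of the Bid to Position logic described above.
# Lower position numbers mean higher placement on the page.

def adjust_bid(current_bid, actual_position, desired_position, step=0.05):
    """Nudge a manual CPC bid toward a desired average ad position."""
    if actual_position == desired_position:
        return current_bid  # question 1: nothing to do
    if actual_position < desired_position:
        # Ad is more prominent than desired: bid down.
        return round(current_bid * (1 - step), 2)
    # Ad is less prominent than desired: bid up.
    return round(current_bid * (1 + step), 2)

# Run once an hour; the rules never change unless a human edits them.
print(adjust_bid(1.00, actual_position=1.4, desired_position=3.0))  # 0.95
print(adjust_bid(1.00, actual_position=4.2, desired_position=3.0))  # 1.05
```

Note that nothing in this loop ever references conversions or revenue; the only feedback signal is position itself, which is exactly the flaw the article describes.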

Also, it’s worth noting that average position is a vanity metric that only serves to help understand real performance metrics. It is unhelpful and dangerous if one of your goals is to reach a desired average position.

Despite what some may think, an average position of 1 will not actually put money in the bank.

A separate flaw in this model is that you might be achieving an average position of 1, but only on the second page of the search results. In this case, your bids are much too low, but the script is not intelligent enough (and does not have the necessary feedback loop) to correct itself.

These are some of the reasons why Google is sunsetting the average position metric.

Real automation accomplishes two things that scripts do not:

  1. It optimizes for your actual goals (total conversions, return on ad spend, etc.).

  2. It gets better over time, learning things that are incomprehensible to human advertisers, using data that non-Googlers would never have access to.

(I also just want to say that I will always love the bid to position script because it saved me a ton of time over the years and provided a ton of value to our agency. But we now have more advanced means of reaching our goals.)

Training an algorithm is like training a puppy. Algorithms respond to positive and negative feedback and care deeply about being rewarded by their owner.

The only difference is that a puppy is rewarded with a treat, whereas the algorithm is rewarded with more budget.


The Maximize Conversions Bidding Algorithm

Predicted Conversion Rate is the main variable that drives all Google bidding algorithms. The differences lie in the end goal and the unintended outputs.

Maximize Conversions is the most basic of the Google Smart Bidding algorithms. As its name suggests, this algorithm works to obtain the largest quantity of conversions, and essentially ignores all other outputs (Cost-Per-Click, Cost-Per-Conversion, ROAS, etc.).

It’s useful to imagine conversions as something that you need to purchase. All conversions are not created equal, and some conversions are more expensive than others.

By selecting Maximize Conversions as your bidding strategy, you are allowing Google to buy you all kinds of conversions, regardless of their cost.

The conversions you buy will fall along the following line:

[Chart: Maximize Conversions smart bidding, conversions plotted by cost]


With an unlimited budget, you will continue to purchase more and more expensive conversions and acquire as many conversions as possible in your given market.

Average CPC and CPA will skyrocket, but you’ll capture the greatest amount of conversions and become a market leader.

While certainly appealing, this is not an economically viable scenario for most advertisers.

Lesson #1: Maximize Conversions is best used in campaigns with a limited budget.

If you limit the amount of budget that can be spent in a given day, it forces the algorithm to focus on the least expensive conversions, due to the end goal of acquiring the largest quantity of conversions.

Say for example that your potential conversions will range in cost from $1 to $100 per conversion. If your daily budget is $100, the algorithm will not be doing its job if it spends the entire budget on just one $100 conversion.

Instead, the algo will work to first acquire as many conversions as possible that cost just $1, and then, if budget remains, will gradually increase the price it's willing to pay for a conversion until the budget is maxed out.
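That cheapest-first behavior can be illustrated with a toy model. The price list and the `maximize_conversions` helper below are invented for the example; the real algorithm bids in live auctions rather than choosing from a known menu of conversions:

```python
# Toy illustration of Maximize Conversions under a limited budget:
# buy the cheapest available conversions first until the budget runs out.

def maximize_conversions(conversion_costs, daily_budget):
    """Greedily buy the cheapest conversions first; return (bought, spent)."""
    bought, spent = [], 0.0
    for cost in sorted(conversion_costs):
        if spent + cost > daily_budget:
            break  # budget exhausted
        bought.append(cost)
        spent += cost
    return bought, spent

costs = [1, 1, 2, 5, 10, 40, 100]  # hypothetical market of available conversions
bought, spent = maximize_conversions(costs, daily_budget=100)
print(len(bought), spent)  # 6 59.0 -- six cheap conversions, not one $100 conversion
```

With the same $100 budget, buying the single $100 conversion would yield one conversion; buying cheapest-first yields six, which is exactly the objective the algorithm is judged on.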


The Target CPA Bidding Algorithm

This algorithm is nearly a clone of the Maximize Conversions algorithm. The primary difference is that tCPA will consider another variable, Cost-Per-Conversion, as part of its goal.

That is, this algo still works to achieve the greatest quantity of conversions, but will factor the output of Cost-Per-Conversion into its feedback loop.

The important distinction here is that CPA only comes after a conversion is acquired. The resulting output is then factored into further hypotheses and tests that the algorithm will run.

Think of it like this:

  1. User A is performing a search for your product. The Target CPA bidding algorithm uses predicted conversion rate and other variables to bid on, and eventually purchase a conversion from this search. The cost for this conversion was X.

    (This step is the same for Maximize Conversions. The following steps are where the algorithm becomes more advanced.)

  2. User B is now performing a search for your product. The tCPA algo uses predicted conversion rate and other variables, which now include the X cost of acquiring the User A conversion, and a hypothesis about similarities between User A and User B, to bid on and purchase a conversion from this search. The cost for this conversion is Y.

    The similarity hypothesis between Users is either confirmed or thrown out when this conversion takes place. Either way, the algorithm is now smarter than it was before.

  3. When User C performs a search, the algorithm is now able to tie similarities between User A, User B, and User C. With more data, the algo is able to come up with more creative hypotheses and will likely confirm a larger share of those hypotheses over time.
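The three steps above can be caricatured as a feedback loop. This sketch collapses Google's many behavioral signals into a single running conversion-rate estimate; the `TargetCPABidder` class, the 5% prior, and the prior weight are all invented for illustration:

```python
# A highly simplified sketch of the tCPA feedback loop in steps 1-3.
# "Similarity between users" is reduced here to one shared CvR estimate.

class TargetCPABidder:
    def __init__(self, target_cpa, prior_cvr=0.05, prior_weight=20):
        self.target_cpa = target_cpa
        self.cvr_estimate = prior_cvr     # current hypothesis about these users
        self.observations = prior_weight  # how strongly we trust the prior

    def bid(self):
        # Max CPC consistent with the target: pay up to CvR x target per click.
        return self.cvr_estimate * self.target_cpa

    def observe(self, converted):
        # Fold each outcome (1 = converted, 0 = did not) into the running
        # estimate, so the next bid reflects everything seen so far.
        self.observations += 1
        self.cvr_estimate += (converted - self.cvr_estimate) / self.observations

bidder = TargetCPABidder(target_cpa=50.0)
print(round(bidder.bid(), 2))  # 2.5 (5% prior CvR x $50 target)
bidder.observe(converted=1)    # User A converts, cost X is observed...
bidder.observe(converted=0)    # ...User B does not
print(round(bidder.bid(), 2))  # 4.55 -- the bid already reflects both outcomes
```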


This is why your actual CPA will vary over time in these campaigns.

The number that you set as your Target CPA (or ROAS, when using tROAS) should not simply be your end goal for the campaign. The number itself is arbitrary; it only reflects how aggressively you are allowing the algorithm to bid.

For example, your end goal might be to consistently acquire conversions at $50, but if your campaign has been consistently earning conversions at $100, then you should not set your Target CPA at $50 from the outset.

The algorithm will not have strong enough hypotheses at that time and you are reducing the amount of positive feedback that would otherwise help confirm new hypotheses.

You should start with a reasonable goal and adjust over time.
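One hypothetical way to operationalize "start reasonable and adjust": step the target down by a modest percentage at a time, waiting for performance to stabilize between steps. The 15% step size and the `target_schedule` helper are my own illustration, not an official Google recommendation:

```python
# Illustrative schedule for walking a Target CPA from the observed $100
# toward a $50 end goal, rather than jumping straight to $50.

def target_schedule(current_cpa, goal_cpa, step_pct=0.15):
    """Return a sequence of progressively lower Target CPA values."""
    targets = []
    target = current_cpa
    while target > goal_cpa:
        # Never step below the end goal.
        target = max(goal_cpa, round(target * (1 - step_pct), 2))
        targets.append(target)
    return targets

print(target_schedule(100.0, 50.0))
# [85.0, 72.25, 61.41, 52.2, 50.0] -- lower the target only after
# performance has stabilized at each step
```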

Ultimately, the algorithm will be smart enough to purchase conversions that will average out around your target (the general time that is considered is about 14 days, but this may vary for accounts that have significant Time Lag to Conversion).

Consider the impact of a change in your Target CPA, in the sense of buying conversions that vary in price:


[Chart: Target CPA smart bidding with a constrained target]


Compare this to if you had a higher, less constrained Target CPA:


[Chart: Target CPA smart bidding with a higher, less constrained target]




Lesson #2: Your budget for a Target CPA campaign should be at least 10x your target.

This is just a rule of thumb. If you have a massive campaign with 10 or more ad groups, then you might need even more budget to allow the algorithm to work for you.

If you have a smaller campaign with lower-volume keywords, then you might be able to get away with 5x budget.



The Target ROAS Bidding Algorithm


Target Return on Ad Spend is more complex as it has to factor in the value that is earned from each conversion.

In this case, you are still buying conversions that come with varying costs, but every conversion comes with a different value (and it cannot be assumed that these two variables are positively correlated).

That is, a more expensive conversion does not necessarily guarantee greater conversion value.

In addition to predicted conversion rate, Target ROAS must factor in predicted conversion value.

There are many variables that will be considered when predicting conversion value. The search term or product in the shopping feed is obviously one factor, but there are behavioral signals tied to the user performing the search that are also considered.

Two users might perform the same search for a low priced product, but User A is looking to buy a single item and User B is likely to result in a bulk order. These are factors that should be (and are) considered.

To determine your ideal bid, you must multiply the rate at which you believe each user will convert by the value at which you predict they will convert.

And I’m relieved that we have algorithms to do that for us in real time, for every search that is performed on Google.
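That multiplication can be written out directly. The formula below (expected value per click divided by the target return) is a simplification of what the algorithm does in real time, and both user profiles are invented for the example:

```python
# Sketch of the tROAS bid logic: the value of a click is predicted
# conversion rate times predicted conversion value, and the target ROAS
# caps how much of that value you are willing to pay for the click.

def troas_max_cpc(p_convert, predicted_value, target_roas):
    """Highest CPC that still hits the target return on ad spend."""
    expected_value_per_click = p_convert * predicted_value
    return expected_value_per_click / target_roas

# Same search term, very different users:
single_item = troas_max_cpc(p_convert=0.04, predicted_value=25.0, target_roas=4.0)
bulk_buyer = troas_max_cpc(p_convert=0.03, predicted_value=900.0, target_roas=4.0)
print(round(single_item, 2))  # 0.25
print(round(bulk_buyer, 2))   # 6.75
```

The bulk buyer is slightly *less* likely to convert, yet warrants a bid roughly 27x higher, which is why a flat manual bid for the keyword cannot serve both searches well.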

The result is an algorithm that buys conversions for you yielding varied returns that should average out to your target, with lower-returning (i.e., less profitable) conversions on the far right of the chart:


[Chart: Target ROAS smart bidding]

Raising the Target ROAS, and therefore forcing the algorithm to become more conservative, will have a similar effect to what we saw with Target CPA... fewer possible conversions:


[Chart: Target ROAS smart bidding with a raised target]

Lesson #3: Target ROAS will only work well when you have a lot of conversion data.

If a campaign does not earn at least 20-30 conversions per day, you should either proceed with caution or use micro-conversions to add more conversion data into the system.


Final Takeaways - An Automation Case Study

Most accounts that we manage use a mixture of all three strategies. There are also decisions around campaign structure, where different bid strategies can be combined to maximize your goals. But that’s a can of worms that I’ll leave for a future post.

Below is an example of an account of ours that has heavily transitioned toward automation over the last few months. You'll see a lot of these concepts in action:

[Table: automated bid strategy case study results]

Takeaway #1 - Search campaign CPC has increased by an astounding 27%. This is not an issue, as Conversion Rate has increased by 41%. As long as your Conversion Rate increases at a rate greater than your CPC, you will be more profitable. This is illustrated by the 18% lift in ROAS (Conv. value / Cost).
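The arithmetic behind Takeaway #1 is worth spelling out: since CPA = CPC / Conversion Rate, the two percentage changes combine multiplicatively:

```python
# Checking the claim in Takeaway #1: if conversion rate rises faster
# than CPC, the cost per conversion falls.

cpc_change = 1.27  # CPC up 27%
cvr_change = 1.41  # conversion rate up 41%

# CPA = CPC / CvR, so the relative change in CPA is the ratio of the two.
cpa_change = cpc_change / cvr_change
print(f"CPA changed by {(cpa_change - 1):.1%}")  # roughly -9.9%
```

In other words, despite paying 27% more per click, each conversion got about 10% cheaper.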

Takeaway #2 - Search Impression Share has decreased. As I mentioned earlier, SIS is a relative metric and does not indicate the actual size of the market. Previously, this account had been entering lower-quality ad auctions and winning a larger percentage of impressions. That is no longer the case. We are now entering higher-quality auctions (as evidenced by our increased CPC and Conversion Rate).

Also, of the 65% of auctions that we are entering and losing, we are losing fewer of them due to Ad Rank. This indicates that when a very high-quality search is performed, Google is ensuring that our bid is high enough to win an impression.

Takeaway #3 - Only one of the campaigns shown above is using the Maximize Conversions bid strategy. Note that this campaign is the only example with a decreased CPC over time. This is a result of the limited-budget strategy outlined above, where the campaign works hard to buy the lowest-cost conversions before exhausting its budget.

Takeaway #4 - I mean, is it really necessary to point this out? The client's goal was to return at 4.5x but had been returning at just 4.15x. Proper implementation of automation was one factor that helped bring this return to 4.85x.

Here's the realized value of these improvements:

- The client had a goal of 4.5x, meaning that they would have been willing to invest $626,689.59 in advertising in the hope of returning $2,829,103.02 in revenue.

- Total revenue amassed $3,051,521.27, which resulted in $222,418.25 in profits that could be reinvested in new tests to scale and earn additional revenue.

This stuff works, you guys.
