You’re running a Smart Shopping campaign, and feeling pretty good about it. Performance is amazing, so you’re willing to forgive Google for the lack of transparency, since these campaigns give you almost no data to work with.
But then, one day the campaign just starts to tank. Nothing significant changes in other campaigns, and you are left perplexed.
You hope it’s a fluke and performance will go back up in a day.
But it doesn’t.
You go into the campaign and change the ROAS goals, or maybe you change the budget. Or, maybe you even pause the campaign and start a new one.
I’ve tried all these desperate attempts to fix Smart Shopping campaigns, and the interesting thing is that they’ve all worked.
But why? It’s not enough to be comfortable pulling levers without understanding what is driving these algorithms.
We also want our campaign management to be proactive. Instead of waiting for performance to change and then randomly pulling levers in hopes that the system corrects itself, we need to find ways to anticipate these variables that might change performance, and proactively make changes to avoid negative results before they actually happen.
The curiosity to understand these variables, and the determination to use this knowledge to drive better campaign performance, sparked a massive retrospective study by our agency that considered Smart Shopping performance and trends across multiple accounts.
“If you’re not using automation, you’re losing out. Stop running away from it, change your old habits and embrace this new reality”
It seems like the theme of the last year or so has been to embrace change, particularly when it comes to Google’s automation.
And I am completely on board. Except when it comes to Smart Shopping: the campaigns that give you no real insight into how they actually work.
We’ve seen many accounts where a Smart Shopping campaign initially works well, but then goes through a rough patch. As digital marketers, it’s our job to step in and make adjustments to improve performance.
Smart Shopping campaigns offer very little actual data (no search terms, no placements, and so on), and changes come down to adjusting the budget or your target ROAS.
With only one or two options available to you to help fix a campaign, you have to make the right change. Should you raise or lower your target ROAS?
When making this decision, you are going to have to choose between volume and performance. With an unlimited budget, the algorithm is not restricted to seeking out only the cheapest conversions. Rather, it will go after more expensive conversions, so long as it thinks it can average out to your target ROAS over the length of your conversion window.
So, as Patrick explains in his post Understanding Google’s Automated Bidding Strategies, increasing the target ROAS will make the algorithm more conservative, and likely compromise volume for ROAS goals, while decreasing the target ROAS will have the opposite effect.
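To make that averaging behavior concrete, here is a minimal sketch with hypothetical numbers, showing how a mix of cheap and expensive conversions can still blend out to a 400% target ROAS:

```python
# Hypothetical conversions over one conversion window: (ad spend, revenue).
# Some individually fall short of a 400% target, but the blended ROAS
# is what the algorithm is steering toward.
conversions = [
    (10.0, 60.0),   # cheap conversion: 600% ROAS
    (25.0, 75.0),   # expensive conversion: 300% ROAS
    (15.0, 65.0),   # roughly 433% ROAS
]

total_spend = sum(spend for spend, _ in conversions)
total_revenue = sum(revenue for _, revenue in conversions)
blended_roas = total_revenue / total_spend  # revenue as a multiple of spend

print(f"Blended ROAS: {blended_roas:.0%}")  # prints "Blended ROAS: 400%"
```

A lower target ROAS widens the set of acceptable conversions in this blend, which is why it trades efficiency for volume.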
But, that aspect of campaign optimization is still reactive.
My question is this: Is Smart Shopping limited to reactive optimization, or is there some way to predict what will happen (with some degree of accuracy), and adjust accordingly before it does?
This is where understanding the algorithm comes into play.
I am not claiming to understand Google’s algorithm, not even a little bit. But we’ve run dozens of multivariate regression analyses to find statistically significant correlations in order to figure out what we can do besides sit and watch while the damage is done and clean it up afterward.
We’ve run analyses that combine data from multiple, similar accounts to support these trends, and we’ve run the same analyses inside individual accounts to determine specific optimization recommendations for our team.
For this article, I used data from Smart Shopping campaigns ranging from $600 to $8,000 in daily spend in an effort to understand these statistical relationships and establish how they play out in campaigns that vary in volume.
Earlier, I described how various changes to ROAS goals, budgets, and other campaign settings have helped correct negative Smart Shopping performance. While each of these reactions may seem different, there is one key commonality between them: they each force the campaign back into a “learning phase.”
During a learning phase, less weight is placed on historical performance, instead allowing the algorithm to test more random hypotheses and learn what works best.
Once a campaign exits the learning phase, it invests the majority of your budgets in areas where it is most confident that it can drive results, largely based on recent historic performance data.
In the event that an established campaign takes an unannounced nosedive, the dip in performance must have been related to some sort of data in the account which told the Smart Shopping algorithm to go in the wrong direction.
That is, the algorithm is optimizing with the wrong data.
Looking at the campaigns, there is usually a day that deviated from the average sometime before the performance starts to decline.
Take a campaign with a 400% target ROAS that usually gets 50 conversions per day at that target. If you run a sale and generate 100 conversions at 600% ROAS one day, it will significantly skew the campaign’s performance history.
The algorithm sees the numbers, thinks it did a great job, and starts to bid more aggressively. But once the sale stops running, conversion rate drops, followed by a decline in total conversions and ROAS.
After a few days of aggressive spend with lackluster results, the algorithm will tighten up and be more conservative, likely becoming too conservative (compromising volume), before it eventually levels out.
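The way a single standout day skews the recent history the algorithm sees can be sketched with a simple moving average (all numbers hypothetical, continuing the sale-day example above):

```python
# Daily ROAS for a campaign targeting 400%, where the last day is a sale
# day that spiked to 600%. Values are multiples (4.0 = 400% ROAS).
daily_roas = [4.0, 4.0, 4.0, 4.0, 6.0]

window = 5
recent_avg = sum(daily_roas[-window:]) / window
print(f"Recent average ROAS: {recent_avg:.0%}")  # prints "Recent average ROAS: 440%"

# The recent average now sits above the 400% target, so the algorithm
# reads the campaign as outperforming and bids more aggressively,
# even though the underlying (non-sale) conversion rate hasn't changed.
```

One outlier out of five days moved the average the algorithm optimizes against by a full 40 points, which is the distortion the learning-phase reset is meant to flush out.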
In order to avoid this scenario, you can adjust ROAS goals after a standout day and send the algorithm back into the learning phase.
In our example, you would lower the target ROAS.
By sending the campaign back into the learning phase, even for a short time, you are allowing the system to place less emphasis on recent performance data and more emphasis on new tests that will allow you to reach your ROAS goals.
But when should you change your ROAS goals?
The short answer is that it depends.
Not every campaign is created equally, and they definitely don’t go by the same rules. When running multivariate regression analyses to understand which metrics correlate with the number of impressions on a given day, we found no clear answer such as “Google looks at the last week of data to determine how many impressions to get.”
It really depends on the account, and more specifically the number of clicks you get per day in the campaign. Because the algorithm is constantly learning from the data that comes in, the more data you give it per day, the less time it takes for it to learn from it.
Looking at data from multiple accounts, I found that conversion rate and ROAS are associated with future net-change in impressions. Days with higher ROAS and conversion rates are often followed by days with more impressions.
That is, if you are exceeding your performance goals, the smart bidding algorithm will be more aggressive and go after more traffic.
This does not necessarily mean you will see a rise in CPC. Rather, since available inventory for Smart Shopping spans across all Google Ad channels (Shopping, YouTube, Display, Gmail, etc.), there is almost always an opportunity to increase the number of impressions so long as budget is available.
Therefore, all things equal, impressions can be used as a measuring stick to better understand Google’s level of confidence when it comes to driving results for your account.
So, if Google is confident that it can spend more and still drive performance, then you will see a net-increase in your Smart Shopping impressions.
However, time-sensitivity varies by account. Higher-volume accounts will see more emphasis placed on more recent performance data (in the hours leading up to a given auction). Lower-volume accounts will see that impact spread out over several days.
The more average daily clicks a campaign gets, the less time is needed to gather enough data to make decisions. Taking this one step further, if there is a significant change one day, the “outlier” data will impact a longer time frame for an account with less data than for an account with more data.
Based on the shopping campaigns I analyzed, the metrics which are correlated with the number of future impressions are conversion rate and ROAS. Days with higher conversion rates and ROAS are often followed by days with a net-increase in impressions, and days with lower conversion rate and ROAS are often followed by days with fewer impressions.
While I have been referring to days, I am really referring to general time periods, because when looking at larger campaigns with over 5,000 clicks per day, hourly data becomes much more important. For these campaigns, the combined conversion rate and ROAS from the previous 12 hours have a strong correlation with the number of hourly impressions for the 13th hour.
Conversely, as seen in the following graph, in campaigns with half the number of daily clicks, hourly conversion rate and ROAS have very little correlation with a net-change in impressions in the following hours.
Going back to the high volume campaigns, the correlation between conversion data (conversion rate and ROAS) and its impact on future impressions declines over time.
That is, for high-volume campaigns, the algorithm is more likely to place emphasis on conversion data that was acquired in the last few hours, and is less likely to consider conversion data that was acquired a week prior.
This is summarized in the chart below:
On the other hand, for lower-volume campaigns, the correlation between conversion data (conversion rate and ROAS) and future impressions increases over time, meaning that Google looks at a longer time frame of conversion data when deciding if it should show an impression.
That is, for lower-volume campaigns, the algorithm is less likely to consider conversion data that was acquired within the last hour, and is more likely to use the full scope of conversion data that was acquired over the last 30-45 days.
This is summarized in the chart below:
There are two components that Google uses to decide if you will show an impression at an auction.
The first is using account-based data. This includes the historical data in the account, such as conversions, conversion rate, CTR, and ROAS.
The second is user-based data, which refers to the millions of data points Google has on each of us: the time of day the user is searching, their search history, where they are, and so on.
The two of these together help Google determine the expected conversion rate, and ultimately if the bidding algorithm should show an ad in this auction.
Our analysis is focused on the account-based data, which is the part of the equation that we can control.
The data tells us that the effect of an exceptional day (in either direction) will vary depending on the volume in the campaign. Higher volume campaigns will see a more immediate effect, while lower volume campaigns may take a little longer to be impacted.
Going back to our example, if the campaign gets a larger number of clicks per day (over 5,000), I would decrease the target ROAS right away, and then bring it back up within a few days. But, if the campaign usually gets fewer clicks, I would wait a few days to lower the target ROAS, and leave it at the lower target for a while before bringing it up again.
The right approach really depends on the amount of data in each campaign, and while you may not know the exact time to change the target ROAS, having an understanding of how Smart Shopping campaigns work can help diagnose problems, and more importantly, prevent them.
With Smart Shopping, there aren’t many possible optimizations to make, so doing the right thing is that much more important.