With a plethora of digital media channels at our disposal and new ones on the way every day(!), how do you prioritize your efforts?
How do you figure out which channels to invest in more and which to kill?
How do you figure out if you are spending more money reaching the exact same current or prospective customers multiple times?
How do you get over the frustration of having done attribution modeling and realizing that it is not even remotely the solution to your challenge of using multiple media channels?
Oh, and the killer question… if you invest in multiple channels, how much incrementality does each channel bring to your bottom-line?
Smart Marketers ask themselves these questions very frequently. Primarily because we don't live in a "let's buy prime time television ads on all three channels and reach 98% of the audience" world any more.
We have to do Search Engine Optimization. We have to do Email Marketing. We have to do Paid Search. We have to have a robust Affiliate network. We have to do Social Media. We have to do location-based advertising to Foursquare people. We can't forget Mobile advertising. We have to… the list is almost endless. Oh, and in case you had not noticed… the real world is still there. TV and radio and print and… Oh my!
The reality is that we can't do all of those things.
Smart Marketers work hard to ensure that their digital marketing and advertising efforts are focused on the most impactful portfolio of channels. Maybe it is Search, Email and Facebook. Maybe it is Affiliate and Paid Search. Maybe TV and Twitter and Newspapers. Maybe it is five other things.
How does one figure it out?
What's that? This: You understand all the environmental variables currently in play, you carefully choose two or more groups of "like-type" subjects, you expose each group to a different mix of media, measure the differences in outcomes, prove / disprove your hypothesis (DO FACEBOOK NOW!!!), and ask for a raise.
It is that simple.
Okay, it is not simple.
You need people with deep skills in Scientific Method, Design of Experiments, and Statistical Analysis. You need the support of the top and bottom and middle of your organization (and your agency!). You need to understand all the environmental variables in play (a hard thing under any scenario) not just in context of your company but also as they relate to your competitors and ecosystem.
But if you have access to some or all of that (or can hire good external consultants), then your rewards will be very close to entering heaven. Marketing heaven that is.
To make the case for controlled experiments I want to share with you one simple, real world example I was involved with.
My explicit agenda is to spark an understanding of the value of even simple controlled experiments (that might need only some of the horsepower mentioned above).
My secret agenda is to illuminate the power of this delightful methodology via a simple example, and get you to invest in what's needed to move from good to magnificent.
This is a multi-channel example. The company truly has a portfolio strategy when it comes to marketing. We are going to simplify the example to make it seem like they only do two things. They mail catalogs and they send emails. The purpose of both is also simple: Get people to convert online (website) or offline (call center).
The question was, should they do both? Should they love one more than the other? Is this digital thing here to stay or is the thing that has worked so well for 150 years – catalogs – the thing that still works ("the internet is a fad!")? What is the incremental value of doing email?
To answer this question the company took their customer lists (catalog and email) and identified like-type customers. Like-type as in customers that share certain common attributes. For your business, that could be people who have been customers for 5 years (or 5 months) or those that order only women's underwear or those that live in states that start with W or those that order more than 10 times a year or only men or people who were born on Jupiter or… (this is where design of experiments comes in handy :).
Then they isolated regions of the country (by city, zip, state, or DMA – pick your fave) into test and control regions.
People in the test regions participate in our hypothesis testing. For people in the control regions, nothing changes.
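The test/control split described above can be sketched in a few lines. This is a minimal illustration, not the company's actual process; the region names and the 50/50 split are hypothetical.

```python
# Minimal sketch of a test/control region split. Region names and the
# 50/50 split are hypothetical, purely for illustration.
import random

random.seed(42)  # fix the seed so the assignment is reproducible

regions = ["Seattle", "Denver", "Austin", "Boston", "Portland", "Atlanta"]
random.shuffle(regions)
test_regions = set(regions[: len(regions) // 2])  # half test, half control

def assign(customer_region):
    """Test regions participate in the hypothesis test; control is untouched."""
    return "test" if customer_region in test_regions else "control"

for r in sorted(regions):
    print(r, "->", assign(r))
```

In a real program you would assign whole regions (not individual customers) to avoid cross-contamination between groups, which is exactly what the company above did.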
It is also important to point out that I am keeping the data simple purely to keep communication of the story straightforward. We'll measure Revenue, Profit (the money we make less cost of goods sold), Expense (cost of campaign), Net (bottom-line impact).
What is missing in these numbers is the cost of… well, you. The people. A little army in your company runs the TV campaigns. A larger army is the catalog sending machine. A lone intern is your email campaign people cost. A team of five are your paid search samurais. When you do this, if you can, include that expense as well.
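To make the metric definitions concrete, here is a tiny sketch of the Net calculation. Only the $12 revenue and $2.59 net match the control group reported below; the split between cost of goods sold and campaign expense, and the optional people cost, are made up for illustration.

```python
# Sketch of the Net (bottom-line) metric defined above. Only the $12 revenue
# and $2.59 net match the control group in this post; the COGS /
# campaign-expense split and the optional people cost are hypothetical.

def net_impact(revenue, cogs, campaign_expense, people_expense=0.0):
    """Net = revenue minus cost of goods sold, campaign cost, and people cost."""
    return revenue - cogs - campaign_expense - people_expense

control_net = net_impact(revenue=12.00, cogs=8.41, campaign_expense=1.00)
print(f"Control net impact: ${control_net:.2f}")  # $2.59
```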
Enough talk, let's play ball!
The Experiment and Results.
Here's the outcomes data for the control version of the experiment. This group of customers was sent both the catalog and the email. Nothing was changed for them – this group was treated exactly as they were in the past.
If the company did both things, revenue was $12.
Because revenue is very often a misleading way to understand impact on the company's bottom-line, most smart people prefer to go for net impact (the result of taking out cost of goods, campaign expenses etc.).
In this case, that amounted to a bottom-line impact of $2.59.
[If you want to learn how a focus on the bottom-line, especially net profit can change your life, and I mean that literally, please see this video: Agile, Outcomes Driven, Digital Advertising. Parts two and three, Rockin' Teen and Adult (Ninja!).]
Here's the data for variation #1 of the experiment… this group of like-type of customers were only sent the catalog – no email. The marketing messaging and timing and all other signals for relevancy and offers used for this group was exactly the same as the control group.
Compared to the control group, revenue went from $12 to $10. Company expense went down a little bit (email campaigns after all are not free).
The net impact went down from $2.59 to $2.00.
A 17% reduction in revenue, and a 23% negative net impact to the bottom-line.
Does that help you understand the incrementality delivered by the campaign that is missing in this variation of the experiment (email in this case)? You betcha!
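The percentages above are simply the relative change against the control group. A minimal sketch of that arithmetic, using the $12/$10 revenue and $2.59/$2.00 net figures from this variation:

```python
# Sketch of the incrementality arithmetic for variation #1 (catalog only),
# using the revenue and net figures reported above.

def pct_change(control, variation):
    """Percentage change of a variation relative to its control group."""
    return (variation - control) / control * 100

revenue_delta = pct_change(control=12.00, variation=10.00)
net_delta = pct_change(control=2.59, variation=2.00)

print(f"Revenue impact: {revenue_delta:.0f}%")  # -17%
print(f"Net impact:     {net_delta:.0f}%")      # -23%
```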
No politics. No VP of Email vs VP of Catalog egos and opinions involved. No you are trying to mess with my budget spit on your face. No but that is not what Guru x at a conference said or but that is not what people on Twitter think. None of that. Just data.
How sweet is that?
Here are the results of variation #2… this group just got the email. The killing of trees, filling of recycle bins, and breaking the backs of postal carriers was paused. :)
Again, and I can't stress this enough, all else was equal.
Compared to our control group there was a whopping 29% reduction in revenue. OMG!
But, a bigger OMG is coming: the net impact on the bottom-line of the company was a measly 2%! OMG!!
So the incremental value delivered by adding the catalog campaign on top of the email campaign is an increase of 2% on the bottom-line of this company.
Not for every company on the planet. Not even for all campaigns you do. But for this campaign and these types of customers you can confidently say: "Yes there was a drop in revenue and if you care about that, oh beloved HiPPO, then let's send more catalogs. But at least now you know the net incrementality delivered to our bottom-line from doing that."
If your HiPPO is smart, and in my experience many HiPPOs are smart and well-meaning, she will ask you this: "Is that 2% ($0.05) sufficient to cover the salaries, pensions, health benefits of everyone we employ to do catalog marketing?"
Controlled experiments also allow us (Analysis Ninjas) to do some subversive work. A question that came to my mind was: What is the incrementality of doing any marketing at all? What would happen if we do nothing, and we retire all our marketing people? Would the company go under?
Now it is rare that questions like those get asked. But it is too tempting not to use this methodology to get a sense for what the answers might be.
So for variation #3, no catalogs or email were sent to the customers in the test group. Here are the results…
It turns out if you completely stop marketing, and you are an established company, the impact is not that your revenue goes to zero! :)
Revenue in this variation went down 58% (pretty big). The impact on net to the bottom-line was a reduction of 42%. Both not great, but not zero.
So now you have some sense of what is the incrementality of all the people in marketing (their salaries, pensions, expenses etc.), and what you have to compute is if it is less than or greater than $1.09 (the loss in net impact).
Talking just a smidgen more seriously, eliminating catalogs and emails (and all marketing) might not make the company bankrupt immediately. But that is simply an outcome within the confines of this experiment. And it is easy to imagine how the impact might just get worse over time. The nice thing is that you can also test that!
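Pulling the three variations together, here is a hedged summary of the whole experiment. The control revenue and net are from the post; the variation dollar amounts are reconstructed from the reported percentage changes, so treat them as approximations.

```python
# Hedged summary of all four experiment cells. Control figures are from the
# post; variation dollar amounts are reconstructed from the reported
# percentage changes, so treat them as approximate.

control = {"revenue": 12.00, "net": 2.59}
variations = {
    "catalog only (v1)": {"revenue": 10.00, "net": 2.00},
    "email only (v2)":   {"revenue": 8.52,  "net": 2.54},  # -29% / -2%
    "no marketing (v3)": {"revenue": 5.04,  "net": 1.50},  # -58% / -42%
}

for name, v in variations.items():
    rev_pct = (v["revenue"] - control["revenue"]) / control["revenue"] * 100
    net_pct = (v["net"] - control["net"]) / control["net"] * 100
    print(f"{name}: revenue {rev_pct:+.0f}%, net {net_pct:+.0f}%")
```

Laying the cells side by side like this is what makes the political argument evaporate: every channel's incrementality is one subtraction away.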
Good lord I love this stuff!
The Lessons from this Controlled Experiment.
It is possible to compute incrementality of adding or removing marketing strategies.
It is possible to go back and use this incrementality to make solid, long-term new decisions for the business (and not to keep doing what you have forever until your business goes bankrupt).
It is possible to take politics, bickering, back stabbing and all that ridiculous stuff out of the picture. Okay maybe not all of it, but a lot of it.
It is possible to determine the value of doing Paid Search campaigns for brand terms where you already rank #1 via SEO. It is possible to understand if you should invest in Facebook at all. It is possible to understand how much to support your TV campaigns via Yahoo! display campaigns. It is possible to specifically nail down every incremental dollar added to the bottom-line by adding YouTube to your Search campaigns and then adding radio campaigns and then adding magazine ads and then adding Twitter. And along that chain it is possible to understand exactly when you've reached diminishing marginal returns!
Important: The lesson you should not take from this is that catalogs don't work. They may work for you, they may not. All you should take away are the possibilities outlined above.
Here are some important bits of context, and a few more lessons I've learned from having done this a bunch of times…
* The results you see above are raw end results. The team did the normal modeling to ensure that the results were statistically significant (large enough sample set, sufficient number of conversions in each variation).
* It is not always easy to get exact replica (like-type) customer sets. There are always things that are a little bit beyond your control. Do the best you can.
* Work as hard as you can, and then some, to ensure that there are as few "disturbances" as possible in your test and control groups. In the middle of the experiment don't start a new paid search campaign or start tweeting like a crazy duck to the same set of customers. Shout loudly until the entire company knows what you are up to (and beg for their co-operation).
* No answer is definitive forever, so act on the results promptly before they go stale.
* In the same spirit the best companies in the world know that you are in a constant testing mode. There are so many factors that can affect your results. Seasonality, shifting consumer behavior, competitive landscape changes, disruptive product introductions, new technologies, legalization of illegal things, so many more things.
So you test, learn, rinse, repeat, become awesomer.
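As a footnote to the significance point in the lessons above: one common way to check whether a difference in conversion rates between a control and a variation is statistically significant is a two-proportion z-test. This is a generic sketch, not the team's actual modeling, and all the conversion counts are made up.

```python
# Generic two-proportion z-test sketch for the significance check mentioned
# in the lessons above. All conversion counts here are made up.
from math import erf, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic and two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF tails
    return z, p_value

# Hypothetical counts: control converts 520 of 10,000; variation 430 of 10,000.
z, p = two_proportion_z(520, 10_000, 430, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

In practice you would likely reach for a library implementation (for example statsmodels' `proportions_ztest`) rather than rolling your own, but the idea is the same: a small p-value means the difference between the groups is unlikely to be random noise.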
If you want to learn more about controlled experiments, and see more examples and a case study, please jump to Chapter 7 and page number 205 in your copy of Web Analytics 2.0.
Bonus: Here's one of my favorite articles… all the way from 2007 but chock full of pithy valuable lessons for all of us regardless of our field: 41 Timeless Ways to Screw Up Direct Marketing
Bonus 2: Google Analytics has a wonderful set of reports called Multi-Channel Funnels. They are very good at showing how many outcomes are delivered via multiple media channels (say search + Facebook + display campaigns vs search only). They are also very good at telling you the order in which these channels were exposed to the person. It is important to know this is happening, and how much. Multi-Channel Funnels reports won't answer the questions at the top of this post, but they might tell you how urgent it is to answer them (see this video, min 21 onwards: Google Analytics Visits Change). Even if you use other dedicated tools in the market that do "attribution modeling" you still won't get the precise answers you need to optimize your channels. Your only path out? Controlled experiments. Go back up and read this post again. :)
Okay it’s your turn now.
Are controlled experiments a part of your marketing and analytics portfolio? If yes, would you share one that perhaps was your favorite? If no, what are the barriers to adopting them in your company? Having read this post what might be the biggest downside to experimentation? What do you find exciting?
Please share your feedback, excitement (or lack thereof), life lessons via comments.