Conversion optimization has become something of a buzzword in the last few years, hasn’t it?
My colleague Len Markidan of Groove said something earlier this year that resonated with me:
It’s sad but true; “increases conversions” has become the cliche rallying cry of hundreds of products around the web.
Unfortunately, it’s not just products. Plenty of blogs peddle “conversion optimization tips” to get you trusting they’re an authority on the subject.
This, in itself, is nothing new. SEO agencies did the same thing 10 years ago, and social media marketing consultants did it 5 years ago.
The trouble with conversion optimization tips, though, is that the lexicon lends itself to sounding authoritative.
Statistical significance, cohort, sample size… these are terms scientists and statisticians use, and they sound impressive.
“They ran an A/B test on the ‘frequent buyers’ cohort, and found a 100% lift in conversions with 99.95% statistical significance, when visitors were presented with the big orange button.”
Hard to argue with, right? But before you burst into the design department screaming, “CHANGE ALL THE BUTTONS TO ORANGE!!!” think about what you just read.
What you don’t see is that the test included 100 visitors, was run over the span of 2 days, drew from 5 different traffic sources, and included customers from 20 different regions. Also, the number of clicks on the button went from 3 to 6.
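To see how little that result proves, here’s a quick sanity check. This is a sketch using assumed numbers (the post doesn’t say how the 100 visitors were split, so I’m assuming 50 per variant, with 3 clicks on one button and 6 on the other). A standard two-proportion z-test shows the difference is nowhere near statistically significant:

```python
import math

def two_proportion_p_value(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p2 - p1) / se
    # Two-sided p-value from the normal approximation.
    return math.erfc(z / math.sqrt(2))

# Assumed split: 3 clicks out of 50 visitors vs. 6 clicks out of 50.
p = two_proportion_p_value(3, 50, 6, 50)
print(f"p-value ≈ {p:.3f}")  # roughly 0.29 — well above the 0.05 threshold
```

In other words, a “100% lift” on a handful of clicks is entirely consistent with random noise.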
My mentor Peep Laja wrote a must-read article titled “The 12 A/B Testing Mistakes I See Businesses Make All The Time” that’ll help you avoid making misinformed decisions based on your own junk data. But how to run an A/B test that actually provides insight is not what this piece is about.
Instead, this is a story of how credible media outlets were tricked into spreading idyllic-sounding but false, damaging information built on bad science and manipulated statistics.
It has nothing to do with websites or optimization, but mirrors the fantasy narrative around conversion rate optimization that is peddled through marketing circles on a daily basis.
My hope is this story will help you look at conversion advice a little more skeptically.
How One Man Fooled The World Into Thinking Eating Chocolate Helps You Lose More Weight
In March of 2015, you may have read about how Johannes Bohannon, Ph.D., and a team of researchers at the Institute for Diet and Health found that eating dark chocolate in combination with a low-carb diet can help you lose 10% more weight.
The study divided volunteers aged 19 to 65 into three groups to “find out whether consuming chocolate in combination with dietary interventions has no effect or it makes such diets even more effective in the right dose.” (Thehealthsite.com)
The first group followed a strict low-carb diet, the second group followed the same low-carb diet with an added 42 grams of dark chocolate (81%), and the third group acted as a control, changing nothing about the way they ate.
According to the study, the low-carb group lost weight compared to the control group, but surprisingly, the low-carb-plus-chocolate group lost 10% more. These results were reported as statistically significant at the standard 5% threshold, which means that if chocolate actually had no effect, a result this extreme would show up by random chance less than 5% of the time.
By the author’s own admission, this data, though technically accurate, was deeply flawed and “laughably flimsy”. Even worse, the entire study was designed with the sole purpose of getting media attention.
In reality, there is no Johannes Bohannon, Ph.D., and no Institute for Diet and Health; the name is fictional and the “institute” is a WordPress site using the Boardwalk theme.
John Bohannon, a medical journalist contributing to a documentary about junk science and the diet-industry media complex, however, is very real. His Ph.D. is also real, in the molecular biology of bacteria, which is to say nowhere near diet or health.
As a journalist, John was skeptical about how seriously other health journalists would take the "news":
“Could we get something published? Probably. But beyond that? I thought it was sure to fizzle.
We science journalists like to think of ourselves as more clever than the average hack. After all, we have to understand arcane scientific research well enough to explain it. And for reporters who don’t have science chops, as soon as they tapped outside sources for their stories—really anyone with a science degree, let alone an actual nutrition scientist—they would discover that the study was laughably flimsy.
Not to mention that a Google search yielded no trace of Johannes Bohannon or his alleged institute. Reporters on the health science beat were going to smell this a mile away.
But I didn’t want to sound pessimistic. “Let’s see how far we can take this,” I said.”
The research was accepted by numerous “pay-for-inclusion” science journals. Though the journal they ultimately selected claimed to be peer-reviewed, the study was published, with absolutely no changes, within 2 weeks of payment being accepted.
Once the research was published in the scientific journal, they submitted a press release, and several outlets such as The Huffington Post, The Daily Mail, and Cosmopolitan started circulating the story almost verbatim, asking only perfunctory questions of the original researchers.
No one bothered to ask some very basic questions, like “How many people were included in the study?”, “What low-carb diet were they on?” or “How much weight did they lose?”
“The key is to exploit journalists’ incredible laziness. If you lay out the information just right, you can shape the story that emerges in the media almost like you were writing those stories yourself.
In fact, that’s literally what you’re doing, since many reporters just copied and pasted our text.”
Why The Research Was Flawed (And Why You Need To Be More Skeptical About What You Read)
Had anyone asked, they’d learn that only 15 people were included in the clinical trial.
They would have learned that these 15 people were being measured on 18 different things: weight, cholesterol, sodium, blood protein levels, sleep quality, well-being, and so on, and that the trial was purposefully designed to be a breeding ground for false positives.
See, when you measure a large number of things across a small group of people, you increase the likelihood of finding some “statistically significant” result, even though that result might have little to do with what you’re testing.
“Think of the measurements as lottery tickets. Each one has a small chance of paying off in the form of a “significant” result that we can spin a story around and sell to the media. The more tickets you buy, the more likely you are to win.
We didn’t know exactly what would pan out—the headline could have been that chocolate improves sleep or lowers blood pressure—but we knew our chances of getting at least one “statistically significant” result were pretty good.”
John’s team was doing what’s known as “p-hacking”: manipulating an experiment’s design to push the data into statistical significance. By measuring 18 different things, they had a roughly 60% chance of getting at least one statistically significant result.
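The arithmetic behind that 60% figure is simple. At a 5% significance threshold, each measurement on its own has a 5% chance of producing a false positive; with 18 measurements (treating them as independent, a simplification), the chance of at least one false positive is:

```python
# Chance that at least one of 18 independent measurements crosses
# the 5% significance threshold by luck alone, with no real effect.
p_at_least_one = 1 - 0.95 ** 18
print(f"{p_at_least_one:.0%}")  # prints "60%"
```

So the team could run one tiny trial and be more likely than not to walk away with *some* headline-ready “finding”.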
Even if they hadn’t purposefully manipulated the data though, the study was still doomed.
First off, when you analyze a small group of people, you leave yourself vulnerable to a number of uncontrollable factors tainting your data.
For instance, metabolism slows down significantly after age 25, making it harder to lose weight, and a woman’s menstrual cycle can cause her weight to fluctuate by as much as 5 pounds.
This is why you need larger groups of people, with similar age and gender distributions and diets, so that these random factors can balance or cancel each other out.
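A tiny simulation makes the point. This is illustrative only, not from the study: I’m assuming each person’s weight change is pure noise (average of zero, standard deviation of 5 pounds). With only 15 people per group, the group average drifts much further from zero by chance than it does with 150:

```python
import random

random.seed(0)

def group_average(n):
    # Each person's weight change is pure noise: mean 0 lbs, sd 5 lbs.
    return sum(random.gauss(0, 5) for _ in range(n)) / n

# Repeat the "trial" 1,000 times and see how far each group's
# average wanders from the true value of zero.
drift_15 = sum(abs(group_average(15)) for _ in range(1000)) / 1000
drift_150 = sum(abs(group_average(150)) for _ in range(1000)) / 1000
print(f"typical drift with 15 people:  {drift_15:.2f} lbs")
print(f"typical drift with 150 people: {drift_150:.2f} lbs")
```

With 15 people, a group whose diet did nothing at all can easily appear to have lost (or gained) a pound on average, which is plenty of room for a fake “effect”.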
“You can’t even trust the weight loss that our non-chocolate low-carb group experienced versus control. Who knows what the handful of people in the control group were eating? We didn’t even ask them.”
Nonetheless, the story spread through many reputable sites without question.
It’s not that these sites were purposefully trying to mislead people; rather, I think it’s a symptom of these outlets trying to feed the content monster and have something that will attract page views, and therefore advertising dollars.
Why You Should Care
Which brings me to the whole point of why I’m sharing this story with you.
In marketing optimization, there are no shortage of stories about “How a minor change in X increases Y by Z%!”
Many of these stories are written by well-meaning marketers who want to share their achievements with their peers, but who are also unknowingly designing flawed experiments and reporting on false positives. In other cases, of course, the information is purposefully misreported for the sake of a good headline, positive PR, or to get you to buy something.
You should care, because as marketers we’re desperate for reliable information regarding how to get more out of our budgets.
As “conversion optimization tips” become the new “social media marketing,” it’s important that you’re able to distinguish good advice from bad, so you don’t get suckered into making big decisions based on bad science.
Because there is no official peer-review process in the marketing space, and publishers are extra hungry to “provide valuable content,” the barrier to publishing wrong, potentially damaging information is significantly lower than in official scientific circles.
Fortunately, there are also experts who regularly publish and share valuable information on conversion rate optimization that are open to clearing up any misconceptions.
All of this is not to say you should doubt everything you read; just try to approach conversion optimization tips with a healthy dose of skepticism, and always ask for more context when you can. Or, as the saying goes, “if it sounds too good to be true, it probably is.”
About The Author
Tommy Walker is the Editor-in-Chief of the Shopify Plus blog. It is his goal to provide high-volume ecommerce stores with deeply researched, honest advice for growing their customer base, revenues and profits. Get more from Tommy on Twitter.