Most winning A/B tests are illusory
Marketers are questioning the value of A/B testing, asking: ‘Where is my 20% uplift? Why doesn’t it appear in the bottom line?’

Three concepts to help you create a winning A/B testing methodology
This academic paper shows that badly performed A/B tests can produce winning results which are more likely to be false than true. At best, this leads to the needless modification of websites; at worst, to modification which damages profits.
We introduce three simple concepts which come as second nature to statisticians but have been forgotten by many web A/B testing specialists: ‘statistical power’, ‘multiple testing’ and ‘regression to the mean’. Armed with them, you will be able to cut through the misinformation and confusion that plague this industry.
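
To make ‘statistical power’ concrete, here is a minimal sketch of a pre-test sample-size calculation. The 3% baseline conversion rate and 10% relative uplift are illustrative assumptions, not figures from the paper; the point is that reliably detecting a realistic uplift takes far more traffic than many tests are given.

```python
# Minimal sketch: sample size needed per arm for a two-sided z-test on
# conversion rates. The 3% baseline and 10% relative uplift are assumptions
# chosen for illustration, not figures from the paper.
from scipy.stats import norm

def sample_size_per_arm(p_control, p_variant, alpha=0.05, power=0.8):
    """Visitors needed in each arm to detect p_control -> p_variant
    with significance level alpha and the given statistical power."""
    z_alpha = norm.ppf(1 - alpha / 2)      # critical value of the test
    z_beta = norm.ppf(power)               # quantile for the desired power
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    return (z_alpha + z_beta) ** 2 * variance / (p_control - p_variant) ** 2

baseline = 0.03                            # assumed 3% conversion rate
variant = baseline * 1.10                  # assumed 10% relative uplift
print(f"{sample_size_per_arm(baseline, variant):,.0f} visitors per arm")
# Roughly 53,000 visitors per arm under these assumptions.
```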
What you'll learn:
- The importance of statistical power
- Why stopping tests early leads to false positives (see the first simulation sketch after this list)
- How to perform validation tests to ensure accuracy
- Why A/B tests tend to overestimate uplift (see the second simulation sketch after this list)
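
To illustrate the second point in the list above, the rough simulation below (all parameters are arbitrary assumptions, not data from the paper) runs A/A tests in which both arms share the same true conversion rate, yet repeatedly ‘peeks’ at the result and stops as soon as the difference looks significant. The share of tests declared winners ends up far above the nominal 5% false-positive rate.

```python
# Rough simulation of "peeking": both arms have the same true 3% conversion
# rate, so every significant result is a false positive. Parameters are
# illustrative assumptions only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

def peeking_false_positive_rate(n_tests=1000, n_per_arm=10_000, n_peeks=10, p=0.03):
    """Fraction of A/A tests declared significant at any of n_peeks interim looks."""
    critical = norm.ppf(0.975)                     # nominal two-sided 5% level
    checkpoints = np.linspace(n_per_arm / n_peeks, n_per_arm, n_peeks, dtype=int)
    false_positives = 0
    for _ in range(n_tests):
        a = rng.random(n_per_arm) < p              # conversions in arm A
        b = rng.random(n_per_arm) < p              # conversions in arm B
        for n in checkpoints:
            pooled = (a[:n].sum() + b[:n].sum()) / (2 * n)
            se = np.sqrt(2 * pooled * (1 - pooled) / n)
            if se > 0 and abs(b[:n].mean() - a[:n].mean()) / se > critical:
                false_positives += 1               # stopped early on a "winner"
                break
    return false_positives / n_tests

print(f"False-positive rate with peeking: {peeking_false_positive_rate():.0%}")
# Typically several times the nominal 5% rate of a single, fixed-horizon test.
```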

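The last point in the list, overestimated uplift, follows from selection: only tests that happen to show an unusually large effect clear the significance threshold, so the uplift reported for a ‘winner’ is biased upwards, especially when power is low. The sketch below (again with purely illustrative numbers) simulates this so-called winner’s curse.

```python
# Small simulation of the "winner's curse": when a test is underpowered, the
# uplifts it reports for significant winners are biased upwards. Here the true
# relative uplift is 5%, but winners report far more. Illustrative numbers only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def winners_curse(n_tests=5000, n_per_arm=10_000, p_a=0.03, true_uplift=0.05):
    p_b = p_a * (1 + true_uplift)
    observed_uplifts = []
    for _ in range(n_tests):
        conv_a = rng.binomial(n_per_arm, p_a)
        conv_b = rng.binomial(n_per_arm, p_b)
        pa, pb = conv_a / n_per_arm, conv_b / n_per_arm
        pooled = (conv_a + conv_b) / (2 * n_per_arm)
        se = np.sqrt(2 * pooled * (1 - pooled) / n_per_arm)
        if (pb - pa) / se > norm.ppf(0.975):       # B declared a significant winner
            observed_uplifts.append((pb - pa) / pa)
    return np.mean(observed_uplifts), len(observed_uplifts) / n_tests

mean_uplift, win_rate = winners_curse()
print(f"True uplift: 5%; average reported uplift among winners: {mean_uplift:.0%}")
print(f"(only {win_rate:.0%} of tests detected the effect at all)")
```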