Conversion Optimization Minidegree at CXL — Review Part 8

Viktorija Cekanauskiene
6 min read · Apr 18, 2021

This week I will review four courses from the CXL Conversion Optimization program:

  1. A/B Testing Foundations (Peep Laja)
  2. How To Run Tests (Peep Laja)
  3. Testing Strategies (Peep Laja)
  4. Statistics for A/B Testing (Georgi Georgiev)

This is the 8th review in my 12-week journey, and I will cover the above-mentioned four courses one by one.

A/B Testing Foundations

You always have to keep that student mindset, that growth mindset. I’ve been doing A/B testing for more than a decade. If I had to guess which A/B test hypothesis is going to win, I’d get it right maybe 60% of the time, and that’s hardly better than flipping a coin.

In this course, Peep Laja teaches how to run statistically valid A/B tests and what to test at an introductory level.

To summarize, I have learned that A/B testing is for validation and learning. It eliminates guessing and allows you to say exactly how well a change is working on a website. So basically, we should think about testing as a measurement tool and a way to solve problems.

The most important part about A/B testing is choosing what to test and this requires prioritization. There are multiple frameworks for this including the PXL framework that Peep has come up with himself. His advice is to start closest to the money.

The holy trinity of A/B testing:

  1. P-value of 0.05 or less (95% confidence is the accepted level; the lower the p-value, the lower the risk of a false positive)
  2. Statistical power of 80% (the probability of avoiding a type II error; 80% is the accepted risk level)
  3. A large enough sample size (traffic is always your limitation)
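These three ingredients are linked: once you fix the significance level, the power, and the effect you hope to detect, the required sample size follows. As a sketch (the function name and example rates are mine, not from the course), here is the standard normal-approximation formula for a two-sided, two-proportion test, using only the Python standard library:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variation(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per variation for a two-sided
    two-proportion z-test (normal approximation).

    p1: baseline conversion rate, p2: expected rate for the variation.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for power = 0.80
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Detecting a lift from 3% to 3.6% takes over ten thousand visitors per arm;
# this is why traffic is always the limitation.
n = sample_size_per_variation(0.03, 0.036)
```

Note how quickly the requirement drops for bigger effects: a hypothetical lift from 3% to 6% needs only a few hundred visitors per variation.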

How To Run Tests

Testing is no joke — you have to test right. Bad testing is even worse than no testing at all: you might be confident that solutions A, B, and C work well while in reality they hurt your business.

In this course, Peep is talking about the fact that testing is a key part of conversion optimization. It’s the only way to validate a hypothesis, to know what’s really working, and until something is tested it’s only guessing. Testing is the only way to ensure that every change produces positive results.

Constantly testing and optimizing a page will increase revenue, signups, or any other key metrics while providing valuable insights about the target audience.

Some important and interesting information I liked and want to remember when working with A/B tests:

  • The conversion rate is not a fixed number, it fluctuates daily and monthly. So if you measure results by displaying Version A for one week and then Version B for one week, it won’t be an accurate comparison.
  • You can never be sure which external events affect your conversion rates (like Christmas, Mother’s day, beautiful weather, political unrest, scandals in the media, or stuff your competitors do).
  • Wireframing comes in once you’ve decided which page you’re gonna run the treatment on. It’s a communication tool that tells people what you want and what the treatment should be like. It’s what you use to communicate your hypothesis. And it’s important to make it clear that wireframing is not designing.
  • As a very rough ballpark, Peep typically recommends ignoring the test results until you have at least 350 conversions per variation.
  • Testing is about having enough data to validate. All significance reporting will be wrong if the sample size is too small.
  • While specific micro-conversions (like add to cart) are important metrics to track, you need to track final purchases and revenue. Your treatment might get more people to add products to the cart, but it might also make fewer people complete the checkout.
  • We need to keep a document called Customer Theory. That’s where we write down what we know and what we think we know about the different types of users we have. Customer theory consists of buyer personas and overall documentation of what has worked, what hasn’t, and what might work. With every single test that we run, we update this document; it is never complete. Also, when thinking about your next test, you can always look at your customer theory document to see what should be taken into account (like what type of messaging seems to work).
  • Every test needs a hypothesis, but before we even get to the hypothesis, we need to conduct conversion research so we have an overview of all the problems we have on the site and what we know about users. So all hypotheses should come from the results of conversion research: heuristic analysis, qualitative and quantitative research.
  • Come up with as many hypotheses as possible. For each identified problem you should come up with as many ideas for solutions as possible. The hypothesis statement should include a description of the problem, solution and what you expect to change.
  • When the test is over you need to conduct a post-test analysis. So first of all, you see what the overall picture is and you want to make sure that the test result is actually accurate because there are many validity threats that can make the test results invalid.
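The point about significance reporting being meaningless at small samples can be made concrete with a standard two-proportion z-test. This is a stdlib-only sketch (the function name and the example numbers are mine, not from the course): the same 20% relative lift is indistinguishable from noise at a small sample, but clearly significant once each variation has a few hundred conversions.

```python
from math import sqrt, erf

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a two-proportion z-test (pooled variance)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal CDF via erf: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# A 5% -> 6% lift with 50 vs 60 conversions: not significant...
p_small = two_proportion_p_value(50, 1000, 60, 1000)
# ...the same relative lift with 350 vs 420 conversions: significant.
p_large = two_proportion_p_value(350, 7000, 420, 7000)
```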

And my three favorites:

  • #1 rookie mistake is stopping the test too early.
  • This leads to the rule not to stop a test the moment you reach 95% confidence. If you stop your test as soon as you see significance, there’s a 50% chance it’s a fluke.
  • Every day without an active test is a day wasted.

Testing Strategies

Once you know how to run tests, the next step is to know when to employ which A/B testing strategy.

Peep says this is the million-dollar question. Testing the wrong things results in wasted time, effort and money. You have to know what actually makes a difference.

This course talks in more detail about some aspects covered in the two previous courses. There are 8 lessons covering what to test, how many changes per test, A/B testing versus multivariate testing, bandit testing, existence testing, iterative testing and learning from results, innovative testing, and split-path testing.

Statistics for A/B Testing

A/B testing is about managing risk. And so the mere fact of testing is already a positive, regardless of the outcome: if you use the right procedure, you’ve either limited the risk and made the wrong decision, or you’ve limited the risk and made the right decision. In both cases, you’ve done your job.

To be honest, this course was my least favorite course at CXL so far. It felt very academic, and all the calculation formulas presented reminded me that my brain doesn’t exactly work this way :) This is also funny, because that’s exactly what the last lesson of this course was about: how to communicate statistical results.

Here I’d like to mention that the fact I didn’t enjoy this course doesn’t mean there’s something wrong with it or that it’s not worth attention. I have enjoyed the last lesson of this course and so I will talk a bit more about it.

The instructor Georgi talks about the need to educate the stakeholders, especially on the topic of random variability. He says clear rules need to be set for each test before it even starts. This helps to avoid disappointment :)

During the presentation, we can use graphs, stories, tables, or a combination of them. Since many people have difficulty understanding numbers, it’s always good to have a story, it’s always good to have a verbal presentation of the same things.

You can present the same number in a different way. For example, 0.05 can be written as 5%, but can also be written as one in 20. Different people understand one of these three ways of stating the same number more easily than the others.

Stories should be self-contained chapter by chapter, especially if the report is long. Different people read only the sections that interest them, and they shouldn’t need to read the whole report to understand those sections.

The instructor says it is good to follow the journalistic style:

you start with the most important insights, continue with the more technical information, and put things of second-degree interest, such as segmentation, below.

See you next week!
