A/B Testing

This is page three of a handbook on Growth Hacking.

Landing page A/B testing

This page teaches you how to continuously improve conversion rates on your site. 

This process is called A/B testing: running experiments that measure how much a change to your site improves conversion rate (e.g. signup rate, checkout rate).

For example, you can rewrite the top half of your landing page. Or you can replace all your photography with illustrations. Or you can cut your page length in half.

These are called variants. (They are the "B" variants in the term "A/B.")

You test variants against your baseline, which is simply your homepage before changes were introduced. (Your baseline is the "A" variant in the term "A/B.")

There are free tools to manage all this testing logic for you. Your only job is to figure out what's worth testing and to create the required landing page material.

A/B testing is required

A/B tests aren't a luxury for marketers with spare time; they're the lifeblood of growth hacking. They're the only way to scientifically improve conversion.

Here's how all marketers should adjust their approach: Don't waste time making the first iteration of a page a 10/10. Start with an 8/10 then A/B test it to perfection.

Because you will never have an optimal page from the start. It is irrational to try.

Instead, defer to your A/B tests to determine what's better. If you have a proper A/B regimen in place, it's the quickest, lowest-cost way to increase profits. Unlike ads, A/B tests cost nothing to run, and often increase conversion by 50-300%.

I see this improvement with nearly all my clients. It only takes a few days of work. Yes, there are severely diminishing returns to A/B testing after that. But that initial improvement is what makes or breaks paid user acquisition being profitable.

Very often, it makes or breaks growth.

Topics

I'll be covering:

  - How A/B testing works
  - Sourcing A/B ideas
  - What to test on your landing page: micro and macro variants
  - Prioritizing A/B tests
  - Setting up A/B tests
  - Assessing results and sharing them with your team

This is the wordiest page in the guide. Because I've seen first-hand how few people understand the importance of A/B tests. I need to provide context.

How A/B testing works

Here's the testing cycle that I'll be covering in-depth:

  1. Decide what change to make. What do you think might increase conversion?
  2. Use Google Optimize (an A/B testing tool) to show half your visitors the change.
  3. Run this test long enough to get a statistically significant sample of visitors.
  4. Once enough data has been collected, Google Optimize will report the likelihood that your changes caused a significant difference in conversion. If it caused a significant positive difference, it's up to you to implement the winning variant's changes in the existing page.
  5. Log what you changed, why you made the change, and what the results were. This log helps you avoid conducting overly similar experiments in the future.
  6. Repeat steps 1-5 until you run out of ideas. Never have downtime; every day of the year, an A/B test should be running. Or you're letting your traffic go to waste.

I recommend Google Optimize for running A/B tests. It's free, full-featured, and conveniently integrated into Google Analytics.
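To make step 2 concrete, here is a minimal sketch of the kind of traffic-splitting logic a testing tool performs for you. It hashes a visitor ID so each visitor lands in the same bucket on every visit. The visitor IDs and variant labels are hypothetical; with Google Optimize you never write this yourself.

    import hashlib

    def assign_variant(visitor_id: str, variants=("A", "B")) -> str:
        """Deterministically bucket a visitor into one variant.

        Hashing the visitor ID (rather than flipping a coin on every
        pageview) keeps each visitor in the same variant across visits.
        """
        digest = hashlib.sha256(visitor_id.encode()).hexdigest()
        return variants[int(digest, 16) % len(variants)]

    print(assign_variant("visitor-123"))  # e.g. "B"
    print(assign_variant("visitor-123"))  # always the same answer for this visitor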

Sourcing A/B ideas

Source test ideas from these places:

A/B testing and the growth funnel

If you haven't read my intro to the growth funnel, see the bottom of this page.

Before I dive further into what to test, let's figure out what you're testing for.

Consider this: If you discover that an A/B variant motivates visitors to click a button 10x more, but this button clicking behavior doesn’t actually lead to greater signups or purchases, then your variant isn’t actually better than the original. All it's done is distract users into clicking a button more.

An A/B variant is only better when it increases your bottom line. More revenue is better than more signups.

It's not hard to imagine a variant that A) decreases total signups by weeding out visitors who were never going to pay, while B) simultaneously increasing revenue by nudging on-the-fence purchasers to pay.
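Here's a toy example with made-up numbers showing how that can play out: the variant produces fewer signups, yet more revenue per visitor.

    # Hypothetical numbers, purely for illustration.
    baseline = {"visitors": 1000, "signups": 100, "revenue": 500}
    variant  = {"visitors": 1000, "signups": 70,  "revenue": 700}

    for name, page in [("baseline", baseline), ("variant", variant)]:
        signup_rate = page["signups"] / page["visitors"]
        revenue_per_visitor = page["revenue"] / page["visitors"]
        print(f"{name}: signup rate {signup_rate:.1%}, revenue/visitor ${revenue_per_visitor:.2f}")

    # baseline: signup rate 10.0%, revenue/visitor $0.50
    # variant: signup rate 7.0%, revenue/visitor $0.70

Judged on signups alone, the variant loses. Judged on revenue, it wins.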

So, if you're only monitoring signups as a metric of success, you'll miss this potential win. (Fortunately, an A/B tool like Google Optimize monitors multiple metrics.)

In short, consider end-of-funnel conversion when assessing the results of an A/B test.

That said, you’ll actually be mostly testing early steps in the funnel. For two reasons:

This page focuses on landing page changes.

Enough preamble. Let's begin!

What to A/B test on your landing page

There are two types of A/B test variants. I call them micro and macro variants.

Micro variants are adjustments to your page's copy, creative, and layout. These are small, quick changes. And, unfortunately, they're unlikely to have a huge impact.

Macro variants, on the other hand, are significant rethinkings of your page. Prioritize macros over micros. 

Because changing the color of a button (a micro variant) never has more than 2-5% conversion impact. But restructuring the entirety of your page (a macro variant) can increase conversion by 50-300% if your original page hasn't been tested before.

It's doubly important to focus on big wins given every A/B test has an opportunity cost: There are only so many tests you can run in a month because you're limited by how much traffic your site gets.

Let's look at examples of micros and macros.

Micro variants

Here is your surface area for testing micro variants.

Despite micros being deprioritized relative to macros, I'm including them because if you piece together enough micros, you have yourself a macro.

That's a trick for when you run out of macro ideas.

Standalone micros that are actually worthwhile

When you run out of macros, these are the two micros with the greatest impact:

Macro variants

Macro variants require considerable effort: It's hard to repeatedly summon the focus and company-wide collaboration needed to rethink your page.

But macros are the only way to see the forest for the trees.

Since the biggest obstacle to testing macros is simply committing to them, I urge you to create an A/B testing calendar and rigorously adhere to it: Create a recurring event for, say, every 2 months. On those days, spend a couple hours brainstorming a macro variant for a pivotal page or product step.

I generate macro ideas using five approaches:

To sanity check which of the resulting A/B ideas might actually resonate with visitors, consider passing them through the Sourcing A/B ideas section from above.

Prioritizing A/B tests

An A/B experiment has an opportunity cost; you only have so many visitors to test against. So prioritize your tests thoughtfully. 

Use these five factors:

As you can see, A/B testing is a team-wide decision-making process. Plan ahead.

A/B testing beyond websites and apps

A/B testing applies to all your business decisions. And many life decisions.

Consider how there are four possible outcomes for any decision:

  - You succeed and learn something.
  - You succeed and learn nothing.
  - You fail and learn something.
  - You fail and learn nothing.

On that last point: If you fail and learn nothing, you've wasted your time.

Plan your big decisions in such a way that failure will teach you something new and profound about how to make better future decisions. That way — even in abject failure — you can never truly lose.

Setting up A/B tests

So far, I've introduced two types of A/B variants (micro and macro), covered how to source ideas for each, plus how to prioritize them.

Now let's get into the logistics of actually running these tests.

How many A/B tests to run

I recommend running one experiment at a time. 

Otherwise, visitors can criss-cross through multiple simultaneous tests if they change devices (e.g. mobile to desktop) across their visits. (A/B testing tools can't reliably track users across devices.) This makes experiment results murky if not meaningless.

However, within one experiment, you can have several variants all testing a change on the same baseline page. Each variant receives an equal share of your site traffic. Google Optimize will handle all this A/B testing logic for you.

After enough visitors have seen the experiment that your testing tool is confident which variant is best, you can end the experiment, decide if you want to implement the winner, then start a new experiment. 

Parallel versus sequential testing

A/B tools test your variants in parallel. Meaning, your original page and its variants run at the same time. (The tool will randomly assign visitors to one or the other.)

If you were instead to manually run variants sequentially (one variant for 5 days followed by another for the next 5 days), the varying traffic sources and days of the week won't be controlled for. This sullies the results.

So use A/B testing tools as they're intended to be used: only run tests in parallel. (This is their default behavior; there's nothing you have to do.)
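To see why sequential testing misleads, here's a small simulation with invented traffic numbers. The "variant" is actually identical to the baseline, but because the background conversion rate drifts over the testing period, the sequential comparison reports a phantom lift while the parallel split does not.

    import numpy as np

    rng = np.random.default_rng(0)
    days, visitors_per_day = 10, 1000

    def conversions(day: int, visitors: int) -> int:
        # The site's background conversion rate drifts upward over time
        # (seasonality, a campaign, press coverage, etc.).
        rate = 0.05 + 0.003 * day
        return rng.binomial(visitors, rate)

    # Sequential: the baseline runs days 0-4, the "variant" runs days 5-9,
    # even though the variant is identical to the baseline.
    seq_a = sum(conversions(d, visitors_per_day) for d in range(0, 5))
    seq_b = sum(conversions(d, visitors_per_day) for d in range(5, 10))

    # Parallel: every day, traffic is split 50/50 between the two.
    par_a = sum(conversions(d, visitors_per_day // 2) for d in range(days))
    par_b = sum(conversions(d, visitors_per_day // 2) for d in range(days))

    print("sequential lift:", (seq_b - seq_a) / seq_a)  # a sizable phantom lift (~25% in expectation)
    print("parallel lift:  ", (par_b - par_a) / par_a)  # close to zero, as it should be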

Consider only targeting new users

When setting up tests, consider who should be excluded from seeing them.

For example, consider only showing the experiment to visitors seeing your site for the first time. Otherwise, not everyone will arrive with the same knowledge: Some have previous expectations and data, which affects how they react to your variant.

To target just new users in Google Optimize, follow Example 1 in these instructions:

How to configure targeting settings in a Google Optimize experiment
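If you ever gate an experiment in your own server code rather than through Google Optimize's targeting, the logic boils down to checking for a returning-visitor marker. Here's a rough sketch; the cookie name and the Flask setup are my own, not anything Google Optimize prescribes.

    import hashlib

    from flask import Flask, make_response, request

    app = Flask(__name__)

    def assign_variant(visitor_id: str) -> str:
        # Same deterministic 50/50 bucketing idea as the earlier sketch.
        return "AB"[int(hashlib.sha256(visitor_id.encode()).hexdigest(), 16) % 2]

    @app.route("/")
    def landing_page():
        # Returning visitors carry this (hypothetical) cookie, so they're
        # excluded from the experiment and always see the baseline.
        is_new_visitor = request.cookies.get("returning_visitor") is None

        # remote_addr is a crude stand-in for a real visitor ID.
        variant = assign_variant(request.remote_addr or "anon") if is_new_visitor else "A"

        response = make_response(f"landing page, variant {variant}")  # render the real page here
        # Mark the visitor so their future visits are excluded.
        response.set_cookie("returning_visitor", "1", max_age=60 * 60 * 24 * 365)
        return response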

Assessing A/B test results

You've run your tests. Now you have to make sense of the results.

When assessing results, look for three things:

  - Whether your sample size is large enough to trust the result
  - Whether the improvement is tied to a metric that matters, ideally revenue
  - Whether the win is big enough to be worth implementing

Let's walk through these.

Sample sizes

Statistics dictates that we need a sufficiently large sample to confidently identify a boost in conversion.

The math is very simple:

  - If your variant improves conversion by roughly 6.3% or more, around 1,000 sessions are enough to detect the win.
  - If the improvement is smaller than that, you'll need on the order of 10,000 sessions or more.

This means that if you don't have a lot of traffic, the opportunity cost is too great to run micro variants, which tend to show conversion increases in just the 1-5% range. If it takes you weeks to hit 10,000 sample visits, you'll spend those weeks waiting on what's likely nothing more than a tiny change.

If you instead run macros, they have the potential to result in 10-20%+ improvements, which is well above the 6.3% threshold. You can determine big macro winners quickly.
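If you want to sanity-check these thresholds against your own baseline conversion rate, a standard two-proportion power calculation gives a rough required sample size per variant. This is a generic statistical sketch, not the source of the numbers above; the 5% baseline rate and the lifts below are placeholders.

    from scipy.stats import norm

    def sample_size_per_variant(baseline_rate: float, relative_lift: float,
                                alpha: float = 0.05, power: float = 0.8) -> int:
        """Visitors needed per variant to detect the given relative lift
        with a two-sided two-proportion z-test."""
        p1 = baseline_rate
        p2 = baseline_rate * (1 + relative_lift)
        z_alpha = norm.ppf(1 - alpha / 2)
        z_beta = norm.ppf(power)
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
        return int(round(n))

    # e.g. a 5% baseline conversion rate:
    print(sample_size_per_variant(0.05, 0.30))  # a big macro win: a few thousand visitors
    print(sample_size_per_variant(0.05, 0.03))  # a tiny micro win: hundreds of thousands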

Here's an example of an experiment I ran for a client (using Google Optimize).

Read the Google Optimize docs to learn how to interpret these results.

In that experiment, our page had 1,724 views throughout the testing period. There was a roughly 30% improvement (29 vs. 22) in our test variant over our baseline (the original page).

This 30% number is likely inaccurate, by the way. It's just a reference for the variant's maximum potential. We don't yet have enough sessions to validate this conversion improvement with pinpoint certainty. But 30% is good enough to validate that we improved conversion by at least 6.3% (the number from earlier).

Pay attention to the Google Optimize column labeled Probability to be Best. If a variant's probability exceeds 70% and the experiment has a sufficient number of sessions (e.g. the 1,000 or 10,000 thresholds from earlier), the results are statistically sound, and that variant should be considered for implementation.
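If you want to reproduce a rough "probability to be best" yourself, a quick Bayesian simulation gets you close. The numbers below assume the 1,724 views were split roughly evenly and that 22 and 29 were the conversion counts; this is a back-of-the-envelope approximation, not Google Optimize's exact model.

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed numbers from the experiment above: ~862 views per arm,
    # 22 conversions on the baseline vs. 29 on the variant.
    views_a, conv_a = 862, 22
    views_b, conv_b = 862, 29

    # Beta posteriors over each arm's true conversion rate (uniform prior).
    samples_a = rng.beta(conv_a + 1, views_a - conv_a + 1, size=100_000)
    samples_b = rng.beta(conv_b + 1, views_b - conv_b + 1, size=100_000)

    prob_b_best = (samples_b > samples_a).mean()
    print(f"probability the variant beats the baseline: {prob_b_best:.0%}")
    # Roughly 80-85% with these inputs: enough sessions and enough
    # probability to take the variant seriously, per the thresholds above.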

Now you have to decide if the labor and the potential externalities of implementation are worth a 6.3% (or possibly much more) improvement in conversion.

Sample sizes and revenue

What if our results weren't conclusive? What if we didn't surpass a 70% certainty?

Had the experiment revealed merely a 2% increase, for example, we would have to dismiss the sample size of 1,724 views as too small for the 2% to be statistically valid. 

We would have either ended the experiment and logged it as having a neutral outcome, or we would have continued awaiting the full 10,000 sessions. If, after 10,000 sessions, the 2% increase held, we would have concluded it's likely valid.

But, as mentioned in the previous section, if you have little traffic to begin with, perhaps don't risk waiting on a small, 2% change. The opportunity cost is high.

Unless that 2% increase is tied to a revenue objective (e.g. purchases) as opposed to, say, people signing up. (If it were just a 2% increase in signups, I'd say shrug it off.)

Point being, the closer an experiment's conversion objective is tied to revenue, the more worthwhile it is to patiently await small conversion boosts.

Don't implement negligible wins

Don't implement A/B variants that only win negligibly. (Define "negligible" relative to business outcomes you care about.)

In the short term, implementing a negligible win may appear benign. But, in the long term, it may introduce unforeseen funnel consequences that are difficult to pinpoint in retrospect.

This happens all the time.

How to share results with your team

Sharing is caring. 

I use a task management tool, like Trello, to keep track of the A/B tests I'm running and considering in the future.

For every test I run, I note the following in a Trello task:

  - What I'm changing
  - Why I expect the change to increase conversion

When the test finishes, I additionally make note of:

  - What the results were
  - Whether I implemented the winning variant

Keep your Trello tasks neatly organized and refer to them before running more tests.
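If you prefer keeping the log in code or a spreadsheet export rather than Trello, each test only needs a handful of fields. The field names below are my own suggestion, mirroring what step 5 of the testing cycle says to capture.

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class ExperimentLogEntry:
        """One A/B test, logged so you don't rerun near-identical experiments."""
        page: str                     # which page or funnel step was tested
        change: str                   # what the variant changed
        hypothesis: str               # why you expected it to lift conversion
        started: date
        ended: Optional[date] = None
        result: Optional[str] = None  # e.g. "+8% signups, 84% probability to be best"
        implemented: bool = False     # did the winning variant ship?

    # Example entry (hypothetical):
    log = [
        ExperimentLogEntry(
            page="landing page",
            change="replaced all photography with illustrations",
            hypothesis="illustrations communicate the product faster",
            started=date(2024, 1, 8),
        ),
    ]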

Here's the point

Three A/B testing takeaways:

  - Always have a test running. Idle traffic is wasted traffic.
  - Prioritize macro variants over micro variants.
  - Judge variants by end-of-funnel conversion, ideally revenue, and don't implement negligible wins.

Next page

Onboarding

How to onboard users so they fall in love with your app.
