This page teaches you how to continuously improve conversion rates on your site.
This process is called A/B testing: running experiments that measure how changes to your site affect a conversion rate (e.g. signup rate, checkout rate).
For example, you can rewrite the top half of your landing page. Or you can replace all your photography with illustrations. Or you can cut your page length in half.
These are called variants. (They are the "B" variants in the term "A/B.")
You test variants against your baseline, which is simply your homepage before changes were introduced. (Your baseline is the "A" variant in the term "A/B.")
There are free tools to manage all this testing logic for you. Your only job is to figure out what is worthwhile testing and to create the required landing page material.
A/B tests aren't a luxury for marketers with spare time; they're the lifeblood of growth hacking. They're the only way to scientifically improve conversion.
Here's how all marketers should adjust their approach: Don't waste time making the first iteration of a page a 10/10. Start with an 8/10 then A/B test it to perfection.
Because you will never have an optimal page from the start. It is irrational to try.
Instead, defer to your A/B tests to determine what's better. If you have a proper A/B regimen in place, it's the quickest, lowest-cost way to increase profits. Unlike ads, A/B tests cost nothing to run, and often increase conversion by 50-300%.
I see this improvement with nearly all my clients. It only takes a few days of work. Yes, there are severely diminishing returns to A/B testing after that. But that initial improvement is what makes or breaks paid user acquisition being profitable.
Very often, it makes or breaks growth.
Here's the testing cycle I'll be covering in depth:
Source test ideas from these places:
Before I dive further into what to test, let's figure out what you're testing for.
Consider this: If you discover that an A/B variant motivates visitors to click a button 10x more, but this button clicking behavior doesn’t actually lead to greater signups or purchases, then your variant isn’t actually better than the original. All it's done is distract users into clicking a button more.
An A/B variant is only better when it increases your bottom line. More revenue is better than more signups.
It's entirely possible for a variant to A) decrease total signups by weeding out visitors who were never going to pay, while B) simultaneously increasing revenue by nudging on-the-fence purchasers to buy.
So, if you're only monitoring signups as a metric of success, you'll miss this potential win. (Fortunately, an A/B tool like Google Optimize monitors multiple metrics.)
In short, consider end-of-funnel conversion when assessing the results of an A/B test.
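To make the signups-down, revenue-up scenario concrete, here's a toy sketch with entirely hypothetical numbers. It shows why revenue per visitor, not signup rate, should be the deciding metric:

```python
# Hypothetical funnel numbers illustrating a variant that lowers signups
# yet raises revenue. The point: judge A/B tests by end-of-funnel revenue.
def funnel_summary(visitors, signups, purchases, avg_order=50):
    return {
        "signup_rate": signups / visitors,
        "revenue_per_visitor": purchases * avg_order / visitors,
    }

baseline = funnel_summary(visitors=10_000, signups=800, purchases=40)
variant = funnel_summary(visitors=10_000, signups=600, purchases=55)

# The variant loses on signups...
print(variant["signup_rate"] < baseline["signup_rate"])  # True
# ...but wins where it counts:
print(variant["revenue_per_visitor"] > baseline["revenue_per_visitor"])  # True
```

If you were only watching signup rate, you'd kill this variant — and leave the revenue gain on the table.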
That said, you’ll actually be mostly testing early steps in the funnel. For two reasons:
This page focuses on landing page changes.
Enough preamble. Let's begin!
There are two types of A/B test variants. I call them micro and macro variants.
Micro variants are adjustments to your page's copy, creative, and layout. These are small, quick changes. And, unfortunately, they're unlikely to have a huge impact.
Macro variants, on the other hand, are significant rethinkings of your page. Prioritize macros over micros.
Because changing the color of a button (a micro variant) rarely has more than a 2-5% conversion impact. But restructuring the entirety of your page (a macro variant) can increase conversion by 50-300% if your original page hasn't been tested before.
It's doubly important to focus on big wins given every A/B test has an opportunity cost: There are only so many tests you can run in a month because you're limited by how much traffic your site gets.
Let's look at examples of micros and macros.
Here is your surface area for testing micro variants.
Despite micros being deprioritized relative to macros, I'm including them because if you piece together enough micros, you have yourself a macro.
That's a trick for when you run out of macro ideas.
When you run out of macros, these are the two micros with the greatest impact:
Macro variants require considerable effort: It's hard to repeatedly summon the focus and company-wide collaboration needed to rethink your page.
But macros are the only way to see the forest for the trees.
I generate macro ideas using five approaches:
To sanity check which of the resulting A/B ideas might actually resonate with visitors, consider passing them through the Sourcing A/B ideas section from above.
An A/B experiment has an opportunity cost; you only have so many visitors to test against. So prioritize your tests thoughtfully.
Use these five factors:
As you can see, A/B testing is a team-wide decision making process. Plan ahead.
A/B testing applies to all your business decisions. And many life decisions.
Consider how there are four possible outcomes for any decision:
On that last point: If you fail and learn nothing, you've wasted your time.
Plan your big decisions in such a way that failure will teach you something new and profound about how to make better future decisions. That way — even in abject failure — you can never truly lose.
So far, I've introduced two types of A/B variants (micro and macro), covered how to source ideas for each, plus how to prioritize them.
Now let's get into the logistics of actually running these tests.
I recommend running one experiment at a time.
Otherwise, visitors can criss-cross through multiple simultaneous tests if they change devices (e.g. mobile to desktop) across their visits. (A/B testing tools don't diligently track users.) This makes experiment results murky if not meaningless.
However, within one experiment, you can have several variants all testing a change on the same baseline page. Each variant receives an equal share of your site traffic. Google Optimize will handle all this A/B testing logic for you.
After enough visitors have seen the experiment that your testing tool is confident which variant is best, you can end the experiment, decide if you want to implement the winner, then start a new experiment.
A/B tools test your variants in parallel. Meaning, your original page and its variants run at the same time. (The tool will randomly assign visitors to one or the other.)
If you were instead to manually run variants sequentially — meaning, one variant for 5 days followed by another variant for the next 5 days — the varying traffic sources and days of the week won't be controlled for. This sullies the results.
So use A/B testing tools as they are intended to be: only run tests in parallel. (This is their default behavior; there's nothing you have to do.)
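Under the hood, parallel assignment amounts to deterministically bucketing each visitor into a variant. Here's a minimal conceptual sketch — real tools like Google Optimize handle this for you (typically via cookies), so this is purely illustrative:

```python
import hashlib

# Conceptual sketch of parallel assignment: each visitor is hashed into a
# bucket, so traffic splits across all variants at the same time, and a
# returning visitor keeps seeing the same variant.
def assign_variant(visitor_id: str, variants: list[str]) -> str:
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Stable: the same visitor always lands in the same bucket.
print(assign_variant("visitor-42", ["baseline", "variant-b"]))
```

Note the limitation the article flags: if a visitor switches devices, their ID (cookie) changes, and this stability guarantee breaks — which is exactly why simultaneous experiments get murky.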
When setting up tests, consider who should be excluded from seeing them.
For example, consider only showing the experiment to visitors seeing your site for the first time. Otherwise, not everyone will arrive with the same knowledge: Some have previous expectations and data, which affects how they react to your variant.
To target just new users in Google Optimize, follow Example 1 in these instructions:
You've run your tests. Now you have to make sense of the results.
When assessing results, look for four things:
Let's walk through these.
When running A/B tests to improve conversion, you'll quickly get diminishing returns on conversion gains for your higher-intent traffic (e.g. organic search, referrals, and word of mouth). Because those are the visitors who came looking for you on their own merit. The onus is simply on you to affirm you sell what they're expecting, and to not scare them off.
In contrast, for paid ad traffic, A/B testing has the potential to provide larger returns. These are medium-intent eyeballs at best — usually people who errantly clicked your ad. They're looking for excuses to dismiss your value props and leave.
So, this is where A/B testing shines: it's more effective at significantly improving the conversion rate of low-to-medium intent traffic — through a more "read-baity" page.
When I run A/B tests on paid traffic, I can often improve conversion rates by 2-4x. That can make or break the profitability of ads. It's a big deal. However, when I A/B test with organic traffic, perhaps I see 1.5-2x improvements at best. (Assuming the landing page was good to begin with.)
Here’s the takeaway: If you only A/B against high-intent traffic, you may not notice a significant improvement and may mistakenly dismiss the test as a failure. When this happens, but you're confident the variant has potential, retry the test on paid traffic. That’s where the improvement may be large enough to notice its significance.
Statistics dictates that we need a sufficiently large sample to confidently identify a boost in conversion.
The math is very simple:
This means that if you don’t have a lot of traffic, the opportunity cost is too great to run micro variants, which tend to show conversion increases in just the 1-5% range. If it takes you weeks to hit 10,000 sample visits, you'll be poorly spending those weeks waiting for likely nothing more than a tiny change.
If you instead run macros, they have the potential to result in 10-20%+ improvements, which is well above the 6.3% threshold. You can determine big macro winners quickly.
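To see why small lifts demand enormous samples, here's a classical two-proportion sample-size approximation. This is frequentist math, not the Bayesian approach Google Optimize uses, so treat the article's 10,000-session and 6.3% figures as the operative ones — this sketch just illustrates how the required sample explodes as the detectable lift shrinks:

```python
from math import sqrt
from statistics import NormalDist

def visitors_per_variant(p_base, relative_lift, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect a relative
    conversion lift, via the standard two-proportion z-test formula."""
    p_var = p_base * (1 + relative_lift)
    delta = p_var - p_base
    p_bar = (p_base + p_var) / 2
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # significance threshold
    z_b = NormalDist().inv_cdf(power)          # statistical power
    n = (z_a * sqrt(2 * p_bar * (1 - p_bar))
         + z_b * sqrt(p_base * (1 - p_base) + p_var * (1 - p_var))) ** 2 / delta ** 2
    return int(n) + 1

# At a 5% baseline conversion rate:
print(visitors_per_variant(0.05, 0.20))  # macro-sized 20% lift: ~8,000 per variant
print(visitors_per_variant(0.05, 0.02))  # micro-sized 2% lift: ~750,000 per variant
```

A 20% macro lift is detectable with thousands of visitors; a 2% micro lift needs hundreds of thousands. That's the opportunity-cost argument in one calculation.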
Below is an example of an experiment I ran for a client (using Google Optimize):
Above, our page received 1,724 views during the testing period. The test variant converted 29 visitors versus the baseline's 22 — roughly a 30% improvement over the original page.
This 30% number is likely inaccurate, by the way. It's just a reference for the variant's maximum potential. We don't yet have that many sessions to validate this conversion improvement with pinpoint certainty. But 30% is good enough to validate that we improved conversion by at least 6.3% (the number from earlier).
Here are those numbers again:
Pay attention to the Google Optimize column labeled Probability to be Best. If a variant’s probability exceeds 70% and has a sufficient number of sessions (e.g. 1,000 and 10,000 as I indicated above), the results are statistically sound, and that variant should be considered for implementation.
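Probability to be Best comes from Bayesian posterior sampling. Here's a minimal sketch of the idea — assuming, hypothetically, that the 1,724 views split evenly across the two arms (862 each), which the screenshot doesn't confirm:

```python
import random

def prob_to_be_best(conv_a, n_a, conv_b, n_b, draws=50_000, seed=0):
    """Monte Carlo estimate of P(variant B's true conversion rate beats A's),
    using uniform Beta(1, 1) priors — similar in spirit to the Bayesian
    stats behind Optimize's 'Probability to be Best' column."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        p_a = rng.betavariate(conv_a + 1, n_a - conv_a + 1)
        p_b = rng.betavariate(conv_b + 1, n_b - conv_b + 1)
        if p_b > p_a:
            wins += 1
    return wins / draws

# 22 vs. 29 conversions, hypothetically 862 sessions per arm:
print(prob_to_be_best(22, 862, 29, 862))
```

With these (assumed) numbers, the estimate lands in the low-to-mid 0.8s — consistent with clearing a 70% threshold despite the modest sample.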
Now you have to decide if the labor and the potential externalities of implementation are worth a 6.3% (or possibly much more) improvement in conversion.
What if our results weren't conclusive? What if we didn't surpass a 70% certainty?
Had the experiment revealed merely a 2% increase, for example, we would have to dismiss the sample size of 1,724 views as too small for the 2% to be statistically valid.
We would have either ended the experiment and logged it as having a neutral outcome, or we would have continued awaiting the full 10,000 sessions. If, after 10,000 sessions, the 2% increase held, we would have concluded it's likely valid.
But, as mentioned in the previous section, if you have little traffic to begin with, perhaps don't risk waiting on a small, 2% change. The opportunity cost is high.
Unless that 2% increase is tied to a revenue objective (e.g. purchases) as opposed to, say, people signing up. (If it were just a 2% increase in signups, I'd say shrug it off.)
Point being, the closer an experiment's conversion objective is tied to revenue, the more worthwhile it is to patiently await small conversion boosts.
Off topic, to read handbooks (like the one you're reading now) a few months before I publish them, you can provide your email below. I'm releasing how to write fiction, think critically, and play piano. I only email once every three months.
I have another handbook that's already out: The Science of Building Muscle.
Don't implement A/B variants that only win negligibly. (Define "negligible" relative to business outcomes you care about.)
In the short-term, this may appear benign. But, in the long-term, it may introduce unforeseen funnel consequences that can be difficult to pinpoint in retrospect.
This happens all the time.
I use a task management tool, like Trello, to keep track of the A/B tests I'm running and considering in the future.
For every test I run, I note the following in a Trello task:
When the test finishes, I additionally make note of:
Keep Trello tasks neatly organized and refer to them before running more tests.
Three A/B testing takeaways:
My agency team will train your company to be much better at growth marketing.
Here's how it works:
Go here to learn which growth topics we teach.