There are three types of virality:
Inherent virality applies not only to UGC products, but also to products whose value props can only be realized once friends or coworkers are invited onto the platform.
For example, when you want to sync your files with a coworker, you must send them an invite to your Dropbox folder. Then they have to create a Dropbox account.
As with UGC, it's in users' self-interest to invite others.
In general, virality and referrals are most likely to succeed when:
In both cases, you don't have to coerce users into referring — they're going to want to do it on their own.
That's the best type of social product to have.
Referrals can be defined as:
Incentivized inorganic sharing.
Referrals are the strategy you employ when users aren't already inviting each other.
The most common implementation is dual-ended rewards: The referrer gets cash back and the person being referred gets a discount on their first purchase.
This is a successful growth tactic for many companies, but it's typically a small source of user acquisition, because most users simply don't care about earning a bit of cash. They didn't start using your app to make a few bucks.
So here's how you think about it...
The best referral programs dole out value that's aligned with the product's key value prop. Meaning, instead of giving out cash, they'll give you more access to the product.
For example, with Dropbox, referring a friend gets you X more GB of storage.
Yes, this is effectively the same as being given Y amount of cash if you convert at Dropbox's cost per GB. But unless you can give huge cash bounties (higher than, say, $50 per person), people aren't as compelled to refer as when you tell them they get free units of the very thing they started using your app for!
Isn't that the whole point of why they signed up?
If you don't have a product that can be doled out in chunks (e.g. GBs of storage, videos hosted, postcards sent), then your cash bonus needs to be significant:
As I discussed on the Onboarding page, first-time user experience is critical.
This includes the first-time experience for people who are referred.
Make sure invited users are handheld through the referral process with as little friction as possible.
Don't let them land on a signup page that immediately instructs them to claim their referral reward if they haven't yet been pitched on what your product is and why they should be excited for it.
And remind them of the extra value they're getting out of signing up through their friend's referral link.
When measuring referral virality, you optimize three metrics:
If you multiply the last two numbers together, you get your Viral Coefficient. A viral coefficient above 1 is an indicator of extreme growth potential.
If you couple that with a short lag time between signup and referral, you will experience quick viral growth.
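To make the arithmetic concrete, here's a minimal Python sketch of the viral coefficient and how it compounds across referral generations. The function names and example numbers are my own, purely illustrative:

```python
def viral_coefficient(invites_per_user: float, invite_conversion_rate: float) -> float:
    """Viral coefficient K = average invites sent per user
    multiplied by the fraction of invites that convert into new users."""
    return invites_per_user * invite_conversion_rate

def users_after_generations(seed_users: int, k: float, generations: int) -> float:
    """Total users after n referral generations: the geometric series
    seed * (1 + K + K^2 + ... + K^n)."""
    total = 0.0
    for g in range(generations + 1):
        total += seed_users * k ** g
    return total

# Example: each user sends 4 invites and 30% convert, so K = 1.2.
# Above 1, each generation of referred users is larger than the last.
k = viral_coefficient(4, 0.3)
print(round(k, 2))
print(round(users_after_generations(1000, k, 5)))
```

The shorter the lag between signup and referral, the faster these generations stack up in calendar time, which is why lag time matters alongside K.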
I'm in the process of writing this section 😄
Sign up to be alerted when it's out.
Let’s return our focus to ad engagement.
There are numerous variables for assessing ad performance: ad engagement rates (e.g. views, expands, comments), CTRs, and landing page conversion rates.
Many of these variables are channel-specific. And because channels revise their ad products regularly, growth marketers lose confidence that they’re interpreting metrics correctly.
To avoid this confusion, familiarize yourself with the following four metric faux-pas.
Ad channels report more than just the number of users who clicked through to your website. They also report intermediary metrics for ad engagement. Examples include hovering over your ad unit, clicking the play button on your video ad, expanding your ad to see its comments and replies, and leaving comments to begin with.
For the most part, you should ignore these metrics when determining which ads are ultimately successful. Consider this: say you’re confused as to why many people view the first 10 seconds of your video but few then click through to your site afterward. The culprit may be the channel itself: many feeds autoplay videos, so those 10-second “views” often reflect passive scrolling rather than genuine interest in your ad.
Every ad unit is subject to these types of idiosyncratic channel behaviors. The variation may not be as extreme as the picture I’m painting, but the takeaway is universal: For all your ad channels, become a regular user who proactively engages with ads so you can understand which forms of ad engagements are meaningful.
Takeaway: As long as your ads have been running long enough to accrue a significant sample size (which I discuss later) of impressions and clicks, be prepared to ignore everything other than cost per user acquisition and the total number of user acquisitions.
And when looking at conversions, be sure to measure the conversions that matter: paid conversions. A signup is one thing, but someone adding their credit card is another. If users don't convert to paid often enough and quickly enough for you to get quick feedback on the cost-per-acquisition of your ads, only then should you use cost-per-signup as your key metric.
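To illustrate why the distinction matters, here's a toy Python comparison of cost per signup versus cost per paid acquisition. All numbers are hypothetical, chosen only to show how far apart the two metrics can sit:

```python
def cost_per(event_count: int, spend: float) -> float:
    """Ad spend divided by the number of conversion events."""
    return spend / event_count

# Hypothetical campaign numbers (illustrative only):
spend = 5000.0
signups = 1000        # free signups driven by the ads
paid_users = 50       # signups who later added a credit card

print(cost_per(signups, spend))      # cost per signup
print(cost_per(paid_users, spend))   # cost per paid acquisition
```

Here a seemingly cheap $5 signup is actually a $100 paid customer, so optimizing ads on cost-per-signup alone could quietly favor audiences that never pay.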
There's an exception to this rule: If you’re in the process of optimizing ads and want to study how optimizing intermediary metrics can also maximize total clickthrough rates.
For example, I’ll assess whether a video has a high initial play rate to know whether my video thumbnail image is optimized for clicks. Then I’ll separately assess the site clickthrough performance of the video.
Not everyone who views your ad will arrive at your site by clicking on that ad.
Instead, some may first Google your company name to learn what the press and blogosphere are saying about you.
If these curious searchers then visit your site via the search results (or on one of the press articles), your on-site analytics will track them as having originated from Google instead of the ad channel.
This makes your ads appear to be performing worse than they really are.
This is a real problem, but it doesn’t happen at such a high volume that it significantly skews your ad metrics. But, I’m highlighting this because it contextualizes the inevitability of your on-site analytics failing to fully account for conversion sources.
People who Google you after seeing your ad are only one cause of conversion misattribution. Consider a second example: someone sees your ad on their laptop and clicks through to your site, but they don’t convert right then and there because they’re leaving the office. Instead, they later visit your site on their smartphone and convert from there.
This too will break ad conversion tracking in your on-site analytics: the referral sources reported for converted users won’t reflect their true origin. (Not to mention you’ll now have two “unique visitors” and only one conversion, which skews conversion rate data.)
Furthermore, ad channels’ own reporting dashboards are fallible too: conversion tracking works by using cookies within a single per-device browser session to track a user across the sites they visit (thanks to a “conversion pixel” you install on your site).
This means if someone first visits your site in a browser session they were using for work, but then converts in a separate session they use for personal browsing, this too will cause misattribution.
Again, however, these misattribution sources combined still occur at small rates, so don’t worry too much. Perhaps they’ll comprise 1-15% of your total ad-originating conversions, depending on the ad channels you’re using. Truthfully, to this day I remain unsure of what a rule-of-thumb ratio is.
To partially address this misattribution problem, many big ad channels now offer view-through conversion tracking (VTC): VTC identifies anyone who was merely served your ad then at some later point (within a window you can specify between 1 and 30 days) visited your site and converted.
But VTC can still only identify people who convert within the same browser session and on the same device. And if the person is no longer logged into the social network offering VTC at the moment they convert, their conversion won't be tracked at all.
So, VTC will track people who saw your ad then:
… within the 1-30 day time window using the same browser session and device they first saw your ad on. This is enough additional tracking data to recover most of your missing attribution from ad channels.
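To clarify the mechanics, here's a toy Python sketch of the matching logic a VTC-style window implies. The class and field names are my own invention, not any ad channel's actual API; it only models the two constraints described above (same browser cookie, conversion inside the window):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class AdImpression:
    browser_id: str       # stands in for the channel's cookie on one device
    served_at: datetime

def is_view_through_conversion(impression: AdImpression,
                               conversion_browser_id: str,
                               converted_at: datetime,
                               window_days: int = 30) -> bool:
    """Toy VTC check: same browser cookie, conversion within the window."""
    if impression.browser_id != conversion_browser_id:
        return False  # different device or browser: attribution is lost
    elapsed = converted_at - impression.served_at
    return timedelta(0) <= elapsed <= timedelta(days=window_days)

imp = AdImpression("cookie-a", datetime(2024, 1, 1))
print(is_view_through_conversion(imp, "cookie-a", datetime(2024, 1, 15)))  # in window
print(is_view_through_conversion(imp, "cookie-b", datetime(2024, 1, 15)))  # other device
```

The second check is the one VTC can never fix: a cross-device conversion simply has no matching cookie to join against.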
The three big takeaways here are:
Not understanding the nuances of Google Analytics may be growth marketers’ biggest source of confusion. I can sympathize: GA is an intimidating beast, and it’s boring to master. But you must master it, or you’re putting your marketing budget at risk through laziness.
Even experienced growth marketers, including myself, connect the wrong analytics dots and jump to false conclusions. It can be confusing to parse the many visitor attributes and segments, and to narrow in on behavioral trends that aren’t misleading.
Takeaway: This isn’t a handbook to mastering analytics, so I’ll refer you back to the link in the previous paragraph: Bookmark it and be sure to go through all its material before running your first ad campaign.
Remember, your ads are only served to a daily sample of your total audience. Even if you have the budget to reach all of them daily:
Because each day is merely a sample, you should expect your ad clickthrough performance to vary daily — perhaps by up to 100% or more. It would be naive to expect them to be consistent. Further, the less you spend per day, the less consistent they’ll be (this is basic statistics).
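The sampling point can be demonstrated with a quick simulation: the same underlying ad, measured with small and large daily impression counts. This is an illustrative sketch, not real campaign data:

```python
import random

def simulate_daily_ctr(true_ctr: float, impressions_per_day: int,
                       days: int, seed: int = 0) -> list[float]:
    """Simulate the observed daily CTRs of an ad whose underlying
    CTR is fixed, when each day is just a random sample of impressions."""
    rng = random.Random(seed)
    ctrs = []
    for _ in range(days):
        clicks = sum(1 for _ in range(impressions_per_day) if rng.random() < true_ctr)
        ctrs.append(clicks / impressions_per_day)
    return ctrs

# Same 2% "true" CTR, but small daily samples swing far more than large ones.
small = simulate_daily_ctr(0.02, impressions_per_day=200, days=7)
large = simulate_daily_ctr(0.02, impressions_per_day=20_000, days=7)
print([f"{c:.1%}" for c in small])   # wide spread day to day
print([f"{c:.1%}" for c in large])   # tight around 2%
```

At 200 impressions a day, observed CTR routinely doubles or halves with no change in the ad at all; at 20,000 it barely moves. That is the whole argument against judging an ad on a low-spend day.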
Marketers (or their bosses) may forget this and prematurely attempt to optimize ads for conversions. In the process, they fail to give ads with potential enough time to prove themselves.
So what does a sufficient sample size look like? That’s the topic of our next section.