Posted by lkolowich
You just ran what you thought was a really promising conversion test. In an effort to raise the number of visitors who convert into demo requests on your product pages, you try an attractive new design on one of your pages using a good ol' A/B test. Half of the people who visit that page see the original product page design, and half see the new design.
You run the test for an entire month, and as you expected, conversions are up — from 2% to 10%. Boy, do you feel great! You take these results to your boss and advise that, based on your findings, all product pages should be moved over to your redesign. She gives you the go-ahead.
But when you roll out the new design, you notice the number of demo requests goes down. You wonder if it’s seasonality, so you wait a few more months. That’s when you start to notice MRR is decreasing, too. What gives?
Turns out, you didn’t test that page long enough for results to be statistically significant. Because that product page only saw 50 views per day, you would’ve needed to wait until over 150,000 people viewed the page before you could achieve a 95% confidence level — which would take over eight years to accomplish. Because you failed to calculate those numbers correctly, your company is losing business.
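A quick back-of-the-envelope calculation would have caught this before the rollout. Here's a rough sketch in Python of the standard two-proportion sample-size formula; the 50-views-per-day figure comes from the scenario above, while the 10% relative lift (2.0% to 2.2%) is an illustrative assumption for the smallest change you'd want to detect:

```python
import math

def sample_size_per_variant(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect a change from
    baseline rate p1 to rate p2 (95% confidence, 80% power by default)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Detecting a modest lift (2.0% -> 2.2%) needs ~80,000 visitors per variant.
n = sample_size_per_variant(0.02, 0.022)
total_views = 2 * n             # the page splits traffic 50/50
days_needed = total_views / 50  # at 50 views per day
print(n, total_views, round(days_needed / 365, 1))  # roughly 8-9 years of traffic
```

The smaller the lift you need to detect, the more brutal the math gets: the required sample grows with the inverse square of the difference between the two rates.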
Miscalculating sample size is just one of many mistakes marketers make in the CRO space. It's easy for marketers to trick themselves into thinking they're improving their marketing when, in fact, they're leading their business down a dangerous path by basing tests on incomplete research, small sample sizes, and so on.
But remember: The primary goal of CRO is to find the truth. Basing a critical decision on faulty assumptions and tests lacking statistical significance won’t get you there.
To help save you time and overcome that steep learning curve, here are some of the most common mistakes marketers make with conversion rate optimization. As you test and tweak and fine-tune your marketing, keep these mistakes in mind, and keep learning.
Equating CRO with A/B testing is like calling every rectangle a square. A/B testing is a type of CRO, but it's just one tool of many: it covers testing a single variable against another to see which performs better, while CRO includes all manner of testing methodologies, all with the goal of leading your website visitors to take a desired action.
If you think you're "doing CRO" just by A/B testing everything, you're selling your testing program short. There are plenty of occasions where A/B testing isn't helpful at all, such as when your sample size is too small to yield reliable data. Does the webpage you want to test get only a few hundred visits per month? Then it could take months to round up enough traffic to achieve statistical significance.
If you A/B test a page with low traffic and then decide six weeks down the line that you want to stop the test, then that’s your prerogative — but your test results won’t be based on anything scientific.
A/B testing is a great place to start with your CRO education, but it’s important to educate yourself on many different testing methodologies so you aren’t restricting yourself. For example, if you want to see a major lift in conversions on a webpage in only a few weeks, try making multiple, radical changes instead of testing one variable at a time. Take Weather.com, for example: They changed many different variables on one of their landing pages all at once, including the page design, headline, navigation, and more. The result? A whopping 225% increase in conversions.
When you read that line about the 225% lift in conversions on Weather.com, did you wonder what I meant by "conversions"?
If you did, then you’re thinking like a CRO.
Conversion rates can measure any number of things: purchases, leads, prospects, subscribers, users — it all depends on the goal of the page. Just saying "we saw a huge increase in conversions" doesn't mean much if you don't tell people what the conversion actually was. In the case of Weather.com, I was referring specifically to trial subscriptions: Weather.com saw a 225% increase in trial subscriptions on that page. Now the meaning of that conversion rate increase is a lot clearer.
But even stating the metric isn’t telling the whole story. When exactly was that test run? Different days of the week and of the month can yield very different conversion rates.
For that reason, even if your test achieves 98% significance after three days, you still need to run it for the rest of the full week, because conversion rates can vary so much from day to day. The same goes for months: don't run a test during the holiday-heavy month of December and expect the results to match what you'd see in March. Seasonality will affect your conversion rate.
Other things that can have a major impact on conversion rate? Device type is one. Visitors might be willing to fill out that longer form on desktop, but are mobile visitors converting at the same rate? Better investigate. Channel is another: Be wary of reporting “average” conversion rates. If some channels have much higher conversion rates than others, you should consider treating the channels differently.
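A tiny numeric example shows why blended averages mislead; the channel figures here are made up for illustration:

```python
# Hypothetical traffic: (visits, conversions) per channel.
channels = {
    "organic": (9000, 90),   # converts at 1.0%
    "email":   (1000, 80),   # converts at 8.0%
}

total_visits = sum(v for v, _ in channels.values())
total_convs = sum(c for _, c in channels.values())
blended_rate = total_convs / total_visits  # 170 / 10000 = 1.7%

# The "average" 1.7% describes neither channel, and a shift in traffic
# mix can move it without either channel's behavior changing at all.
```

If email's share of traffic doubled next month, the blended rate would jump even though both channels converted exactly as before, which is why per-channel reporting is safer.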
Finally, remember that conversion rate isn’t the most important metric for your business. It’s important that your conversions are leading to revenue for the company. If you made your product free, I’ll bet your conversion rates would skyrocket — but you wouldn’t be making any money, would you? Conversion rate doesn’t always tell you whether your business is doing better than it was. Be careful that you aren’t thinking of conversions in a vacuum so you don’t steer off-course.
One of the biggest mistakes I made when I first started learning CRO was thinking I could rely on what I remembered from my college statistics courses to run conversion tests. Just because you’re running experiments does not make you a scientist.
Statistics is the backbone of CRO, and if you don’t understand it inside and out, then you won’t be able to run proper tests and could seriously derail your marketing efforts.
What if you stop your test too early because you didn’t wait to achieve 98% statistical significance? After all, isn’t 90% good enough?
No, and here's why: think of statistical significance like placing a bet. Are you really willing to stake your site's performance on 90% odds? Running a test to 90% significance and then declaring a winner is like saying, "I'm 90% sure this is the right design, and I'm willing to bet everything on it." It's just not good enough.
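To make the 90%-versus-95% distinction concrete, here's a sketch of a standard two-proportion z-test in Python; the traffic numbers are hypothetical:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.
    Returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 40/2000 (2.0%) vs. 57/2000 (2.85%): z is about 1.75, p is about 0.08.
# That clears a 90% confidence bar but NOT 95% -- exactly the kind of
# result that tempts you to call a winner too early.
z, p = two_proportion_z_test(40, 2000, 57, 2000)
```

A p-value of roughly 0.08 means that if the two designs actually converted identically, you'd still see a gap this large about 8% of the time by chance alone. Those are the odds you're betting on.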
If you're in need of a statistics refresher, don't panic. It'll take discipline and practice, but it'll make you a much better marketer, and it'll make your testing methodology much, much tighter. Start by reading this Moz post by Craig Bradford, which covers sample size, statistical significance, confidence intervals, and percentage change.
Just because something is doing well doesn’t mean you should just leave it be. Often, it’s these marketing assets that have the highest potential to perform even better when optimized. Some of our biggest CRO wins here at HubSpot have come from assets that were already performing well.
I’ll give you two examples.
The first comes from a project run by Pam Vaughan on HubSpot’s web strategy team, called “historical optimization.” The project involved updating and republishing old blog posts to generate more traffic and leads.
But this didn't mean updating just any old blog posts; it meant updating the posts that were already the most influential in generating traffic and leads. Pam's attribution analysis had surfaced a surprising pattern: a relatively small set of older posts was driving the bulk of our blog's traffic and leads.
Why? Because these were the blog posts that had slowly built up search authority and were ranking on search engines like Google. They were generating a ton of organic traffic month after month after month.
The goal of the project, then, was to figure out: a) how to get more leads from our high-traffic but low-converting blog posts; and b) how to get more traffic to our high-converting posts. By optimizing these already high-performing posts for traffic and conversions, we more than doubled the number of monthly leads generated by the old posts we'd optimized.
Another example? In the last few weeks, Nick Barrasso from our marketing acquisition team did a leads audit of our blog. He discovered that some of our best-performing blog posts for traffic were actually leading readers to some of our worst-performing offers.
To give a lead conversion lift to 50 of these high-traffic, low-converting posts, Nick conducted a test in which he replaced each post’s primary call-to-action with a call-to-action leading visitors to an offer that was most tightly aligned with the post’s topic and had the highest submission rate. After one week, these posts generated 100% more leads than average.
The bottom line is this: Don’t focus solely on optimizing marketing assets that need the most work. Many times, you’ll find that the lowest-hanging fruit are pages that are already performing well for traffic and/or leads and, when optimized even further, can result in much bigger lifts.
When it comes to CRO, process is everything. Remove your ego and assumptions from the equation, stop relying on individual tactics to optimize your marketing, and instead take a systematic approach to CRO.
Your CRO process should always start with research. In fact, conducting research should be the step you spend the most time on. Why? Because the research and analysis you do in this step will lead you to the problems — and it’s only when you know where the problems lie that you can come up with a hypothesis for overcoming them.
Remember that test I just talked about that doubled leads for 50 top HubSpot blog posts in a week? Nick didn’t just wake up one day and realize our high-traffic blog posts might be leading to low-performing offers. He discovered this only by doing hours and hours of research into our lead gen strategy from the blog.
Paddy Moogan wrote a great post on Moz on where to look for data in the research stage. What does your sales process look like, for example? Have you ever reviewed the full funnel? “Try to find where the most common drop-off points are and take a deeper dive into why,” he suggests.
Here's an (oversimplified) overview of what a CRO process should look like: research your funnel, form a hypothesis, design and run the test, analyze the results, and conduct a follow-up experiment.
As you go through these steps, be sure you’re recording your hypothesis, test methodology, success criteria, and analysis in a replicable way. My team at HubSpot uses the template below, which was inspired by content from Brian Balfour’s online Reforge Growth programs. We’ve created an editable version in Google Sheets here that you can copy and customize yourself.
Don’t forget the last step in the process: Conduct a follow-up experiment. What can you refine for your next test? How can you make improvements?
One of the most important pieces of advice I’ve ever gotten around CRO is this: “A test doesn’t ‘fail’ unless something breaks. You either get the result you want, or you learned something.”
It came from Sam Woods, a growth marketer, CRO, and copywriter at HubSpot, after I used the word “fail” a few too many times after months of unsuccessful tests on a single landing page.
What he taught me was a major part of the CRO mindset: Don’t give up after the first test. (Or the second, or the third.) Instead, approach every test systematically and objectively, putting aside your previous assumptions and any hope that the results would swing one way or the other.
As Peep Laja said, “Genuine CROs are always willing to change their minds.” Learn from tests that didn’t go the way you expected, use them to tweak your hypothesis, and then iterate, iterate, iterate.
I hope this list has inspired you to double down on your CRO skills and take a more systematic approach to your experiments. Mastering conversion rate optimization comes with a steep learning curve — and there’s really no cutting corners. You can save a whole lot of time (and money) by avoiding the mistakes I outlined above.
Have you ever made any of these CRO mistakes? Do you have any CRO mistakes to add to the list? Tell us about your experiences and ideas in the comments.
Author: Hally Pinaud
Creating and maintaining buyer personas has been an important task in every role I've held as a marketer. Why is that? Personas, when built and used correctly, are a very effective way to channel real empathy for your buyers. That empathy makes it easier to drive winning strategies across the customer lifecycle through campaigns, content, nurture paths, account plans, and sales collateral.
They also happen to be one of the things I speak with our customers about most frequently, hence this blog post! So, whether you're looking to create your first persona or double-check your approach, here are four things that can limit the impact of your personas:
Have you spoken with your personas lately? No, I’m not talking about some kind of weird, talking-to-a-PDF kind of activity. I mean, have you interviewed the people who would correspond to each persona’s defining factors, specifically to validate that persona? From what I’ve observed, this is one of the most common mistakes when it comes to creating personas.
These "lab grown" personas stem from assuming you know your personas well enough without external validation. Maybe your organization is pretty open and you have good proximity to prospects and customers. Or maybe you've lived in the persona's shoes yourself (this is a big one; it's something I struggle with here at Marketo). Lived experiences are valuable, but me, myself, and I is a limited and biased sample, and even customer and prospect pools are inherently exclusionary: they leave out everyone who never made it into your funnel.
Luckily, it's easier to fix than you think: send out some emails and set up some 30-minute interviews. Start with a handful of people, a mix of customers, prospects, and total strangers who look like your persona, and ask them about the details your persona documents. Pro tip: it can be tough to find willing strangers to interview, but a combination of colleagues' networks, LinkedIn InMails, and $50 Amazon gift cards will get you a long way.
Hey there, persona hoarder. I see you. You made that great persona and you're using it to drive your messaging and marketing programs, aren't you? But have you walked your demand generation team through the persona they're creating nurture programs for? What about sales or customer success? Have you printed it out so they can tape it to the insides of their desks like a Leonardo DiCaprio poster circa 1997? (Always an option.)
Your customer-facing colleagues need to exercise those empathy muscles to do their jobs well. If you aren’t sharing your fresh, validated persona knowledge, they’re going to make it up as they go. So, train and retrain on buyer personas often. Ensure they’re easy to find among your internal content resources and welcome questions, contributions, and ideas from folks who deal with these people each and every day. Personas should make us all better at what we do.
A lot of marketers characterize their personas with photos or names. To be clear, those details can be a good thing: they help humanize a generalized portrait of your buyer and make it easier for folks on your team to use a persona as a reference point. For example, "Would Emily the Email Specialist want to read this blog post? What tone would she respond to?" The problem I have is when those details run amok.
Emily has a French Bulldog. She drives a Jeep Liberty. She only reads People Magazine when she gets her hair done.
Really? Do those details help your team make better decisions about how to reach Emily? Maybe, if you sell dog sweaters or hair products. Otherwise, elevate your persona details to focus on what will drive business outcomes, and catch yourself before you get carried away with nitty-gritty details that won't.
This is an easy one: update your personas! Revisit them every quarter or two, especially if they're critical personas like a budget holder or key decision-maker. Yes, we're busy as marketers, but if your personas haven't been touched since they were researched during the last Winter Olympics, your hopelessly out-of-date Rip Van Persona might not be helpful anymore. In fact, it may be causing more harm than good: buyers' challenges, goals, and trusted resources can evolve rapidly in the digital age.
4 Big Mistakes You Might Be Making with Your Marketing Personas was posted at Marketo Marketing Blog – Best Practices and Thought Leadership. | http://blog.marketo.com