Written by Alex Birkett

The best optimizers know that testing isn’t complete guesswork. No, the top optimizers in the world test the right things, the things that will have the largest effect on the bottom line and the best chance to win. They aren’t clairvoyant, but they are more successful at finding these things than your average Joe. How do they do it?


They all have a process for conversion research and test prioritization.

Now, test prioritization is a whole other matter, but I can show you a simple conversion research process that will dramatically increase the efficiency of your testing program. It’s based on our own ResearchXL model, developed from years of experience and thousands of tests run.

Here’s the gist of it:

Start With Heuristic Analysis

Generally, you’ll want to have a seasoned conversion optimization pro do a ‘heuristic analysis’ of your site to start out with. A heuristic analysis is an experience-based assessment where the outcome is not guaranteed to be optimal, but might be good enough. Its main advantage is speed: it can be done fairly quickly.

This is the closest we get to opinion in optimization. Even so, we use frameworks to guide our heuristic analysis. Typically, we analyze for:

  • Relevancy
  • Clarity
  • Value
  • Friction
  • Distraction

Most importantly, realize that what we identify in this step is by no means objective truth – it’s simply an “area of interest.” In our next phases of research (qualitative and quantitative), we seek to validate or invalidate these findings.

Then Audit the Current State of Your Website

After the heuristic analysis, we audit the current analytics setup and data. This includes multiple facets, depending on the tools you have available. The first step: technical analysis.

Technical analysis aims to identify low-hanging fruit regarding page speed, cross-browser and cross-device functionality, and general debugging. If things are broken, fixing them is one of the easiest ways to boost conversions.
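To make that concrete, here’s a minimal sketch of a quick speed spot-check. It’s an illustration built on assumptions – the URLs are placeholders, and it only times the raw HTML fetch, not the full render (a real audit would lean on a dedicated tool like Lighthouse or WebPageTest):

    import time
    import urllib.request

    # Placeholder URLs for the key pages in your funnel.
    URLS = [
        "https://www.example.com/",
        "https://www.example.com/product",
        "https://www.example.com/checkout",
    ]

    for url in URLS:
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()  # force the full response body to download
            status = resp.status
        elapsed = time.perf_counter() - start
        # This measures only the HTML fetch, not images, scripts, or
        # rendering time -- treat it as a coarse first-pass check.
        flag = "  <-- worth a look" if elapsed > 2.0 else ""
        print(f"{status}  {url}  {elapsed:.2f}s{flag}")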

Then, of course, we dive into digital analytics (most people’s least favorite step). From this we can learn:

  • What people are doing
  • The impact and performance of every feature, widget, page
  • Where the site is leaking money

The first thing to think about here is whether your analytics software is set up correctly. You probably have Google Analytics installed, but whatever tool you’re using, a correct setup matters. You’d be surprised – almost all the analytics setups we see are broken.

To check whether things are broken, you need to do an analytics health check. In a nutshell, a health check is a series of analytics and instrumentation checks that answers the following questions (a rough sketch of how you might automate parts of it follows the list):

  • “Does it collect what we need?”
  • “Can we trust this data?”
  • “Where are the holes?”
  • “Is there anything that can be fixed?”
  • “Is anything broken?”
  • “What reports should be avoided?”
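You can automate some of this. As a rough illustration (not part of the ResearchXL model itself) – assuming you’ve exported daily session counts and sessions-by-hostname to CSV files, with entirely hypothetical file names and columns – a first pass might look like:

    import pandas as pd

    # Hypothetical exports from your analytics tool (names and columns
    # are placeholders, not a real export format).
    daily = pd.read_csv("daily_sessions.csv", parse_dates=["date"])
    hosts = pd.read_csv("sessions_by_hostname.csv")

    # Days with zero sessions usually mean the tracking snippet broke.
    outages = daily[daily["sessions"] == 0]
    print("Possible tracking outages:", outages["date"].dt.date.tolist())

    # Sudden drops below half the trailing 7-day median are worth a look.
    median_7d = daily["sessions"].rolling(7).median()
    suspicious = daily[daily["sessions"] < 0.5 * median_7d]
    print("Suspicious drops:", suspicious["date"].dt.date.tolist())

    # Unexpected hostnames (staging servers, scrapers) mean the
    # numbers can't be fully trusted as-is.
    expected = {"www.example.com"}
    rogue = hosts[~hosts["hostname"].isin(expected)]
    print("Unexpected hostnames:")
    print(rogue)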


Note that you should have an expert set this up, either in house or through an analytics implementation consultant. It’s worth the investment.

Now that you’ve got a good grasp of your analytics and technical health, it’s time to start probing for issues.

Find the What

First things first: set up all of your CRO tools. The classic suite includes mouse tracking (heat maps), session replay videos, form analytics, and on-site surveys.

From heat maps, you can gather a ton of high-level insight, though be careful not to get too carried away with the colorful maps.

[Image: heat map example]

Here’s what you can learn from heat maps:

  • Click maps are great at communicating issues. You can clearly see whether things that are supposed to be getting attention actually are.
  • You can get a sense of the general hierarchy of attention on the page. Note that your digital analytics tool is much better at telling you this.
  • If people are clicking things that aren’t links, either make them links or don’t make them look like links.
  • Scroll maps are great for designing long landing pages. How far are people actually making it? Make sure you don’t bury crucial content or CTAs below the point of mass drop-off (a quick way to find that point is sketched after this list).
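On that scroll-map point: if your setup fires scroll-depth events at the usual 25/50/75/100% thresholds, finding the point of mass drop-off is simple arithmetic. A minimal sketch with made-up numbers:

    # Made-up counts of visitors who reached each scroll-depth threshold.
    pageviews = 10_000
    reached = [("25%", 9_200), ("50%", 7_800), ("75%", 3_100), ("100%", 2_600)]

    prev = pageviews
    for depth, count in reached:
        lost = (prev - count) / prev * 100
        print(f"{depth}: {count} visitors ({lost:.0f}% lost since the previous point)")
        prev = count
    # The biggest single drop (here, between 50% and 75%) marks where
    # crucial content or CTAs risk being buried.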


What you can’t really do much with is hover maps. They tend not to correlate well with eye tracking, so they stir up more bias and misdirection than actual insight.

The real value is in session replay videos. These are recordings of anonymous visitors actually using your site. With these you can identify bottlenecks and get a good idea of what’s stopping people from converting (it takes a long time to analyze – half a day or so – but it’s worth it).

Then you have form analytics, which complement your digital analytics package but – surprise – home in on form completion. They can tell you where people are dropping off, error rates, and other insightful things. There are tools, like Formisimo, built specifically for this.

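Form analytics tools do this out of the box, but the underlying calculation is worth understanding. A simplified sketch – the event log below is entirely hypothetical, with one row per field a session interacted with, plus a flag for whether that session eventually submitted the form:

    import pandas as pd

    # Hypothetical log: one row per (session, field) interaction.
    events = pd.DataFrame([
        {"session": "a", "field": "email", "submitted": True},
        {"session": "b", "field": "email", "submitted": False},
        {"session": "b", "field": "phone", "submitted": False},
        {"session": "c", "field": "email", "submitted": True},
        {"session": "c", "field": "phone", "submitted": True},
        {"session": "d", "field": "phone", "submitted": False},
    ])

    # For each field: sessions that touched it, and how many of those
    # never submitted the form.
    by_field = events.groupby("field").agg(
        sessions=("session", "nunique"),
        abandoned=("submitted", lambda s: int((~s).sum())),
    )
    by_field["abandon_rate"] = by_field["abandoned"] / by_field["sessions"]
    print(by_field.sort_values("abandon_rate", ascending=False))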

Find the Why

By now, you’ve likely got some recurring issues surfacing. So far, though, we only know what’s wrong. We haven’t uncovered any qualitative data – which is essential to customer insight and understanding the why.

For this, the classic suite of conversion optimization techniques includes:

  • On-site surveys
  • Customer surveys
  • Interviews
  • User testing


If you read our previous article on the subject, you should know all about on-site surveys. But as a refresher: on-site surveys let you gather feedback from visitors as they’re going through your site, meaning you can quickly identify what’s working and, more importantly, what’s not.

[Image: Economist on-site survey]

The important thing here is that the data isn’t as subject to consistency bias and post-purchase rationalization (if you’re asking people after the purchase, these tendencies are inherent). Instead, you’re gathering their thoughts in the moment – which gives you a more objective picture of what their frustrations with your site are.

Read more about on-site surveys in our previous article.

Then we have customer surveys, which answer a whole different set of questions than on-site feedback tools do. Customer surveys draw on your current customer base for insight, so there’s always the selection bias that these people have already agreed to do business with you.

There are lots of ways to mess these up (mostly due to not attaching business goals to your survey design), but when done right, customer surveys can help you answer a lot of questions:

  • Who are these people? What are their common characteristics? Can we form a hypothesis about different customer personas?
  • What kind of problem are they solving for themselves?
  • What are the exact words they use? You can steal exact phrases from this for your copy, essentially having the customer write it for you (a quick way to surface recurring phrases is sketched after this list).
  • How would they prefer to buy?
  • Can you uncover any insights about their emotional state?
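On the “exact words” point, even a crude frequency count can surface the recurring phrases worth stealing. A minimal sketch over hypothetical open-ended responses:

    import re
    from collections import Counter

    # Hypothetical open-ended survey answers.
    answers = [
        "I just wanted a fast checkout without creating an account",
        "shipping costs were unclear until the very last step",
        "I wanted a fast checkout on my phone",
    ]

    def bigrams(text):
        words = re.findall(r"[a-z']+", text.lower())
        return zip(words, words[1:])

    counts = Counter(bg for answer in answers for bg in bigrams(answer))
    # The most frequent two-word phrases are copy candidates,
    # in the customers' own words.
    for (w1, w2), n in counts.most_common(5):
        print(f"{w1} {w2}: {n}")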


Then you have interviews – which are similar to customer surveys but far more subjective and qualitative in nature. They’re also far more targeted: you’re not trying to reach an adequate, representative sample, but to surface issues you might not have even thought about.

A common approach we use is to jump on a phone call with customer support or sales people. They deal with customers all day, every day, and tend to hear the same frustrations over and over again. How can we solve these problems?

Finally, we do user testing. User testing is maybe the best way to watch customers struggle with your site in real time. You walk a user through a series of tasks (might I recommend some broad tasks and some specific ones), and get to watch them complete the tasks and comment on the process. Nothing teaches you empathy for your users quite like a good round of user tests.


Putting it All Together to Run Better Tests

The purpose of all this research isn’t just to learn what your users are struggling with, but to attempt to fix those things and then validate the new variation’s efficacy through A/B testing.
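When you do run those tests, the validation step boils down to comparing conversion rates with enough data to trust the difference. As a hedged illustration – hypothetical numbers, and a plain two-proportion z-test rather than whatever your testing tool uses internally:

    from math import sqrt
    from statistics import NormalDist

    # Hypothetical results: conversions and visitors per variation.
    control_conv, control_n = 310, 4_980
    variant_conv, variant_n = 370, 5_020

    p1 = control_conv / control_n
    p2 = variant_conv / variant_n
    pooled = (control_conv + variant_conv) / (control_n + variant_n)
    se = sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / variant_n))
    z = (p2 - p1) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

    print(f"control {p1:.2%}, variant {p2:.2%}, p-value {p_value:.3f}")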

Putting together the quantitative issues (what’s broken, where it’s broken, etc.) with the qualitative findings (what do our customers really want, why are they frustrated, etc.) is how you optimize your conversion optimization process. The real magic happens when you start running tests and feeding the insights gained from them back into new tests.

How do you combine the insights? Let’s say, for example, you see in Google Analytics that, on mobile, only 17% of people are making it from the product page to the cart. There’s a substantial drop-off, and it’s not reflected on desktop. It’s just on mobile.

You know now where the issue is, but you don’t know why. So you run user tests along with heat maps and customer surveys. You watch a few hundred session replay videos. And what do you discover? People aren’t finding the CTA button – perhaps it’s too low on the screen and not prominent enough. And people are confused by the conflicting product offers. It might be worth testing a new default purchase setting and simplifying the copy.
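The quantitative half of that example – spotting the mobile-only drop-off in the first place – is just a segmented funnel calculation. A sketch under assumptions: a hypothetical page-view export with session, device, and page columns, and a simplified step rate that counts any session hitting each page rather than strictly ordered journeys:

    import pandas as pd

    # Hypothetical export: one row per page view.
    views = pd.DataFrame([
        {"session": 1, "device": "mobile",  "page": "/product"},
        {"session": 2, "device": "mobile",  "page": "/product"},
        {"session": 2, "device": "mobile",  "page": "/cart"},
        {"session": 3, "device": "desktop", "page": "/product"},
        {"session": 3, "device": "desktop", "page": "/cart"},
    ])

    def step_rate(df, from_page, to_page):
        # Unique sessions that hit each page, split by device.
        started = df[df["page"] == from_page].groupby("device")["session"].nunique()
        finished = df[df["page"] == to_page].groupby("device")["session"].nunique()
        return (finished / started).fillna(0)

    # A big gap between the mobile and desktop rates is the cue
    # to go dig into the "why" with qualitative research.
    print(step_rate(views, "/product", "/cart"))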

Without the qualitative research, it would have been a guessing game trying to figure out why people were dropping off. The qualitative work didn’t hand you the answer, but it gave you great insights for your tests.

Conclusion

Do both qualitative and quantitative research, and combine them to form better A/B tests. Don’t be afraid to invest time and muscle into conversion research, because it pays off. Testing programs with a rigorous process like this see more wins – and bigger ones.