A Marketer’s Guide to A/B Testing | Vodori Blog

Written by Annalise Ludtke | Jul 22, 2016 5:00:00 AM

You may know the ins and outs of A/B testing websites, or you may just be starting to navigate the sometimes murky waters. In a world of ever-increasing regulations, we believe A/B testing is well worth the time and effort, because when properly done, it gives you confidence that you’re creating the best possible marketing materials and getting the most out of your site.

Here are three scenarios to help you float effortlessly through those potentially choppy waters.

Scenario 1: Wondering if redesigning a key element on your site could increase conversions?

Having two variations of a highly trafficked feature – such as the login/registration or “contact us” page – is the perfect opportunity to test which version compels users to say “Yes, please.”

Do your research

Researching real customer data and analytics ensures that you are targeting the right areas of your site for improvement. In your research, it’s vital to know your audience by conducting customer surveys and gathering behavioral data through heat maps, eye tracking, and similar tools. Real data gives you valuable insight into your users’ goals, habits, and patterns, so you can confidently create testing hypotheses based on solid evidence. Although it may require considerable preparation, user research is crucial to ensuring profitable results. A little upfront pain is worth it when there is the potential for great rewards. Here are some research methods and tools to get you started:

  • Use intercept surveys (via tools like Qualaroo or SurveyGizmo) to help understand user intent and objectives. Investigating why users visit or leave your site and what they hope to accomplish will give you key information on which to base your testing hypotheses.
  • Try usability testing, either in-person or through a remote tool like Usabilla, to uncover interface flaws or confusing interaction flows. A/B testing will not be as successful or impactful if your site offers a fundamentally poor user experience.

Create a strong hypothesis

Just like any true scientific experiment, A/B testing requires a thoughtful hypothesis. A hypothesis is a testable prediction created prior to running a test that clearly states what is being changed (the variable), what the outcome is believed to be (the result), and why that outcome will come to fruition (the rationale). The three components of variable, result, and rationale should easily fit into the framework of “If ____, then ____, because ____.” A strong hypothesis can lead to great problem-solving clarity in several important ways:

  • A measurable hypothesis links user behavior with company goals, providing a solid optimization framework from which to test and benefit.
  • It helps to synthesize your data into a proposal about your visitors’ behavior from which you can test and learn. Each hypothesis – whether it’s confirmed or rejected – generates new insights for future rounds of research.
  • Creating strong, testable hypotheses encourages a culture in which customer data is being actively engaged with, even if it is disproved. As you continue to test and iterate, you will become even more connected to your users and their goals.

*Key Metric: Conversion rate – After a “winning variation” has been implemented, compare the previous conversion rate to that of the redesigned feature. An increase in conversion rate suggests that the new design is successfully engaging users.
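To make that comparison concrete, here is a minimal sketch (with made-up traffic numbers) of the standard two-proportion z-test, one common way to check whether a lift in conversion rate is real rather than noise:

```python
import math

def conversion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test: is variation B's conversion rate
    significantly different from control A's?"""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled rate under the null hypothesis of "no difference"
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: control converted 200 of 5,000 visitors,
# the redesign converted 260 of 5,000
z, p = conversion_z_test(200, 5000, 260, 5000)
```

With these illustrative numbers the p-value lands below the conventional 0.05 threshold, so you could call the redesign the winner with reasonable confidence. Tools like Optimizely run this kind of math for you, but it helps to know what’s under the hood.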

Scenario 2: Your product landing pages are trending downward.

With A/B testing, you can test variations that are specifically designed to reverse the trend and boost viewership. If that sounds daunting, here are some tips on where to begin.

Know the type of change

Before you test, it’s vital to understand the impact of the proposed change on your site. Is it a small-scale or a large-scale change? Are small tweaks needed – such as changing a CTA color or rewriting page headlines? Or is larger rework required – such as creating an entirely new layout or design? It’s important to understand the overall testing scope and know exactly what you’re trying to achieve with your A/B testing:

  • If you’re looking to A/B test with confidence, you should only test one isolated variable at a time. Small, but important page elements, such as headlines, images, and button text, can affect conversion rates and are simple to test. With a small change, it will be clear which performed better and you can confidently implement the element that converts best.
  • If your site has enough traffic to run tests quickly, testing small changes can be extremely beneficial to optimizing user experience.
  • If your site has lower traffic and slower testing cycles, beware the risk of plateauing! With small, incremental changes and minimal traffic, it can take prohibitively long to detect a meaningful difference between the variation and the control. In this case, you may want to make a large-scale change.
  • If you’re worried you’ve hit your “Local Maximum” (the glass ceiling of your design), try a large-scale change. Instead of testing single design elements like headlines or images, design two completely different pages to test against one another. This way, the entire page becomes the variable you are testing. These large-scale changes could result in faster, more insightful results.
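Whichever scale you choose, testing “one isolated variable at a time” only works if each visitor consistently sees the same version on every visit. A common way to guarantee that is deterministic hash-based bucketing; here is a small illustrative sketch (the experiment name and variant labels are made up):

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "variation")) -> str:
    """Deterministically bucket a user into a variant.

    Hashing the experiment name together with the user ID means the
    same user always sees the same version of the page, keeping the
    test a clean single-variable comparison. Different experiments
    hash differently, so buckets don't correlate across tests.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The split is stable across visits and roughly 50/50 across users.
variant = assign_variant("user-42", "headline-test")
```

Commercial tools like Optimizely handle this assignment for you; the point is simply that the split must be random across users but sticky per user.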

*Hint: It may take some time to create a new design for a large-scale change, but it’s worth it – trust us, we’ve done this before! The differences in user experience between two distinct webpages are far more obvious to users than the differences between two headlines.

Traffic matters

With A/B testing, the more traffic your site receives, the better your results will be. Because of this, it’s important to focus testing on high-traffic pages so the test has enough participants to produce meaningful results, and to let traffic levels guide whether you make small or large-scale changes. Because we can’t all be Google, use these tips to understand your site’s traffic and implement a successful strategy:

  • Use Google Analytics to understand which landing pages get the highest traffic. This will allow you to decide which type of change you need to implement through testing.
  • Use Optimizely’s Sample Size Calculator as a tool to evaluate results and avoid making decisions based on the weak conclusions of underpowered tests. If you make a business decision with a low sample size, you run the risk of spending valuable resources with zero realized benefit once you’ve implemented your variation.
  • Even if you don’t have a lot of traffic, there are A/B testing techniques that can still give you meaningful insight. For example, you can test radical designs or try sequential testing. Creating a radically different variation is a great technique for low-traffic sites because users are more likely to exhibit a strong preference for a particular design, garnering quicker test results. Sequential testing is when you show one variation to 100% of users, then show another variation, and compare the results. This technique essentially allows for double the traffic for each variation. Be aware that testing radical designs or running a sequential test can be riskier than traditional A/B testing and cannot help you identify the specific elements that impacted conversions. These testing methods are geared toward generally understanding your customers better and optimizing their user experience.
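If you’re curious what a sample size calculator is actually computing, here is a rough sketch of the textbook formula for a two-proportion test, fixed at 95% confidence and 80% power (the 1.96 and 0.84 constants below correspond to those settings; the example numbers are hypothetical):

```python
import math

def sample_size_per_variant(baseline_rate, relative_lift):
    """Approximate visitors needed per variation to detect a given
    relative lift over a baseline conversion rate.

    Uses a two-sided test at 95% confidence (z = 1.96) with
    80% power (z = 0.84).
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)   # rate you hope to detect
    p_bar = (p1 + p2) / 2
    numerator = (1.96 * math.sqrt(2 * p_bar * (1 - p_bar))
                 + 0.84 * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# e.g. a 3% baseline conversion rate, looking for a 20% relative lift
n = sample_size_per_variant(0.03, 0.20)
```

With those example inputs, the answer comes out around fourteen thousand visitors per variation – a vivid reminder of why high-traffic pages make better testing grounds, and why small lifts on low-traffic pages take so long to confirm.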

*Key Metric: Bounce Rate – Compare the bounce rates between the control and the new design. Lower bounce rates (and higher click through rates) suggest that users are more engaged with the content and are staying within the site to visit other pages.

Scenario 3: Congratulations! You have a new product being released and want to send out an impactful email campaign that will drive traffic to your site. Much potential awesomeness, but how do you maximize the campaign?

Don’t worry: you don’t have to forget everything you just learned. A/B testing an email campaign relies on many of the same techniques noted above: do research, create a strong hypothesis, and understand the project’s scale.

It’s still important to gather user data to decide what would be the best elements to test. For emails, you can use A/B techniques to test the content, personalization, and styling. We also suggest asking for feedback, a great way to create immediate user data.

Email campaigns and newsletters can be an excellent tool to help achieve repeat business, as well as gain new customers; you can improve your conversion rates greatly by optimizing your user experience through A/B testing. Email marketing tools like MailChimp often have built-in optimization features that allow you to test content, email scheduling, recipients, and more.

*Key Metric: Open Rate – The percentage of people on an email list who open or view a particular email campaign. Keep in mind that the open rate is only as good as the conversion rate it produces. Continue to track these email users through analytics to understand how they ultimately interact with your site.

The Bottom Line

A/B testing is a powerful technique that can provide valuable insight into your users’ behavior, whether you’re a newbie or a veteran. Testing new features, key conversion pages, and email campaigns is a great way to get data that will optimize your site’s performance and help you attract new users. To get the most out of A/B testing, do your research and create evidence-based, actionable hypotheses. Being prepared leads to clear direction and smooth sailing, and will help ensure fruitful results and increase conversions. Good luck and happy testing!

If you’d like to learn more about A/B testing, contact us at hello@vodori.com.