As I’ve been researching over the past couple of weeks, it has been easy to look at something and come up with a bunch of different tests without any deep insight into why time and resources should go into them. This webinar changed that. I can now confidently say that I’ll never look at A/B testing the same way again. In the following paragraphs I’ll attempt to distill what I’ve learned as concisely as I can.
The webinar covered three topics on A/B testing. First, the common implementation of A/B testing and its associated pitfalls were discussed. Second, the webinar dove into how to implement a testing regime that focuses on getting large increases in conversion rate. Finally, Lars discussed what tools you need to make the most out of your tests.
How Not to Test
When we think of A/B testing, the first thing that usually comes to mind is testing the colour of your call to action or the layout of various elements on a page. For the amount of work involved, these random tests usually yield only a small increase in conversion. If you are testing like this, without insight, you’ve got a low likelihood of seeing a big win. The reason is that the stereotypical red-versus-green button test doesn’t help you understand your customers any better. Done properly, however, A/B tests can be a great tool for building that understanding and, as a result, for driving higher conversion rates.
How to Win Big with A/B Testing
The key to finding the tests that will make the most impact is insight. There are many ways of obtaining insight, including surveys, user feedback and activity, and usability testing. The point is to develop an extremely deep understanding of your users and how they use your product. Armed with insight, you now have the ability to predict how a particular aspect of your application can be improved. It is at this point that you can use A/B testing to confirm or reject your prediction or hypothesis.
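The webinar doesn’t prescribe a particular statistical method for confirming or rejecting a hypothesis, but a minimal sketch of the idea is a two-proportion z-test on conversion counts. Everything here (the function name, the visitor and conversion numbers) is illustrative, not from the webinar:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    conv_a/n_a: conversions and visitors for the control,
    conv_b/n_b: conversions and visitors for the variant.
    Returns (z statistic, p-value).
    """
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: control converts 200/5000, variant 260/5000.
z, p = two_proportion_z_test(200, 5000, 260, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

If the p-value comes in under your chosen threshold (0.05 is common), the data supports your insight-driven hypothesis; otherwise you reject it and go back to the research.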
Using A/B testing in this manner allows you to prioritize tests based on user and product insights. In the early stages of a startup, resources are often scarce and, therefore, need to be used effectively. This method allows you to channel your resources into tests that are more likely to yield higher conversion rates because they are based on insight and not random impulse.
At what point, then, is it appropriate to test the small stuff? I think it’s a matter of priority and resources. If you’ve already had lots of big wins and are starting to build a dedicated growth team, it may then be reasonable to start finding smaller things to test. The sum of many small tests can definitely push the needle, so small tests are effective too, provided they’re conducted at the right time with the right resources.
Making the Most of A/B Testing
One of the limitations of A/B testing tools is that they only really measure how many users/visitors get to the next step in some funnel or process. This doesn’t tell you anything about the big picture, but we can get around that by tying testing back to the customer life cycle. This allows us to measure how changes impact customers over the long term. For example, this integration can tell us how a particular test changes average revenue per user (ARPU) for a given cohort.
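To make the ARPU-per-cohort idea concrete, here is a rough sketch of the computation such an integration performs, using made-up per-user records (the field names, cohorts, and revenue figures are all hypothetical, not taken from any specific platform):

```python
from collections import defaultdict

# Hypothetical per-user records: each user carries an A/B variant tag,
# a signup cohort (here, signup month), and revenue over the period.
users = [
    {"variant": "A", "cohort": "2013-01", "revenue": 0.0},
    {"variant": "A", "cohort": "2013-01", "revenue": 29.0},
    {"variant": "B", "cohort": "2013-01", "revenue": 29.0},
    {"variant": "B", "cohort": "2013-01", "revenue": 49.0},
]

def arpu_by_cohort_and_variant(users):
    """Average revenue per user, broken down by (cohort, variant)."""
    totals = defaultdict(lambda: [0.0, 0])  # (cohort, variant) -> [revenue, user count]
    for u in users:
        key = (u["cohort"], u["variant"])
        totals[key][0] += u["revenue"]
        totals[key][1] += 1
    return {key: revenue / count for key, (revenue, count) in totals.items()}

for key, arpu in sorted(arpu_by_cohort_and_variant(users).items()):
    print(key, f"${arpu:.2f}")
```

Comparing the A and B figures within the same cohort is what turns a funnel-level test result into a long-term revenue signal.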
In order to do this, you’ve got to use a customer analytics platform that integrates with your A/B testing platform. KISS Metrics, of course, is very good at this with its integrations for Visual Website Optimizer and Optimizely, two of the biggest A/B testing platforms.
This webinar really put A/B testing into a new perspective for me. I’ve walked away with an understanding of how to use testing as a tool to confirm or reject hypotheses based on insight I’ve gained from users and the product.
UPDATE: I’ve added a link to the presentation in the resources section.