Creating a product inevitably involves making assumptions about user behaviour.

While intuition is certainly a good tool to have in one’s repertoire, it cannot be relied upon for results. Enter ‘A/B and multivariate testing’, the go-to approach for product managers looking to optimize their product’s offerings. But how effective is this approach?

Research by eConsultancy & RedEye shows that 60% of surveyed marketers felt that A/B testing was ‘quite valuable’ for their business. However, only 28% reported being satisfied with the resulting conversion rates.

Clearly there is a gap between the expected impact of A/B testing and the results it actually delivers. So where are we failing when it comes to testing and optimization?

Here are four basic mistakes I have seen a number of product managers make.

1. Where’s the Hypothesis?

Any form of testing, be it an A/B or a multivariate test, must be accompanied by a hypothesis that the test will validate or disprove. Furthermore, this hypothesis must focus on a defined metric, or set of metrics, that genuinely represents the business result the company is trying to achieve. Doing so helps product managers focus on what matters, instead of being distracted by other potential improvements and so-called ‘vanity metrics’.

Essentially, if the goal of changing the colour of the primary call-to-action button on your landing page is to garner a better clickthrough rate, then clickthrough rate is the critical KPI for reading the results of your A/B test, not other metrics such as sign-ups or subscription rate (important as they are).
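To make this concrete, here is a minimal Python sketch of how such a hypothesis might be judged against its single, pre-registered KPI, using a pooled two-proportion z-test on clickthrough rate. The function name and all counts are illustrative assumptions, not taken from any particular tool.

```python
import math

def two_proportion_z_test(clicks_a, views_a, clicks_b, views_b):
    """Two-sided p-value for a difference in clickthrough rate,
    using the pooled two-proportion z-test (normal approximation)."""
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (clicks_a / views_a - clicks_b / views_b) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical counts: variant A is the current button colour,
# variant B is the proposed one.
clicks_a, views_a = 310, 12_000
clicks_b, views_b = 402, 12_100

p = two_proportion_z_test(clicks_a, views_a, clicks_b, views_b)
print(f"CTR A: {clicks_a / views_a:.2%}, CTR B: {clicks_b / views_b:.2%}")
print(f"p-value: {p:.4f}")  # judge the test on this KPI alone
```

Whatever sign-ups or subscriptions do in the meantime, the hypothesis stands or falls on this one comparison.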

2. Background Research

While your hypothesis represents an anchor for your tests, creating a hypothesis is an art in itself.

The process of testing must begin with identifying potential pitfalls in the user journey. To do so, one must rely on an array of data sources, including Google Analytics, customer feedback, funnel performance (where applicable), heat maps, and any other usability testing tools available. Additionally, UX research through wireframes and focus groups may further enable product managers to narrow down the exact user interactions that must be prioritized and tested.

An extensive research process directly informs the design of the experiment, not only in terms of the parameters being tested, but also how these tests may be conducted iteratively for continuous improvement.
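As a toy illustration of where that research can start, the funnel analysis mentioned above can be as simple as ranking the drop-off between adjacent steps. The step names and counts below are made up for the sake of the example.

```python
# Hypothetical funnel counts, as exported from an analytics tool
funnel = [
    ("landing_page", 50_000),
    ("signup_form", 14_000),
    ("form_submit", 9_800),
    ("activation", 2_100),
]

# Flag the transition with the steepest drop-off as the first test candidate
worst_step, worst_rate = None, 1.0
for (step, users), (next_step, next_users) in zip(funnel, funnel[1:]):
    rate = next_users / users
    print(f"{step} -> {next_step}: {rate:.1%} continue")
    if rate < worst_rate:
        worst_step, worst_rate = f"{step} -> {next_step}", rate

print(f"Biggest leak: {worst_step} ({worst_rate:.1%} continue)")
```

The step with the steepest drop-off is usually the most promising place to form a first hypothesis, before layering in heat maps and qualitative feedback to explain why users leave there.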

3. Your Optimization Tool Is Faulty

While this is a hard one to digest, it is a critical issue that may affect the efficacy of your testing.

To begin with, most optimization tools tend to slow down page load time. This can be dire for young startups fighting for eyeballs. In fact, as per WP Engine, just one second of additional load time, on average, results in an 11% decrease in page views and a 7% decrease in conversions.

Moreover, each optimization tool may introduce distortions of its own that influence the results of an experiment. Therefore, though slightly tedious, it is best to begin using a new optimization tool by first conducting a complete A/A test. This means testing the page against itself with a statistically significant number of users, and ensuring that the two ‘versions’ produce approximately the same result.

If the results are wildly different, your optimization tool is faulty.
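One way to reason about what a healthy A/A test looks like is to simulate it. The sketch below is a simplified model with illustrative numbers rather than a test of any real tool: it repeatedly splits identically behaving users into two arms and confirms that a significant ‘winner’ appears only about as often as the significance level allows.

```python
import math
import random

def p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference in conversion rate
    (pooled two-proportion z-test, normal approximation)."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_a / n_a - conv_b / n_b) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Both arms serve the identical page, so they share one true rate.
TRUE_RATE, USERS_PER_ARM, ALPHA, RUNS = 0.05, 2_000, 0.05, 500
false_positives = 0
for _ in range(RUNS):
    conv_a = sum(random.random() < TRUE_RATE for _ in range(USERS_PER_ARM))
    conv_b = sum(random.random() < TRUE_RATE for _ in range(USERS_PER_ARM))
    if p_value(conv_a, USERS_PER_ARM, conv_b, USERS_PER_ARM) < ALPHA:
        false_positives += 1

# An unbiased setup declares a 'winner' in roughly ALPHA of A/A runs.
print(f"False positive rate: {false_positives / RUNS:.3f} (expect ~{ALPHA})")
```

If your tool declares significant winners in A/A runs far more often than this baseline, suspect biased bucketing or broken tracking rather than your page.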

4. Testing with Context

As product managers, we must understand that A/B and multivariate experiments form part of an ecosystem of factors influencing the product. Business objectives, which may themselves change, directly affect the parameters we are trying to test and improve.

Therefore, it is vital for us to continuously ask ourselves whether our testing strategy continues to make sense given the current objectives the company is pursuing. Are we improving the correct pages/components? Do we need to alter the pace at which we plan to iteratively improve a feature? Are we utilizing environmental factors (user-related or industry-related) to understand our results?

While there are several other factors that may also need attention, avoiding the four missteps above will not only improve your testing methodology, but also help translate it into tangible results.

So what challenges have you seen with A/B testing, and what have you done to alleviate the resulting issues? Share with us in the comments below!


Thanks for reading The Low Down (TLD), the blog by the team at Momentum Works. Got a different perspective or have a burning opinion to share? Let us know at [email protected].