Making changes to your site or eCommerce store without testing them is like target shooting while wearing a blindfold. Unless you test changes, you have no idea how they perform. A/B testing, or split testing, is the process of making small changes to a site and comparing the performance of those changes to the original. If the new version performs better, it’s integrated into the page, and testing proceeds with other changes.
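In practice, a split test needs two ingredients: a stable way to assign each visitor to a variant, and a way to compare conversion rates between variants. The sketch below illustrates both with a hash-based bucketing scheme and a simple event format; the function names, experiment label, and data are hypothetical assumptions, not any particular tool’s API:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "hero-copy") -> str:
    """Deterministically bucket a user into 'A' (control) or 'B' (variant).

    Hashing the user ID together with the experiment name keeps the
    assignment stable across visits without storing any state.
    (Names here are illustrative, not a real tool's API.)
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def conversion_rate(events):
    """events: list of (variant, converted) tuples -> rate per variant."""
    totals, conversions = {}, {}
    for variant, converted in events:
        totals[variant] = totals.get(variant, 0) + 1
        conversions[variant] = conversions.get(variant, 0) + int(converted)
    return {v: conversions[v] / totals[v] for v in totals}

# Fabricated sample data: variant B happens to convert better here.
events = [("A", True), ("A", False), ("A", False), ("A", False),
          ("B", True), ("B", True), ("B", False), ("B", False)]
print(conversion_rate(events))  # {'A': 0.25, 'B': 0.5}
```

A real test would also check statistical significance before declaring a winner; the raw rates alone can differ by chance on small samples.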
Ultimately, performance means conversions — the user takes the action we want them to take, whether that’s signing up for a newsletter or buying a product. Conversions are the overarching goal of any business website, and conversion rate optimization is the fundamental motivation for A/B testing.
But conversion rates aren’t the only thing we should be measuring. Conversions are, in fact, a fairly blunt tool. They can tell us that a change to a site is not effective, but they can’t tell us at which point the conversion process failed. Part of that bluntness is mitigated by limiting each testing round to a small number of changes, but there is real benefit to expanding metrics beyond simple conversion rates.
A bounce occurs when a user lands on a site and then immediately leaves without heading deeper into the site. Bounce rates are the number of bounces divided by the number of visitors. Obviously, bounces aren’t conversions, but they can also be indicative of a serious problem with a page.
Let’s say you’re testing the copy in a hero banner that solicits contact information. A lowered conversion rate compared to the original copy tells you that the changes weren’t effective, but the solution is different depending on whether users leave the site or head deeper in search of more information.
Knowing that a particular test impacted bounce and conversion rates allows for the formulation of richer hypotheses for future testing.
Exit rates — which are often confused with bounce rates — measure the proportion of users who leave a site without converting but who have visited other pages on the site. As with bounce rates, exit rates provide richer information than a simple “failure to convert”, and in concert with split testing they can inform decisions about site architecture, layout, and copy.
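Analytics tools typically report exit rate per page: of all views of a given page, the fraction that were the last view in a session. A minimal sketch, reusing the hypothetical format of one page-view list per visit:

```python
from collections import Counter

def exit_rates(sessions):
    """Per-page exit rate: sessions ending on a page / total views of it.

    sessions: list of page-view lists, one list per visit (assumed format).
    """
    views, exits = Counter(), Counter()
    for pages in sessions:
        views.update(pages)       # every view of every page
        exits[pages[-1]] += 1     # the session's last page is its exit
    return {page: exits[page] / views[page] for page in views}

# Hypothetical data: three visits, all entering at /landing.
sessions = [["/landing", "/pricing"],
            ["/landing", "/pricing", "/signup"],
            ["/landing"]]
rates = exit_rates(sessions)
# /landing: 3 views, 1 exit -> 1/3; /pricing: 2 views, 1 exit -> 0.5
```

A high exit rate on a page that should push users onward (a pricing page, say) is the kind of signal worth targeting with the next round of split tests.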
By measuring a combination of conversion rates, bounce rates, exit rates, and other metrics such as click-through rates and scroll depth, testers gain contextual information about the nature of a failure to convert, which contributes to a better understanding of why users don’t convert and to the development of more successful tests in the future.