Once enough time has passed, you’ll be ready to evaluate the results of your A/B test.
Here are a few things you should look for when reviewing your results.
A Clear Preference
We mentioned statistical significance earlier, and that idea bears repeating. When you’re looking over your test results, you should be trying to find data that shows a clear preference by your visitors: a difference pronounced enough to be statistically significant, not just visible.
Don’t make the mistake of looking at percentages alone. A 33% increase in conversion rate may sound pretty impressive, but if your sample size is so low that this only represents two additional leads, it’s really not compelling evidence.
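To make that concrete, here is a minimal sketch of a two-proportion z-test, a standard way to check whether a conversion-rate difference is statistically significant. The visitor and conversion counts are hypothetical, chosen to show how the same 33% relative lift can be meaningless on a small sample and significant on a large one:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates.
    |z| >= 1.96 roughly corresponds to 95% confidence."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# A "33% lift" on a tiny sample: 6 vs. 8 conversions out of 100 visitors each.
z_small = two_proportion_z(6, 100, 8, 100)      # well below 1.96: not significant

# The identical lift with 100x the traffic.
z_large = two_proportion_z(600, 10000, 800, 10000)  # well above 1.96: significant
```

The percentage lift is the same in both cases; only the sample size changes the verdict.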
Notable Sample Sizes
When you end an A/B test, it should be because you’ve accumulated a fairly high number of visitors who have been exposed to each variation.
Depending on the size of your site, you may find that 1,000 visitors is a good sample size, and demonstrates a clear preference. But if you have a larger site with many variations, you may have to wait until you hit 100,000. The exact number isn’t all that important, but a test with small sample sizes may give you misleading results.
It’s usually recommended that you wait at least a month before drawing any conclusions from an A/B test. So if you know what your average monthly traffic is, you can use this to come up with a rough estimate of how many visitors you’ll need before ending the test.
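If you want a rougher-but-quantitative starting point, a standard power-analysis formula estimates how many visitors each variation needs. This is a sketch under assumed numbers (a 5% baseline conversion rate, a hoped-for 20% relative lift, and a hypothetical 20,000 monthly visitors); plug in your own:

```python
import math

def sample_size_per_variation(p_base, relative_lift, z_alpha=1.96, z_power=0.84):
    """Visitors needed per variation to detect the given relative lift
    at ~95% confidence with ~80% statistical power."""
    p_test = p_base * (1 + relative_lift)
    variance = p_base * (1 - p_base) + p_test * (1 - p_test)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p_test - p_base) ** 2)

n = sample_size_per_variation(0.05, 0.20)   # 5% baseline, +20% relative lift

monthly_traffic = 20000                     # hypothetical figure
months_needed = (2 * n) / monthly_traffic   # two variations split the traffic
```

Smaller lifts or lower baseline rates push the required sample size up sharply, which is why a test on a low-traffic site can need months to conclude.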
Try to be aware of any trends or broader conclusions that you may be able to draw from your test, especially if it isn’t your first one. Even one test on a single form or button may teach you lessons about your visitors and how they interact with your site.
For example, you may find that shortening the length of a landing page boosted conversions significantly. Could it be that the other pages on your site are too long as well? Are you adding calls to action early enough?
Finally, it’s up to you to ensure that the elements you’re testing will provide you with results that are actionable. If you’re testing button colors, it’s easy enough to say “this variation won, so let’s make all of our buttons green!” But if you choose to test multiple elements at once (or do so by accident), it’s not as easy to decide what actions you should be taking.