What is A/A Testing?

A/A testing is the practice of using an A/B testing tool to test two identical versions of a page against each other. Typically, this is done to check that the tool being used to run the experiment is statistically fair. If the test is implemented correctly, the tool should report no statistically significant difference in conversions between the control and the variation.

Why is A/A Testing Important?

Why would you want to run a test where the variation and original are identical?

In some cases, you might run an A/A test to monitor conversions on a page and establish its baseline conversion rate before beginning an A/B or multivariate test.

In most other cases, an A/A test is a way of double-checking the effectiveness and accuracy of the A/B testing software. Look at whether the software reports a statistically significant (>95% statistical significance) difference between the control and variation. If it does, that’s a problem, and you’ll want to check that the software is correctly implemented on your website or mobile app.
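To make this concrete, here is a minimal sketch (in Python, standard library only) of the kind of significance check an A/B testing tool performs behind the scenes. It uses a classical two-proportion z-test rather than Optimizely’s Stats Engine, and the significance() helper and the visitor/conversion counts are invented for illustration.

from statistics import NormalDist

def significance(conv_a, visitors_a, conv_b, visitors_b):
    # Two-sided, two-proportion z-test; returns the confidence level (1 - p-value).
    p_a = conv_a / visitors_a
    p_b = conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = (pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return 1 - p_value

# Hypothetical A/A test data: both arms serve the identical page.
confidence = significance(conv_a=180, visitors_a=4000, conv_b=195, visitors_b=4000)
print(f"confidence: {confidence:.1%}")  # well below 95%, so the result is inconclusive

If identical pages regularly clear your significance threshold, that points to an implementation problem rather than a real difference.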

Things To Keep In Mind With A/A Testing

When running an A/A test, it’s important to keep in mind that finding a difference in conversion rate between identical test and control pages is always a possibility. This isn’t necessarily a poor reflection on the A/B testing platform, as there is always an element of randomness when it comes to testing.

When running any A/B test, keep in mind that the statistical significance of your results is a probability, not a certainty. Even a statistical significance level of 95% represents a 1 in 20 chance that the results you’re seeing are due to random chance. In most cases, your A/A test should report that the difference in conversions between the control and variation is statistically inconclusive—because the underlying truth is that there is no real difference to find.
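You can see this for yourself by simulating many A/A tests. The rough sketch below reuses the significance() helper from the earlier example and assumes a made-up true conversion rate of 5% for both arms; at a 95% threshold, roughly 1 in 20 simulated A/A tests should look “significant” purely by chance.

import random

def simulate_aa_tests(runs=1000, visitors=2000, rate=0.05, threshold=0.95):
    # Count how often two identical variations produce a spuriously 'significant' result.
    false_positives = 0
    for _ in range(runs):
        conv_a = sum(random.random() < rate for _ in range(visitors))
        conv_b = sum(random.random() < rate for _ in range(visitors))
        if significance(conv_a, visitors, conv_b, visitors) >= threshold:
            false_positives += 1
    return false_positives / runs

print(f"spurious 'winners': {simulate_aa_tests():.1%}")  # roughly 5% of runs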

Optimizely Stats Engine And A/A Testing

When running an A/A test with Optimizely, in most cases, you can expect the results from the test to be inconclusive—the conversion difference between variations will not reach statistical significance. In fact, the percentage of A/A tests showing inconclusive results will be at least as high as the significance threshold set in your Project Settings (90% by default).

In some cases, however, you might see that one variation is outperforming another, or even that a winner is declared for one of your goals. This conclusive result occurs purely by chance, and should happen in only 10% of cases if you have set your significance threshold to 90%. If your significance threshold is higher (say, 95%), your chances of encountering a conclusive A/A test are even lower (5%).
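The arithmetic is simply one minus the significance threshold per test, and if you run several A/A tests, the chance that at least one of them looks conclusive grows quickly. A small illustration (the helper name and test counts are made up, and it assumes independent tests under the classical interpretation of the threshold):

def chance_of_false_winner(num_tests, threshold):
    # Probability that at least one of num_tests independent A/A tests crosses the threshold by chance.
    return 1 - threshold ** num_tests

print(f"{chance_of_false_winner(1, 0.90):.0%}")   # 10% for a single test at a 90% threshold
print(f"{chance_of_false_winner(1, 0.95):.0%}")   # 5% at a 95% threshold
print(f"{chance_of_false_winner(10, 0.90):.0%}")  # about 65% across ten A/A tests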

For more details on Optimizely’s statistical methods and Stats Engine, take a look at How to Run and Interpret an A/A Test.
