A/B Testing
What is A/B Testing, and what do we offer?
A/B testing helps determine the most effective nudges or campaign designs. It is a process in which we compare two or more variants of a digital nudge or campaign design to see which one performs better at converting a goal event. We distribute users randomly among the variants and measure how many complete the goal event for each variant.
Let's say we want to encourage users to sign up for a newsletter. We might create two versions of a nudge: one with a short message and a bright orange button, and another with a longer message and a blue button. We would randomly show each version to a different group of users and measure how many sign up for the newsletter.
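To make the mechanics concrete, here is a minimal, self-contained Python sketch of this newsletter example. The variant names and the simulated sign-up rates are invented for illustration; they are not from any real campaign or from Apxor's implementation.

```python
import random

# Hypothetical sketch of the newsletter example: split users randomly
# between two nudge variants and compare goal-event conversion rates.
# Variant names and the simulated sign-up rates are invented.

variants = ["orange_button_short_copy", "blue_button_long_copy"]
shown = {v: 0 for v in variants}
converted = {v: 0 for v in variants}

def simulate_signup(variant: str) -> bool:
    # Stand-in for real user behaviour, with assumed true rates.
    true_rates = {"orange_button_short_copy": 0.12, "blue_button_long_copy": 0.09}
    return random.random() < true_rates[variant]

for user in range(10_000):
    variant = random.choice(variants)   # random assignment to a variant
    shown[variant] += 1
    if simulate_signup(variant):        # did the user perform the goal event?
        converted[variant] += 1

for v in variants:
    print(f"{v}: {shown[v]} shown, {converted[v]} converted "
          f"({converted[v] / shown[v]:.2%} conversion)")
```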
The results of A/B testing allow us to make data-driven decisions about which nudges are most effective. We can then use this information to optimize our nudges and create better user experiences.
Using A/B testing on our platform, you can create digital nudges that drive behavior change and improve user outcomes. We provide the following A/B testing features:
Ability to experiment with design variations
Ability to experiment with flow variations
Target Experiment Group selection: Randomized selection of the subset of users to include in the experiment, since experimentation need not be run on all users.
Variants Split along with Control Group: You can choose how the target users are split among the variants and the control group.
Completely Randomized Variant Allocation: Variants are allocated to users at random so that the results are not biased. Once a user is allocated a variant for a campaign, they will not be shown any other variant throughout the campaign period; this sticky assignment avoids the cascading effect of a user seeing multiple variants (see the allocation sketch after the note below).
Control Group is Mandatory for A/B Testing
A/B testing is a method that compares two or more variations of a feature, message, advertisement, or other user experience. However, many people overlook the importance of a control group. In reality, A/B testing should be called A/B/C testing, where "C" represents the control group. If a business is looking for the best solution for a new implementation, it should definitely use A/B testing; but for the experiment to be effective and achieve real success, measuring outcomes against a benchmark is essential, and the control group is the most effective benchmark to use.
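Sticky, randomized allocation of this kind is commonly implemented by hashing the user and campaign IDs instead of storing a fresh random draw per user. The following is a minimal sketch of that idea, assuming an illustrative 50% target group and a 40/40/20 split between two variants and the control; the hash scheme, fractions, and function names are assumptions for this example, not Apxor's actual implementation.

```python
import hashlib
from typing import Optional

def bucket(user_id: str, campaign_id: str) -> float:
    """Map a (user, campaign) pair to a stable value in [0, 1).

    Hashing instead of a fresh random draw keeps allocation sticky:
    the same user always lands in the same bucket for a campaign,
    so they never see more than one variant.
    """
    digest = hashlib.sha256(f"{campaign_id}:{user_id}".encode()).hexdigest()
    return int(digest[:15], 16) / 16 ** 15

# Illustrative configuration, assumed for this sketch: half the users
# enter the experiment, split 40/40/20 between two variants and control.
TARGET_FRACTION = 0.50
SPLITS = [("variant_A", 0.40), ("variant_B", 0.40), ("control", 0.20)]

def allocate(user_id: str, campaign_id: str) -> Optional[str]:
    b = bucket(user_id, campaign_id)
    if b >= TARGET_FRACTION:
        return None                    # user stays outside the experiment
    b /= TARGET_FRACTION               # rescale to [0, 1) within the target group
    cumulative = 0.0
    for variant, share in SPLITS:
        cumulative += share
        if b < cumulative:
            return variant
    return SPLITS[-1][0]               # guard against floating-point rounding

# The same user gets the same answer on every call.
print(allocate("user-42", "newsletter-nudge"))
```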
Goal Event Selection: To decide the winner among the variants, choose a goal action that indicates the success of the campaign; conversions on this event measure the effectiveness of each variant.
Auto / Manual Publish Winner: Once the experiment is done, you can opt to roll out the winning variant to all users if a statistically significant winner is concluded. Otherwise, the winning variant will not be published automatically.
A/B Results: We prefer the Bayesian approach to A/B testing over the Frequentist one, as it has advantages of its own, especially when smaller samples are available. For each variant, the following are available (a computation sketch follows the list):
Campaign Viewed Users
Converted Users (users who performed the goal event)
Conversion Percentage
Overall Improvement over the Control Group
Chance to be a winner
Chance to beat the control
Expected Loss
Expected Improvement
All other metrics that are available for a campaign
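As a rough illustration of how such Bayesian metrics can be computed, the sketch below draws Monte Carlo samples from Beta-Binomial posteriors over each variant's conversion rate. The view and conversion counts, the uniform Beta(1, 1) prior, and the number of samples are all assumptions for the example; Apxor's actual model is not documented here.

```python
import numpy as np

# Sketch of Bayesian A/B metrics via Beta-Binomial posteriors.
# The counts below are invented; the uniform Beta(1, 1) prior and the
# number of Monte Carlo samples are assumptions for this example.

rng = np.random.default_rng(0)
data = {                     # variant -> (viewed users, converted users)
    "control":   (5000, 400),
    "variant_A": (5000, 450),
    "variant_B": (5000, 430),
}

# Posterior over each variant's conversion rate:
# Beta(1 + conversions, 1 + non-conversions).
samples = {
    name: rng.beta(1 + conv, 1 + views - conv, size=100_000)
    for name, (views, conv) in data.items()
}

names = list(samples)
matrix = np.column_stack([samples[n] for n in names])
best_per_draw = matrix.argmax(axis=1)

for i, name in enumerate(names):
    chance_winner = (best_per_draw == i).mean()
    chance_beats_control = (samples[name] > samples["control"]).mean()
    expected_loss = (matrix.max(axis=1) - samples[name]).mean()
    expected_improvement = (samples[name] / samples["control"] - 1).mean()
    print(f"{name}: P(winner)={chance_winner:.1%}, "
          f"P(beats control)={chance_beats_control:.1%}, "
          f"E[loss]={expected_loss:.4f}, "
          f"E[lift vs control]={expected_improvement:+.1%}")
```

An auto-publish rule such as "roll out the variant whose chance to be the winner exceeds 95%" (an assumed threshold, for illustration) could then be layered on top of these posterior samples.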
Getting Started with A/B Testing
Creating an A/B Test campaign in Apxor is quick and simple. Click on the A/B icon in the top right corner.
In the pop-up that appears, enter a name for this design variant and click on Add.
Click on the black "plus" ➕ icon to add another variant.
Select a variant from the top bar and start adding templates to it.