On-Site Recommendations: A/B-Testing

Comparing one Nosto set-up with another, and Nosto versus no Nosto

Written by Dan Macarie

A/B testing (also referred to as split testing) is a method of comparing two versions of a webpage or any given system against each other to determine which one performs better based on pre-defined key performance indicators.

In e-commerce, and more precisely in conversion rate optimization (CRO), it is common practice to test different elements of a website, such as the overall design of pages, the color of CTA buttons, banners, or the way pricing and shipping strategies are displayed, to give some examples.

Nosto’s personalized product recommendations can be tested in order to find the optimal set-up. The hypothesis here is that a small change in recommendation settings can yield a positive performance impact in terms of increased click-through rates (CTR) and, ultimately, conversions.

If you wish to test Nosto against a different set-up (e.g. the absence of Nosto, or product recommendations provided by a third party), some guidelines must be followed in order to observe statistically sound KPIs when evaluating the results.

This document provides guidelines on how to optimise your Nosto recommendation set-up with any A/B testing tool. It should serve as a guide on how to configure your testing solution correctly with regards to Nosto so that behavioural tracking and statistics are reported accurately.

Common Technical Mistakes

Before implementing any A/B test (or multivariate test), please be aware of the following common technical mistakes, which should be avoided:

  • Removing the full Nosto script from some variations. This causes problems with data processing and recommendation learning, Nosto statistics, and Triggered Emails.

  • Hiding the Nosto recommendation elements instead of removing them. With this set-up Nosto still builds the recommendations and assumes that they were shown. This causes discrepancies in Nosto’s behavioural data and skews Nosto’s analytics, for example recommendation click-through rates (see the sketch after this list).

  • Observing large unexplained discrepancies in conversion and revenue tracking when compared to the backend numbers. If there are discrepancies, identify their root cause and, if possible, fix them before drawing conclusions from the results.
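To make the second mistake concrete, here is a short sketch contrasting the anti-pattern with the supported approach. The nosto_element class follows Nosto's standard placement markup; treat the snippet as an illustration rather than production code.

// Anti-pattern: the placement is merely hidden, so Nosto still fills
// it and records an impression for a recommendation nobody can see.
document.querySelectorAll('.nosto_element').forEach(function (el) {
  el.style.display = 'none'; // skews CTR and other behavioural data
});

// Better: keep the Nosto script in place for tracking, but tell it
// not to build recommendations at all for this variation.
nostojs.init('[accountID]', {disableRecommendations: true});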

With the pitfalls above in mind, let’s explore how to set up an A/B test with Nosto.

Basic Tips For Testing

  • Run your tests long enough. Seasonality, such as weekends or paydays, can dramatically affect both your test treatments and visitor behaviour.

  • Repeat tests if the results look too good to be true, and preferably validate them whenever possible.

  • Make sure that returning visitors experience the same variation as the one they saw on a previous visit. Variation assignment should be tied to the user, typically via a cookie, and should not be completely random on every page load (see the bucketing sketch after this list).

  • Use the correct metrics as KPIs. For example, if you are testing just one recommendation on the home page, it sits at the very beginning of the user journey: even if a customer clicks and adds something to the cart, missing delivery or payment options might be the real reason they did not buy. Since there are many unknown unknowns, in this particular test CTR and perhaps bounce rate are better metrics to use than conversions.

  • Above all, gather enough data for whatever you are testing. If conversions are your main metric, you might need thousands of them for the test to reach sufficient statistical power (see the sample-size sketch after this list). For obvious reasons, using CTR as a metric produces results faster.

  • If possible, segment your tests. Traffic from newsletters or ads typically behaves very differently from organic or direct traffic. Choose your optimisation targets carefully and use the proper metrics, depending on what you are optimising for.

  • A proper optimisation strategy is neither a sprint nor a marathon: it should be part of your organisation’s DNA. Don’t just look to validate your hypothesis of an expected positive impact; also investigate when your hypothesis and results turn out negative. Run two-tailed tests instead of one-tailed ones where applicable.
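To make the returning-visitor point concrete, here is a minimal sketch of deterministic, cookie-based bucketing. The cookie name ab_nosto_recos and the 50/50 split are illustrative assumptions; most A/B testing tools handle this for you.

// Assign the visitor to a variation once, persist it in a cookie, and
// reuse the stored value on every subsequent visit.
function getVariation() {
  var match = document.cookie.match(/(?:^|; )ab_nosto_recos=([^;]+)/);
  if (match) {
    return match[1]; // returning visitor: keep the earlier assignment
  }
  var variation = Math.random() < 0.5 ? 'ORIGINAL' : 'VARIANT';
  // Persist for a year so the experience stays stable across visits.
  document.cookie = 'ab_nosto_recos=' + variation +
    '; path=/; max-age=' + 60 * 60 * 24 * 365;
  return variation;
}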
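And to give a feel for why thousands of conversions may be needed, below is a back-of-the-envelope sample-size sketch using the standard two-proportion approximation. The baseline conversion rate and minimum detectable lift are made-up inputs, not Nosto benchmarks.

// Approximate visitors needed per variation for a two-tailed test at
// 95% confidence (z = 1.96) and 80% power (z = 0.84).
function sampleSizePerVariation(baselineRate, minDetectableLift) {
  var zAlpha = 1.96, zBeta = 0.84;
  var p = baselineRate;
  var delta = baselineRate * minDetectableLift; // absolute difference
  return Math.ceil(
    2 * Math.pow(zAlpha + zBeta, 2) * p * (1 - p) / (delta * delta)
  );
}

// Illustrative numbers: a 2% baseline conversion rate and a hoped-for
// 10% relative lift require roughly 77,000 visitors per variation,
// which is why CTR-based tests finish much faster.
console.log(sampleSizePerVariation(0.02, 0.10));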

Implementation Guidelines

If you are using the direct include, you first need to switch to the embed approach. Make sure you have the embed script at the top of the HTML document. The embed script can be found here.
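For orientation, the embed generally follows the command-queue pattern sketched below. This is an illustrative reconstruction, not the verbatim Nosto snippet (the stub internals are assumptions); always copy the real script from the link above.

// Illustrative reconstruction of a command-queue embed, placed in a
// <script> tag at the top of the document.
(function (w, name) {
  // Queue callbacks passed as nostojs(function (api) { ... }) until
  // the Nosto client script has loaded and drains the queue.
  w[name] = w[name] || function (cb) {
    (w[name].q = w[name].q || []).push(cb);
  };
  // Record init arguments so the client script can pick them up later.
  w[name].init = w[name].init || function () {
    (w[name].initQueue = w[name].initQueue || []).push(arguments);
  };
})(window, 'nostojs');

// The initialisation call: this is the line that step 1 below
// removes and moves into your A/B testing tool.
nostojs.init('[accountID]');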

Move the Nosto initialisation call from the embed into your A/B testing tool. Here are the steps:

1. Remove the nostojs.init('[accountID]'); call from the embed script, keeping the rest of the embed in place.

2. Now set up a test where Original does not show recommendations and Variation does.

3. Go to your A/B testing tool and add the following script block for “Original”:

// Original: Nosto stays initialised for tracking, but recommendation
// rendering is disabled, so this group sees no recommendations.
nostojs.init('[accountID]', {disableRecommendations: true});
nostojs(function(api) {
  api.experiments([
    {id: 'NOSTO_ADD_RECOS',
     name: 'Add Nosto recommendations',
     variation: 'ORIGINAL',
     variationName: 'Original version, no Nosto recommendations'}
  ]);
});

4. In your A/B testing tool, add the following script block for “Variation”:

// Variation: Nosto is initialised normally, so recommendations
// are rendered for this group.
nostojs.init('[accountID]');
nostojs(function(api) {
  api.experiments([
    {id: 'NOSTO_ADD_RECOS',
     name: 'Add Nosto recommendations',
     variation: 'VARIANT',
     variationName: 'Variation with Nosto recommendations shown'}
  ]);
});
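Note that with both blocks the full Nosto script stays in place, so behavioural tracking and Triggered Emails keep working for every visitor; the disableRecommendations flag only suppresses recommendation rendering for the group that should not see them, and the api.experiments call reports each visitor's variation to Nosto.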

Ensure your test conversion tracking numbers match your backend numbers

You should always compare the number of orders and the revenue logged by your testing set-up against the authoritative numbers from your backend. If the numbers do not match, make sure you understand why.
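As a final sanity check, a quick sketch of that comparison follows; the figures and the 5% tolerance are made-up placeholders, not Nosto guidance.

// Compare order counts and revenue from the testing tool against the
// backend for the same period; flag anything beyond a tolerance.
function discrepancy(testTool, backend) {
  return Math.abs(testTool - backend) / backend;
}

var TOLERANCE = 0.05; // 5% is an illustrative threshold

var ordersOff = discrepancy(1180, 1243);     // placeholder order counts
var revenueOff = discrepancy(98200, 104900); // placeholder revenue

if (ordersOff > TOLERANCE || revenueOff > TOLERANCE) {
  console.warn('Investigate tracking before trusting the test results.');
}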
