With Nosto’s A/B Testing & Optimization feature, online merchants can identify which onsite experiences most impact customer conversion by deploying A/B tests across any element on any page, and optimize experiences according to the results.
What is A/B Testing & Optimization and why does it matter to my store?
A/B Testing & Optimization is a way to test different variations of personalized elements and experiences on your website in order to improve performance. Testing is available across a merchant’s website, and can be used to test either a single element or a series of elements across multiple pages.
So why is A/B Testing & Optimization valuable to me? Well, with A/B Testing & Optimization you can uncover valuable insights into how to improve the experience for each individual customer who visits your website. With the inclusion of Continuous Optimization, you never have to worry about how much revenue you’re losing by allocating traffic to the lesser-performing variations within tests. Meanwhile, Merchandising Insights allows you to derive brand- and product-specific information that can be tied directly back to your test results.
“Where do I start?” you ask? Remember, it’s always good to have a “why” behind your test – say you notice within your Dashboard & Analytics section that a certain metric is declining, or that a certain Nosto campaign is no longer performing well. When you see that there’s room for improvement, that’s the time to test! So with that in mind, here are 10 tests to start with when first implementing A/B Testing & Optimization.
1. Test and optimize your product recommendation titles
Title optimization is arguably the quickest and easiest way to first familiarize yourself with the A/B Testing & Optimization feature. By cloning a product recommendation in your dashboard once for each title variation you’d like to test, you can then run the differently titled recommendations against each other in a test to discover whether any of the titles drives more clicks than the others.
To derive the proper insights from this test, be sure to select click-through rate as the metric you’re optimizing for, rather than a metric further down the funnel like revenue, since there are numerous variables between a product recommendation click and a purchase that could also be affecting the conversion.
If the title change is subtle, don’t expect a huge uplift, but don’t underestimate the impact either. Copy and context can have a huge influence on customer behavior. Take the example to the right, where Atkin & Thyme chose a title indicating the product recommendations are curated by an interior designer (labeled “Designer’s Picks”) in order to convey their expertise.
2. Test and optimize your homepage banner
Next, you should test your homepage banner. The majority of online stores have a big hero banner above the fold on both mobile and desktop devices, meant to immediately draw users’ attention. This is standard practice in today’s market, and is often even expected by shoppers. Given how much space these banners take up, and the prominence of the placement, they typically have a huge impact on visitor behavior (click-through rate or bounce rate). Therefore, it’s common for a banner test to reach conclusive results. For example, check out these O’Neills banners, where one showcases general ‘men’s new arrivals’ and the other showcases attire representing Ireland’s rugby team.
In this specific example, it would probably make sense for the more general variation to win over the more specific one, but what if the test ends up not finding a significant difference in click-through rate between the two? This is where Nosto’s A/B Testing & Optimization feature excels beyond similar solutions. If the test appears inconclusive at the macro level, you can start digging into Segmentation Insights and Merchandising Insights in order to draw more granular learnings from the test’s results. For example, maybe the banner promoting Ireland’s rugby team’s apparel drastically increases click-through rate for visitors in Ireland, or maybe it doesn’t increase click-through rate but does increase conversion rate on that specific apparel among shoppers who do click through. This deeper understanding of your different segments can lead to more highly personalized experiences for each of them: you can simply curate your homepage banner according to different segments’ affinities.
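To make the segment-level idea concrete, here is a minimal sketch (with made-up event data and hypothetical segment names, not Nosto’s actual reporting API) of how a click-through rate that looks unremarkable overall can diverge sharply once broken down by segment and variation:

```python
from collections import defaultdict

def ctr_by_segment(events):
    """Aggregate click-through rate per (segment, variation) pair.

    `events` is a list of dicts with keys: segment, variation, clicked.
    Returns {(segment, variation): clicks / impressions}.
    """
    counts = defaultdict(lambda: [0, 0])  # key -> [impressions, clicks]
    for e in events:
        key = (e["segment"], e["variation"])
        counts[key][0] += 1
        counts[key][1] += int(e["clicked"])
    return {k: clicks / imps for k, (imps, clicks) in counts.items()}

# Made-up events: the rugby banner looks close to the generic one overall,
# but clearly wins for visitors in Ireland.
events = (
    [{"segment": "IE", "variation": "rugby", "clicked": c} for c in (1, 1, 1, 0)]
    + [{"segment": "IE", "variation": "generic", "clicked": c} for c in (1, 0, 0, 0)]
    + [{"segment": "other", "variation": "rugby", "clicked": c} for c in (0, 0, 0, 1)]
    + [{"segment": "other", "variation": "generic", "clicked": c} for c in (1, 1, 0, 0)]
)
rates = ctr_by_segment(events)
```

In this toy data the rugby banner converts 75% of Irish visitors against 25% for the generic one, while the pattern flips for everyone else, which is exactly the kind of signal a macro-level test averages away.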
3. Test and optimize your product recommendation price anchoring
As the name suggests, the goal of upselling is to convince shoppers to purchase a slightly more expensive product than the one they’re currently viewing. Product recommendations are usually implemented for various reasons, upselling being one and color or style variation being another. This large variation in purpose makes product recommendation price anchoring a great subject for your third test.
Let’s explore how Costo used A/B Testing & Optimization to discover which product recommendation price anchoring strategy performed best for them. Costo broke their product recommendation strategy into three separate variations.
In the first variation, no price rules were applied, and consequently the recommended products ranged from accessories all the way up to more expensive products.
In the second variation, price range settings were applied to limit the recommendations to products in the same price range. These settings did not eliminate products that were the same price or slightly cheaper than the product whose product detail page they appeared on.
In the last variation, more aggressive price settings were deployed. As a result, only products that were at least 10% more expensive than the current product were recommended.
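As a rough illustration of how a relative price rule like the third variation works, here is a minimal sketch. The function name, catalog, and prices are hypothetical and not Nosto’s actual settings API:

```python
def filter_by_relative_price(candidates, current_price, min_uplift=0.10):
    """Keep only candidates priced at least `min_uplift` above the current product.

    Mirrors the aggressive third variation: only products >= +10% more
    expensive than the viewed product stay eligible for recommendation.
    """
    threshold = current_price * (1 + min_uplift)
    return [p for p in candidates if p["price"] >= threshold]

# Hypothetical catalog; the shopper is viewing a 42.00 product.
catalog = [
    {"name": "Beanie", "price": 42.0},
    {"name": "Bobble", "price": 5.0},
    {"name": "Premium beanie", "price": 55.0},
]
upsells = filter_by_relative_price(catalog, current_price=42.0)
# Only the 55.00 product clears the 46.20 (+10%) threshold.
```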
In terms of visual merchandising and seamless shopping experiences, the second variation is the most consistent with how an in-store setup would look, and therefore may seem like the right strategy. However, if the goal of the product recommendations is to drive a higher average order value, then the third strategy is the one that would most likely perform best.
As always, businesses have different objectives, challenges, and product ranges, so instead of guesswork: test which upselling strategy works best for you!
4. Test and optimize the right priority for product page recommendations
A natural follow-up to testing and optimizing your price anchoring strategy for your product recommendations is to test the actual placements of the recommendations. As stated earlier, product recommendations are often used to accomplish multiple goals, from cross-selling to upselling. For example, if your primary goal for your product recommendations is upsells over cross-sells, then it makes sense to position the more expensive alternative products higher and the supplementary add-on products lower. This type of setup is favored by many retailers, like fashion giant Asos as seen below.
However, shopping journeys differ depending on the vertical, and showcasing the upsells before the cross-sells isn’t always the highest performing option. For example, Atkin & Thyme showcases cross-sell items, such as chairs that match a table, first in order to inspire shoppers to purchase entire sets or collections of furniture.
By testing which product recommendation layout strategy performs best according to your specific goals, you could see the difference between shoppers never buying the add-on products you carry and always buying them.
5. Test and optimize product recommendation design
Keeping with the product recommendation theme, testing different product recommendation designs is a great next step on the road to becoming an expert tester. Once you’ve optimized your price anchoring strategy and your recommendation layout strategy, the only logical next step is to test and optimize the design of the recommendations themselves.
Let’s explore this through three different design examples.
Here, Gymshark showcases a single product, giving the recommendation the most possible on-screen real estate, while alluding to a carousel of other recommendations with just a flick of a finger.
Next, Atkin & Thyme showcases two full recommendations on-screen at once and alludes to more recommendations with the inclusion of pagination indicators.
Finally, Fjellsport combines the two above designs by including two on-screen recommendations while alluding to more with a fading carousel look.
Testing which design leads to more click-throughs, lower product detail page bounce rates, or more cross-sells could help you capture significantly more revenue.
6. Test and optimize the number of product recommendations you show
On-screen real estate is always limited, but if a product page doesn’t have many clickable items or visual cues, shoppers might feel stuck on the page. This can lead to frustration and cause shoppers to leave your site altogether. Product recommendations mitigate this issue. This is why a good sixth test is to figure out whether your product pages should feature only one product recommendation or several.
It’s important to note that the optimal number of recommendations on a page can also vary depending on the layout of those recommendations. For example, more recommendations may be tolerable to shoppers if said recommendations are positioned in unique ways (like the bundle layout you see below).
Optimizing the number of product recommendations showcased on your pages in addition to pricing anchoring, placement, and design can bring you one step closer to delivering a truly seamless product recommendation experience to your customers.
7. Test and optimize your product recommendation algorithm
The next test to implement in your quest for the best performing product recommendations is how different recommendation algorithms perform according to your specific needs. Merchants often ask us whether the simple bought together algorithm works better than the relationship score based algorithm. Let’s start by exploring the key algorithm options (full glossary here).
- Bought together is a very simple model that recommends products that shoppers actually buy together regularly. To work efficiently, it needs a fair amount of sales volume, as the algorithm only takes into account products that have actually been sold in the same order. As we see in the example below, products that are most commonly bought together can vary greatly, because shoppers don’t typically purchase multiple versions of the same product.
- Viewed together is similar in that it recommends products that shoppers explore within the same site visit. This model doesn’t reflect actual buying behavior, but rather browsing behavior. Typically, the end result is a list of very similar products, so it’s a commonly used option for product page cross-sellers when a customer is arguably still contemplating different options. As we see in the example below, products that are most commonly viewed together are often very similar.
- Relationship score based strikes a balance between these two, as it tracks both actions, assigning a relatively low relation score to products that have been viewed together and a much higher one to products that are actually bought together. As we see in the example below, this model ends up displaying both different versions of the same product and completely different products altogether.
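The weighting idea behind the relationship score based model can be sketched in a few lines. The weights, session format, and product names below are illustrative assumptions, not Nosto’s actual scoring:

```python
from collections import Counter
from itertools import combinations

# Illustrative weights: a co-purchase counts far more than a co-view.
VIEW_WEIGHT, BUY_WEIGHT = 1, 10

def relationship_scores(sessions):
    """Score product pairs from session logs.

    Each session is {"viewed": [...], "bought": [...]}; pairs viewed in the
    same visit earn a small score, pairs bought in the same order a large one.
    """
    scores = Counter()
    for s in sessions:
        for pair in combinations(sorted(set(s.get("viewed", []))), 2):
            scores[pair] += VIEW_WEIGHT
        for pair in combinations(sorted(set(s.get("bought", []))), 2):
            scores[pair] += BUY_WEIGHT
    return scores

# Made-up sessions: two visits browse beanie + scarf, one order buys beanie + bobble.
sessions = [
    {"viewed": ["beanie", "scarf"], "bought": ["beanie", "bobble"]},
    {"viewed": ["beanie", "scarf"], "bought": []},
]
scores = relationship_scores(sessions)
```

With these weights a single co-purchase (score 10) still outranks two co-views (score 2), which is the balance the model description above is getting at.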
A rule of thumb for which configuration should be the default setting depends a lot on the use case and store. Sites with smaller sales volumes are likely best off with the viewed together and relationship score based algorithms, while sites with high sales volumes can get more out of the bought together algorithm.
That said, cart page recommendations are likely to work better when dictated by the bought together algorithm, while product page recommendations are more likely to perform better under the viewed together rules. Like everything else on a website, this can change according to the vertical, store, or even segment. That’s what makes this a great test to run.
8. Test and optimize your product recommendation settings
The final product recommendation test on the list concerns the settings you choose when building your recommendations. This test is very similar to the algorithm test in that you are simply trying to figure out the optimal settings for the product recommendations displayed on your online store.
Different ‘Best Sellers’ configurations include ‘Most buyers’, ‘Most views’, and ‘Most buys’, which are really just different ways of measuring product popularity. You can change your setting according to the goal you are trying to achieve with the product recommendations. For example, if you’re trying to decrease bounce rate, then it most likely makes sense to select ‘Most views’ as your setting, as these products are the ones that most commonly draw shoppers to their pages. Meanwhile, if conversions are what you’re after, then one of the other two options would make more sense.
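The three popularity settings are really just three different counting rules. The sketch below is a hypothetical illustration (not Nosto’s implementation) of how they can rank the same catalog differently:

```python
from collections import Counter

def popularity(orders, views, metric):
    """Rank products by one of three popularity definitions.

    - "most_buys":   total units sold across all orders
    - "most_buyers": number of distinct orders containing the product
    - "most_views":  total product page views
    """
    if metric == "most_views":
        return Counter(views)
    if metric == "most_buyers":
        return Counter(p for order in orders for p in set(order))
    if metric == "most_buys":
        return Counter(p for order in orders for p in order)
    raise ValueError(f"unknown metric: {metric}")

# Made-up data: one order buys two beanies and a bobble, another buys one bobble;
# the beanie page gets most of the views.
orders = [["beanie", "beanie", "bobble"], ["bobble"]]
views = ["beanie", "beanie", "beanie", "bobble"]
```

Here the beanie wins on views (3), ties on units sold (2 each), and loses on distinct buyers (1 vs. 2), so the “best seller” genuinely changes with the setting.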
In addition to being able to showcase variations of popular recommendations, merchants also have the option to test showing popular products against showing new products. This may seem like a strange test to run across your overall customer population, but when digging into differences between segments, showing new products instead of popular ones could create a much more inspiring experience for repeat and loyal customers who have already browsed or even purchased the popular items.
Perhaps enabling another setting category, such as profit margin or rating, will drive more shoppers to click through to that next product page. Long story short, there’s an almost infinite number of configurations to try and test, so our advice is to bring the topic up with your customer success manager or to start a chat with our product specialists to explore which test setup could be the right one for supporting your immediate business goals.
9. Test and optimize your page narrative flow
According to numerous studies, online shoppers rarely navigate to the bottom of websites, even on mobile devices where thumb-scrolling is natural. This puts a lot of pressure on website designers and business owners to optimize the flow of their page narratives. For example, should a Best Seller recommendation be deployed immediately below your hero banner, or would category promo boxes work better, like in the example below?
Luckily, testing and adjusting elements for the optimal page flow is as simple as changing the placement of your onsite content personalization and seeing which layout performs best.
10. Test and optimize your cart cross-selling strategy
Let’s face it, your cart page is arguably the most critical stage in the shopping journey for your customers. Once a shopper is on your cart page, they either decide to proceed with the purchase or they abandon their cart (and your site) completely. Retailers often like to remove any and all distractions from their cart pages in order to minimize second thoughts, but it’s actually in these moments that shoppers are most receptive to cross-selling. This is especially true if the merchant has an assortment of supplementary products that go with the carted product, or if the order is just under the amount needed for free shipping.
Arguably, the most significant variable to test here is how different product recommendation settings affect overall cart value. Let’s explore this by showcasing how three different settings affect the performance of recommendations on the cart page.
Costo, known for its headwear made out of recycled materials, has a distinctive feature: they offer changeable bobbles for their hats. When cart based cross-sellers are set as below, the recommendations showcase products that are often bought and viewed together.
When cart based cross-sellers are configured with the relative price rule, we can immediately see that there are more supplementary products: bobbles. This can still pack a good punch in terms of cart value, assuming a reasonable number of customers find something interesting.
The third option would be setting an absolute value, which limits supplementary products almost exclusively to bobbles in Costo’s case. This is likely a safe bet, as there’s little barrier to adding an affordable add-on to the shopping cart. As a counterpoint, to match the sales volume of a single 42€ beanie, Costo would need to sell about 8 bobbles.
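The break-even figure above can be sanity-checked with simple arithmetic. The bobble price below is an assumption (the article only gives the 42€ beanie price); with a roughly 5€ bobble, the ratio comes out near eight, matching the article’s estimate:

```python
beanie_price = 42.0   # price given in the article
bobble_price = 5.0    # assumed for illustration; not stated in the article

# How many cheap add-ons equal one beanie's worth of revenue.
ratio = beanie_price / bobble_price  # 8.4, i.e. roughly 8 bobbles
```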
As stores and inventories vary, cart cross-selling settings need to reflect your inventory; the bottom line is to test which variation delivers the biggest lift in sales.
Starting with these ten tests is a great way to familiarize yourself with Nosto’s A/B Testing & Optimization feature. By the time you finish them, you’ll have improved the performance of your personalized onsite experiences so much that you’ll be rushing to start your next test.