9 product experimentation tools for better A/B testing
Product experimentation is more than just A/B testing a button’s color: it’s a virtuous cycle of measuring what’s happening in your product, finding things to improve and build on, testing out improvements, and learning what worked.
In this chapter of our product experimentation guide, we’ll take you through nine different experimentation tools to add to your product stack to diagnose, ideate, test, and evaluate optimizations.
Experiment with confidence
Hotjar's product experience insights help your team confidently decide which experiments to run—and why.
What A/B testing tools won’t show you
On their own, A/B testing tools only show you which variation is winning, not why.
To design better experiments and more impactful feature updates, you need to understand the deeper motivations behind user behavior, rather than relying on guesswork, chance, or the opinions of large semi-aquatic mammals (aka HiPPOs: the highest paid person’s opinion).
Another drawback of A/B testing is that it can take time to generate statistically significant results (especially when you have low traffic). And when you're part of an agile product team and need to make iterative changes, that’s time you don’t have—you’re likely eager to understand what’s working (and not) straight away.
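To see why low-traffic tests can drag on for months, you can estimate the traffic an experiment needs with the standard two-proportion sample-size formula. The sketch below uses hypothetical conversion rates (a 5% baseline and a hoped-for 6%) with the usual 95% confidence and 80% power figures:

```javascript
// Rough sample-size estimate for a two-proportion A/B test,
// using the standard formula with fixed z-scores.
function sampleSizePerVariant(baselineRate, expectedRate) {
  var zAlpha = 1.96;  // two-sided test, alpha = 0.05 (95% confidence)
  var zBeta = 0.8416; // statistical power = 0.80
  var variance =
    baselineRate * (1 - baselineRate) +
    expectedRate * (1 - expectedRate);
  var effect = expectedRate - baselineRate;
  return Math.ceil(Math.pow(zAlpha + zBeta, 2) * variance / (effect * effect));
}

// Hypothetical example: detecting a lift from a 5% to a 6%
// conversion rate needs thousands of visitors per variant,
// which can mean months of waiting on a low-traffic page.
var visitorsNeeded = sampleSizePerVariant(0.05, 0.06);
```

A larger expected lift shrinks the required sample dramatically, which is one reason teams pair slow-burning tests with faster qualitative signals.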
With Hotjar, we usually have enough data within a week to make a fix. Whereas with A/B testing, it would take up to three months to validate our hypothesis.
So what’s the ideal solution? Instead of focusing on A/B test data in a vacuum, you can combine test results with qualitative insights (the kind you get from session recordings, surveys, and feedback tools); this ‘best of both worlds’ approach allows you and your team to optimize your product for user experience and business goals.
2 free product experimentation tools to get started with
Product experimentation doesn’t have to be expensive: small teams, startups, and even complete beginners can collect user insights and A/B test variations for free by using these two popular tools together.
1. Google Optimize
Google Optimize is a free website experimentation platform that lets you run A/B, multivariate, redirect, personalization, and banner tests, compare variant performance, and improve goal conversions. Anyone with a Google account can sign up, and Optimize integrates with Google Analytics, Google Ads, and select third-party insights tools (including Hotjar; more on this in a second), making it easy to combine metrics from multiple sources and add qualitative user data to your experiments.
The optional Google Optimize Chrome extension also provides a visual editor for creating test variations and previewing experiments without coding knowledge.
Pro tip: integrate Google Optimize with Hotjar Recordings to see how users browse, click, and scroll when in Optimize experiment variants, giving you clearer insight into why a winning variant is successful.
2. Hotjar
Hotjar (hi there!) is product experience insights software with a free plan for up to 1,050 sessions/month and four main tools:
Heatmaps: to see where users click, scroll, and move their mouse
Recordings: to view how users browse across an entire session
Surveys: to ask users direct questions
Feedback: to let users tag, rate, and comment on any page element
Hotjar is designed to give you actionable product insights so you can spot common areas of friction in the user experience, get ideas for improvements, and measure if your fixes are working (and why).
While Hotjar is not an A/B testing tool, it integrates with big names like Google Optimize and Optimizely, and works with most A/B testing software by using JavaScript Event targeting to filter insights by test variation.
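In practice, event-based tagging is a few lines of JavaScript fired when a variant is assigned. In this sketch, the experiment ID and variant lookup are hypothetical stand-ins for whatever your testing tool exposes; `hj('event', …)` is Hotjar’s documented Events API call:

```javascript
// Build a Hotjar event name from an experiment ID and variant index.
// Firing this event lets you filter Recordings and Heatmaps by variant.
function variantEventName(experimentId, variantIndex) {
  return "ab_" + experimentId + "_variant_" + variantIndex;
}

// Hypothetical values: replace with your real experiment ID and
// however your A/B testing tool reports the assigned variant.
var experimentId = "homepage_cta_test";
var assignedVariant = 1;

// Fire the event only if the Hotjar snippet has loaded on the page.
if (typeof window !== "undefined" && typeof window.hj === "function") {
  window.hj("event", variantEventName(experimentId, assignedVariant));
}
```

Once the event fires, the same event name appears as a filter in the Hotjar dashboard, so each variant’s sessions can be reviewed separately.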
3 popular A/B and multivariate testing tools to set up experiments
Split testing is still a big part of the product experimentation cycle, and you’ll need the right software to get it done: here are three popular A/B testing tools that let you create, measure, and report on experiments.
1. Optimizely
Optimizely is a digital experience platform with multiple tools, including web, server-side, email, and B2B commerce experimentation. With Optimizely’s popular Web Experimentation tool (previously known as ‘Optimizely X’), you can run A/B and multivariate tests and personalization campaigns to improve conversion rates and user experience.
If you’re using Hotjar: Hotjar has a one-click integration with Optimizely so you can view recordings and trigger surveys in test variations.
2. Omniconvert
Omniconvert is an experimentation tool used by product and ecommerce teams to create A/B tests, personalization experiments, pop-ups and overlays, and trigger on-site surveys.
Omniconvert’s main A/B testing tool is called ‘Explore’ and offers advanced segmentation to create experiments for specific user cohorts. You can segment by over 40 parameters, including geolocation, on-site behavior, and traffic source, to run experiments that only target a specific audience.
If you’re using Hotjar: Hotjar integrates with Omniconvert so you can view session recordings and heatmaps for Omniconvert experiments, and see the real user behavior behind winning (and losing) variations.
3. VWO
VWO is a conversion optimization platform for creating A/B, multivariate, and split tests. Its main experimentation tool, VWO Testing, lets marketing and conversion rate optimization teams build test variations in a visual editor, with no coding required.
Another tool, VWO FullStack, is designed for product teams to run server-side A/B tests without impacting performance. VWO FullStack also has a feature rollout tool, allowing you to release new features to small groups and measure their impact on users before launching them to all customers.
4 tools to get test ideas and see why your A/B tests are winning
Split testing starts with solid hypotheses founded on good insight. These four tools will help you collect foundational insights and understand why winning tests succeed, so you can double down on what’s working and create a virtuous cycle of data-backed optimization.
1. Heatmaps
Heatmaps give teams an at-a-glance overview of what users are doing in a product, making them a great starting point for developing testing hypotheses.
Once you’ve run an experiment, heatmaps give you deeper insight into why one variation outperformed another. For example, with Hotjar Heatmaps you can monitor all variations of an A/B test and compare results for a clear, visual overview of how click and scroll activity differed. Plus, results are easy to showcase to team members and stakeholders: it’s hard to argue with a heatmap!
See heatmaps in action: the team at Bannersnack, an online banner tool company, used Hotjar Heatmaps to gather evidence about how people interacted with a key landing page.
They used heatmap insights to create a new design to A/B test, which resulted in a 25% increase in sign-ups. The heatmap on the right clearly shows a reduction in ‘wasted’ clicks on other page elements after user attention was focused on the CTA button.
Hotjar heatmaps on Bannersnack’s A/B test variations
Hotjar… gives us the reasonable, user-backed foundations for our experiment hypotheses, and lets us demonstrate how effective we’ve been through our redesigns at producing KPI improvements.
2. Recordings
Session recordings (aka user replays) are playbacks of the real actions an individual visitor takes during a session on a product or website, from entry to exit. Because they capture every visitor action, recordings are great for spotting where people hesitate or get stuck, giving you ideas for fixes and improvements to test.
Recordings can also help you understand why some split testing experiments don’t result in a winner, and give you qualitative insight from low-traffic tests that may not have a statistically significant sample size.
If you’re using Hotjar Recordings, you can use Event filters to view session replays from any A/B test variant and see what users did throughout their session, and how their collective behavior affected your test results.
See recordings in action: the team at Spotahome, an online home rental platform, combines A/B testing with Hotjar Recordings to see how users interact with new features before rolling them out sitewide.
Sara Parcero, Customer Knowledge Manager, also invites everyone in the product and engineering team to regular Hotjar parties to view recording highlights, and note down the bugs to fix and opportunities to prioritize.
The Spotahome team watching Hotjar Recordings (with pizza 🍕)
3. Surveys
Surveys are a quick and easy way for teams to collect feedback directly from users. Knowing what real users actually think about and need from your product helps you build experiments around user experience and reveals issues and opportunities that might come as a surprise to your team.
If you’re using Hotjar Surveys, you can set up Events targeting to trigger surveys within A/B test variations and collect valuable qualitative data during experiments. Using A/B test surveys alongside quantitative data gives you a fuller picture of not just what users did (e.g. where they clicked), but what they thought while doing it (e.g. whether they understand what a new feature does, like the new color scheme, or feel something is missing).
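Wiring this up might look like the sketch below, assuming you’ve already configured one survey per variant in the Hotjar dashboard to appear on a matching event. The event names and variant detection here are hypothetical; `hj('event', …)` is the documented API call:

```javascript
// Map each test variant to the event name its survey targets in
// Hotjar (hypothetical names: configure these in Surveys first).
function surveyEventForVariant(variant) {
  var events = {
    control: "pricing_survey_control",
    redesign: "pricing_survey_redesign"
  };
  return events[variant] || null;
}

// Stand-in for however your A/B testing tool reports the variant.
var currentVariant = "redesign";

var eventName = surveyEventForVariant(currentVariant);
if (eventName && typeof window !== "undefined" && typeof window.hj === "function") {
  window.hj("event", eventName); // Hotjar shows the matching survey
}
```

Keeping one survey per variant means each set of responses is already segmented when you come to analyze the experiment.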
See surveys in action: the team at HubSpot Academy used Hotjar Surveys to create an exit intent survey for users leaving a course landing page. Three common issues kept coming up in survey responses, giving the team the data and confidence they needed to design and A/B test a new page template, resulting in a 10% uplift in conversions.
“The changes made from the Hotjar survey gave us enough confidence to begin designing the new page template, which we then A/B tested to get to the final version.”
–Eric Peters, Senior Growth Product Manager at HubSpot
The simple Hotjar survey used by HubSpot Academy
4. Feedback
User feedback is any information collected from customers about their experience with a product or service. Feedback can be quantitative, i.e. numerical, like a Customer Satisfaction (CSAT) score, or qualitative, like an open-ended question response.
Hotjar’s Feedback tool collects both quantitative and qualitative feedback: users can highlight any page element, rate it, and provide additional comments if desired. You can use Hotjar Feedback in A/B test variations without interrupting the user journey with a pop-up, giving you more context behind how users feel about design changes or new features.
See Hotjar's Feedback tool in action: the team at Hussle, a gym subscription service, used Hotjar Feedback on a product sign-up page and received feedback from a few users who were unable to register.
Instead of contacting each individual, the Hussle team only had to search by Hotjar User ID and view the relevant Recordings to see that a small segment of users was trying to sign up for corporate subscriptions without payroll numbers. A quick change to the error message fixed the problem in just over a week.
A Hotjar Feedback response on Hussle’s homepage