How to create a culture of product experimentation to deliver what users want
Digital products offer you a wide range of options to solve a customer’s problem. Unfortunately, it can sometimes be difficult to find the solution that your customers really want.
You need a reliable way to home in on the right solution.
Enter: product experimentation.
Imagine you're on a product team working on a shopping list app for a grocery store chain, and you want to increase the number of people in your Loyalty Program.
You’ve learned from feedback that people would find the app more useful if they could add specific items to their shopping list (a brand of tomato paste that your store sells) instead of just typing generic text (tomato paste).
Your team talks through the problem and comes up with several ideas, including adding the capability to search the store's product catalog, autocompleting when someone starts typing, or even uploading recipes.
The team shares a lot of good ideas, but you aren’t sure which one will work best. You need some way to find out what your customers want—and what they'll use.
So you experiment.
You decide to build a simple product catalog search because it’s quick to implement, and you think it will increase the number of people who use your app. Whether the experiment increases users or not, you’ll learn something and have more insight into the best way to help customers get value out of your shopping list app.
In this scenario, you addressed one challenge. Imagine what you could accomplish if you incorporated experimentation across your organization and established a product experimentation culture.
Here’s a look at what a product experimentation culture is and how you can adopt one for your team.
Do you need solid evidence to pick the right solution for your customer’s problems?
Hotjar helps you understand how users feel about your product experiments so you can make the right product decisions for your customers.
What a product experimentation culture is and why it’s important
A product experimentation culture exists when a product team can accept the uncertainty in product development and is excited to try different methods and product iterations to discover solutions.
A product team with an experimentation culture doesn't see experimentation as an option of last resort—they see it as one of the most powerful user research techniques they can use when they need to truly understand customer behavior.
When your whole organization has a culture of experimentation, anyone can present new ideas to solve a customer problem, regardless of their team. Leaders encourage experimentation because they're willing to accept the occasional 'failure' in exchange for the innovation and learning that testing produces. Leaders also reinforce the culture by favoring data from experiments over opinions as the basis for decision-making.
🌎 Real-world example: Booking.com has built an especially effective culture of experimentation—they run nearly a thousand tests simultaneously and over 25,000 tests a year by establishing core tenets such as “anyone at the company can test anything—without management’s permission."
Their rapid and continuous experimentation has helped the company become the world’s leading digital travel company.
Why is a product experimentation culture important?
When you create a culture of product experimentation, you encourage your product team to be more curious and willing to iterate on their ideas and initiatives. A culture that encourages experimentation doesn't see 'failed' experiments as something to avoid, but rather as learning opportunities.
A product experimentation culture also drives product teams to better understand your customers and their needs. When you run frequent experiments focused on learning what your customers will and will not use in your product, you’ll gain a great deal of empathy for their needs and increase your chances of creating customer delight.
Make experimentation a regular part of your product development activities so you can:
Learn: product teams can collect user behavior data and get meaningful, actionable feedback about the impact of their product changes. (More on how to get this feedback later.)
Create customer delight: ongoing experimentation means you can continuously improve the product experience (PX) to deliver a product that truly delights your customers.
Prioritize brilliantly: when you run several small experiments on an ongoing basis, continuous discovery helps you double down on the right changes and manage your product backlog.
Mitigate risks: frequent small experiments help you avoid the risks involved with complicated releases. You can test MVPs or a series of small changes to understand how each one impacts customer experience individually.
Support your assumptions with data: share actual experimentation results to back up your assumptions. As a result, you can give your stakeholders confidence that you know what your customers are looking for.
How to build a thriving product experimentation culture
You’ve decided that experiments need to be a foundational part of your product development process, but now you're wondering how to make it part of your product culture.
Here are some steps you can take to ensure experimentation becomes an essential technique for your team to create a product that delights your customers:
Establish teams trained in experimentation
Establish product teams made up of individuals who are excited to experiment and see experimentation as a vital part of product development.
Make sure each product team includes a product trio consisting of a product manager, product designer, and tech lead. This trio ensures that the people who perform discovery and come up with the experiments are looking at the product from the perspectives of value and viability (product manager), usability (product designer), and feasibility (tech lead).
These three team members will focus on activities intended to discover insights about your customer’s needs—and how your product can satisfy those needs—and will include experiments of different types based on the information you’re trying to uncover.
But even though the product trio are the primary discovery people on your team, everyone needs to understand:
How to create and conduct experiments quickly
When to use experiments
When enough is enough (you have sufficient data to make a decision)
To help spread this understanding, include some people on your team who have experience working with product experimentation data. These folks can coach your team through running product experiments and help them recover from 'failed' ones, so the team learns from the experience without being discouraged from trying ideas that are a little out of the ordinary.
Set up your data properly
To have a meaningful and successful experiment—where 'success' is defined by your team having learned something new about how your product impacts the customer experience—you need to start with a proper data setup.
A proper data setup includes knowing:
The specific metric you’re trying to impact and how you will measure that metric
The value of those metrics before you start the experiment
The goal you want to reach for the metric
The value of the metric once the experiment is over
For example, if you’re trying to impact product conversion rates, you need to know:
What the current conversion rate is—that's your baseline
What conversion rate you’d like to get to—your target
Once you know your baseline and target, you can start testing changes to your product and see how each change impacts the conversion rate. When you find a change that allows you to hit your target, you know you can stop the experiments for this goal and move on to experiments for the next one.
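The baseline-and-target loop above can be sketched in a few lines of Python (the user and purchase counts here are illustrative, not from the article's scenario):

```python
def conversion_rate(purchases: int, users: int) -> float:
    """Fraction of users who purchased in the period."""
    return purchases / users

# Illustrative numbers: the baseline is measured before the experiment,
# the target is agreed with the team up front.
baseline = conversion_rate(50, 5000)   # 1.0% before the change
target = 0.02                          # aim for 2%

# After the experiment runs, compare the new measurement to the target.
result = conversion_rate(110, 5200)    # roughly 2.1% after the change
hit_target = result >= target          # if True, stop experimenting on this goal
```

If `hit_target` is false, you keep iterating with the next change; either way, the comparison is only meaningful because the baseline and target were fixed before the experiment started.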
To measure the results of your experiment, though, you need access to actionable, relevant data you can use to inform your decision-making.
The more data you have related to the experiments you do, the more qualitative and quantitative evidence you’ll have to inform your decisions. That’s why when you set up your experiment you need to be explicit about how you'll measure the metric and where you’ll get that data. You may find you have existing dashboards or reports that will provide the information you’re looking for—or you may find that there’s some additional work you have to do to get the data you need.
Pro tip: product manager Kent McDonald uses the following approach to help product teams build a clear understanding of the metrics they’re trying to move.
For each metric his team works on, he specifically defines a key set of attributes:
Name: a unique name for the metric that everyone involved understands—for example, conversion rate.
Units: a description of what you're going to measure, often a definition of the name. For example, users who purchase your product (i.e. users who convert).
Method: a clear explanation of how you measure the metric. For example, divide users who purchase your product in a month by the total number of users in that month.
Baseline: the current value of the metric. For example, 1%.
Target: the target value of the metric you want to achieve within a specific timeframe. For example, increase to 2% within a quarter.
Constraint: the value of the metric you're trying to avoid. If your metric hits this value, the changes you're making are having a negative effect. For example, a drop from 1% to 0.75%.
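One way to keep the whole team aligned on these attributes is to capture them in a shared structure. Here's a minimal Python sketch, assuming a 'higher is better' metric; the MetricDefinition class is one possible encoding, not part of McDonald's approach, and the field values mirror the examples above:

```python
from dataclasses import dataclass

@dataclass
class MetricDefinition:
    name: str          # unique name everyone involved understands
    units: str         # what you're going to measure
    method: str        # how the metric is calculated
    baseline: float    # current value before the experiment
    target: float      # value you want to reach
    constraint: float  # value that signals a negative effect

    def evaluate(self, measured: float) -> str:
        """Classify a post-experiment measurement against target and constraint.

        Assumes higher values are better; flip the comparisons for
        metrics where lower is better.
        """
        if measured <= self.constraint:
            return "negative effect: stop and investigate"
        if measured >= self.target:
            return "target hit: stop experimenting on this goal"
        return "keep iterating"

conversion = MetricDefinition(
    name="conversion rate",
    units="users who purchase your product",
    method="purchasing users in a month / total users in that month",
    baseline=0.01,      # 1%
    target=0.02,        # 2% within a quarter
    constraint=0.0075,  # 0.75%
)
```

With a definition like this written down, anyone on the team can check a measurement against the same baseline, target, and constraint instead of arguing from memory.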
Be willing to embrace failure
Accepting failure is an essential aspect of product experimentation: if your team is afraid to fail, they won’t try different experiments and test out new ideas. Your team needs to be comfortable taking recoverable risks to gain more insight into your customers’ needs.
Some experiments will go just as you expected, but many more will yield surprising results. Those results are only 'bad' if you don’t learn anything from them.
Because experiments gauge the impact of product changes on your customer’s experience, a failure occurs when a change you made has an adverse effect on the customer experience.
But it’s not only your team who needs to accept failed experiments—you also need to establish the expectation with everyone in your organization that the best way to ultimately get your product just right is to occasionally get it wrong. When you have that shared expectation, you’ll have more support to do the experimentation you need to do.
Embed experimentation into your organization’s DNA
A side benefit of setting the expectation that your team will occasionally run failed experiments is that people in your organization will become more accepting of experimentation in general. And when people inside your organization accept your experimentation, they're more likely to try experiments themselves. Once that happens, experimentation will spread across the entire company.
You can spread acceptance of experimentation in your company through some well-placed cross-functional collaboration. Involve people outside of your team in brainstorming sessions to identify potential experiments, determine which experiments to run first, and identify potential risks.
To further spread acceptance of experimentation activities in your organization, share regular updates on your experiments and their results—whether they are expected or unexpected. The key isn't to paint the experiment results as success or failure but to share what you learned and the impact the experiment had on your organization.
Challenges of experimentation culture
Because experimentation involves the risk of failure, there are some challenges to look out for, including:
Testing with the wrong metrics: when you form experiments to impact a metric that isn't tied to improving the customer experience, you spend time making changes that may not drive progress to your ultimate goals.
Falling for false positives: if you start an experiment with a particular hypothesis and early results appear to confirm it, don't stop before you understand what the outcome is actually telling you. Ending a test early inflates the odds of a false positive, so a little skepticism is reasonable when you evaluate results.
Experimenting without enough traffic: if you stop your experiment before you collect a sufficient number of results, you may only have seen a subset of customers who share the same characteristics. Without a representative sample, you can't see how the change impacts all the key segments of your customer base.
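To get a feel for how much traffic counts as 'enough', a standard two-proportion power calculation gives a rough per-variant sample size. This is a textbook formula, not something prescribed here, and the 1% and 2% figures reuse the earlier conversion-rate example:

```python
import math

def required_sample_size(p1: float, p2: float,
                         z_alpha: float = 1.96,    # two-sided significance 0.05
                         z_beta: float = 0.8416    # statistical power 0.80
                         ) -> int:
    """Users needed per variant to detect a change from rate p1 to rate p2."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Detecting a lift from a 1% to a 2% conversion rate needs roughly
# 2,300 users in each variant before the result is trustworthy.
n_per_variant = required_sample_size(0.01, 0.02)
```

The point isn't the exact number; it's that small lifts on low baseline rates need far more traffic than intuition suggests, which is why stopping early so often misleads.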
But keep in mind: while these challenges are real, when your organization has a culture of experimentation, you can view these risks as learning opportunities rather than reasons not to experiment.
How Hotjar helps with successful experiments
To successfully run a product experiment, you need to collect user behavior data and get meaningful, actionable customer feedback about the impact of your product changes.
Hotjar can get you the data and feedback you need and help you understand why your experiment succeeded or failed. Here's how:
Heatmaps
Heatmaps show you, in aggregate, how people use pages of your site after a change. You can see where people spend the most (or least) time on each page and notice how they interact with different areas and elements.
For example, let’s say you ran an experiment to add a CTA button at the top of your sales page to increase signups, but the new button didn’t impact your conversion rate.
You can look at a heatmap of the sales page to see if people were spending a lot of time in the area where you put the button. If you notice that people hang out in the area where you added the button but don’t see a lot of clicks, you can then test changes on the button itself to increase clicks. If you notice people don’t hang out much in the area you put the button, heatmaps will help you identify other possible places for the new button based on people’s usage patterns.
Pro tip: use Hotjar Heatmaps to collect actionable insights when running A/B tests.
Set up a separate heatmap to track each page if you’re running an A/B test with different URLs. For example, if your control page is company.com/test-control and your variation page is company.com/test-variation, set up two heatmaps—one for each of them:
One heatmap with the targeting rule Contains: test-control
Another heatmap with the targeting rule Contains: test-variation
You can use Events for Heatmap Targeting if you're running an A/B test on the same URL. For example, if your A/B test randomly loads different content each time a new person lands on the page, you can fire an event to record which variation they were sent to and capture their movements in the appropriate heatmap.
Now, to dig even deeper and understand why the new button didn’t have the impact you expected, you can watch session recordings:
Session Recordings
Session Recordings show you how individual users navigate and experience your site, from page to page. You can see how they move their mouse, where they click, where they get stuck or experience blockers like broken links or elements, and watch for indications of pain points like rage clicks and u-turns.
Using the same example from above, after watching recordings of sessions on your sales page, you may realize that people spend time around the CTA button but don’t click on it. This may be an indication that there’s something else influencing their decision, and could be a sign that you need to experiment with the text or design of the button.
Surveys and Incoming Feedback
There are times when you need more information from your users, even after watching how they use your site. This is especially the case when you want to know why they behave a certain way on your site, or you want some insight into their thoughts or feelings when they come across your experiment.
That’s where Hotjar's Surveys and Incoming Feedback come in handy. For example:
Add a feedback widget to a page with an experiment to find out how your users feel about the changes.
Add a survey to a page with an experiment when you have specific questions you want to ask people who've just experienced a change on your site.
When you run an A/B test and want to compare people's reactions to the two versions, you can set up a survey on both pages, ask the same questions, and compare the results. This gives you insight into how each variation affects users' thoughts and feelings, something that isn't readily apparent from their actions alone.