Before implementing a new product feature, startups have a variety of options to gauge customer interest and engagement. One of them is the painted door test, also known as the fake door test. We talked to our mentors around the globe about how effective this method is at reducing risk for our startups.
The painted door test is an early testing method to see whether, and how many, customers or users would engage with a new product feature or offering before you build it. For example, a startup might add a button or call to action (CTA) on its website that appears to lead to a new feature of its product or service. The catch: the button exists purely to gauge interest in that specific (new) feature, and the results allow the startup to validate whether it is something customers would truly be interested in. In short, it can be a way to de-risk a new feature rollout without spending too much time or resources.
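The mechanics described above boil down to very simple bookkeeping: show the fake button, count how many visitors see it and how many click it, and look at the resulting click-through rate. The sketch below is purely illustrative; the class and method names are invented for this example, and a real implementation would wire the click handler into the website and an analytics tool.

```python
# Minimal, hypothetical sketch of the bookkeeping behind a painted door test.
# Class and method names are illustrative, not from any real library.

class PaintedDoor:
    """Tracks impressions and clicks on a fake 'coming soon' CTA."""

    def __init__(self) -> None:
        self.impressions = 0
        self.clicks = 0

    def record_impression(self) -> None:
        """Call whenever the fake button is shown to a visitor."""
        self.impressions += 1

    def record_click(self) -> str:
        """Call from the button's click handler; a real page would also
        log the event to an analytics tool before showing this message."""
        self.clicks += 1
        return "Thanks for your interest! This feature is coming soon."

    def click_through_rate(self) -> float:
        """Share of visitors who clicked -- the core signal of the test."""
        return self.clicks / self.impressions if self.impressions else 0.0


# Example: 200 visitors saw the button, 30 clicked.
door = PaintedDoor()
for _ in range(200):
    door.record_impression()
for _ in range(30):
    door.record_click()
print(door.click_through_rate())  # 0.15
```

Whether 15% counts as validation depends entirely on the hypothesis the startup defined up front, which is why the metrics need to be agreed on before the test runs.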
According to Felix Rompis (Executive Director, Client Services, R/GA), a German Accelerator mentor based in Singapore, painted door testing is applicable across industries; however, there is no common consensus among practitioners about such a test’s reliability, credibility, and ultimately its value. That said, it may still be a good tool to have in a startup’s arsenal. A few reasons may explain this ambivalence. First and foremost, companies really need to know their customers well before teasing a non-existent feature or product. Most painted door tests start with a hypothesis about customers’ reactions, which then has to be tested, sometimes several times, before a data-driven decision can be made. Product development is an iterative process. But if the validation phase is not well planned, and the desired metrics are not defined or communicated clearly, the test could alienate customers. A poorly planned testing phase can also yield insignificant insights, for example when testers react to the wording of a button rather than to the feature itself.
Experiments like this can be helpful not only for product development but also for the team working on it. They de-risk a proposition while strengthening the “failure muscle.” As we’ve heard from founders and mentors before, failure is part of startup culture, and approaching it positively, by focusing on the learnings from experimentation, has the potential to lead to great things. It may also enable a company to pivot quickly if the data from product testing validates its hypotheses.
Sometimes it may make sense to combine several testing methods with the painted door test in order to learn even more about users’ or customers’ behavior, for example by blending it with A/B testing or heat maps where applicable. A/B testing runs an experiment with different variations and statistically analyzes which one performed better for viewers or users, whereas heat maps are a visualization tool used, for example, to see which parts of a website got the most attention. German Accelerator mentor Sebastian Müller (Co-Founder & COO, MING Labs) shares that when a feature is too large to A/B test, his company conducts a qualitative desirability study: prototyping the full user flow with a design tool and running a moderated or unmoderated user test with a sample group, followed by discussions about why a feature is or isn’t liked. Conversations with customers can be a simpler way to gather essential data and inform a decision about product or feature development.
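The statistical analysis behind an A/B test can be illustrated with a standard two-proportion z-test, which checks whether the difference in conversion rates between two variants is larger than chance would explain. The sketch below uses made-up numbers and assumes a large enough sample for the normal approximation to hold; it is one common approach, not the only way A/B results are analyzed.

```python
import math


def two_proportion_z(clicks_a: int, views_a: int,
                     clicks_b: int, views_b: int) -> float:
    """z statistic for the difference between two conversion rates."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled rate under the null hypothesis of "no difference".
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    return (p_a - p_b) / se


# Hypothetical data: variant A converted 120 of 1,000 visitors,
# variant B converted 90 of 1,000.
z = two_proportion_z(120, 1000, 90, 1000)
print(abs(z) > 1.96)  # True: significant at the 5% level (two-sided)
```

A result above the 1.96 threshold suggests the better-performing variant is genuinely preferred rather than just lucky, which is exactly the kind of signal that helps de-risk a feature decision.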
In summary, the simple advice still holds true for startups: listen to your customers. If your customers don’t see the value in the new product or feature, you either need to do a better job of convincing them of its benefits or, more likely, build a different feature or product that resonates with your target audience.