
Predicting product success using Neuroscience and the Probability Theory

Which business owner, after dropping a cool couple of billions into R&D, wouldn’t want to predict her latest product’s success and profitability?

Today’s competitive market is home to several organizations that are constantly pioneering new products, solutions and services to maximize value for customers. Some of the most prosperous companies in the world, like 3M, spend no less than $1.8bn on research. In 2019 alone, Amazon is known to have spent $33bn on research and development.

According to an Inc article, “Too many startups start building first....” and do not employ the popular tools and methods available to help validate or predict product success. These include market research, user testing, forecasting and value frameworks like Value Disc, amongst others.

No one has cracked the winning formula, yet

Predicting the success of a product is a challenge that complex forecasting models, data-driven innovation and focus group studies have not been able to effectively surmount. Quirk’s, a marketing research media company, has documented product failure rates in the range of 75 to 95%.

Monetary impact aside, product failure negatively affects brand image and equity as well. Notorious product innovation failures like New Coke and the Segway are proof of that particular pudding, and stand testament to the fact that even huge companies mis-predict which product variations and innovations will stick with their customers. Tech giants like Google and Apple have also had their share of product failures, with Google Glass and the Apple Newton, respectively.

What about the banking sector's race to develop the most successful product?

LinkedIn, Capgemini and the non-profit Efma released a World FinTech Report stating that 50% of the world’s banking customers use at least one fintech product or service. The digital natives are acing product thinking by meeting customer needs that traditional players have ignored or failed to address: needs like end-to-end transparency, socially responsible investments and goal-based investments, amongst others.

I believe that most, if not all, of the recent banking product initiatives fall into one of the categories given below, except for those initiatives that were or are enforced by government-led regulations and compliance policies - 

Categories of banking products

Proactive fintechs trump profit-seeking incumbents

The fintechs’ approach has been to pick one problem and solve it comprehensively.
The approach? Unbundling, the great disruptor.

Challenger banks leverage machine learning and user-centric design to build a significant customer base. They then identify similar customer challenges and bundle new products into their existing offerings. A case in point is the unicorn fintech Robinhood, which created a mobile app that lets users buy and sell U.S. stocks and ETFs without any trading fees. On the back of that offering’s success, Robinhood is now providing customers with the opportunity to open checking accounts. Another fintech, Affirm, offers a buy-now-pay-over-time alternative to traditional credit cards.

But don’t incumbents own oceans of historical data? How are they still failing?

Despite the availability of data-led insights and advanced analytical tools, traditional companies’ products continue to fail at high rates. One of the key reasons: business leaders lean on expensive market research when identifying new initiatives. The problem with this approach is the foundational difference in how business leaders and market researchers process business decisions.

According to a Seidewitzgroup study, market research may actually be killing company growth. Experimenting and failing fast are great ways to learn and gather feedback, but they are not economical strategies. For example, Amazon spends billions of dollars on experiments that might never see the light of day.

David Ogilvy, British advertising tycoon, founder of Ogilvy & Mather and the ‘Father of Advertising’, is known to have said, “The problem with market research is that people don't think about how they feel, they don't say what they think, and they don't do what they say.” In other words, human biases significantly affect market research findings.

An example of this is mentioned in the book, The Choice Factory: 25 behavioural biases that influence what we buy. The book shares an experiment where people were served brownies on napkins, paper plates and china plates. While the brownies all came from the same batch, people rated the napkin-wrapped brownies lowest on taste, the brownies on paper plates second, and the dessert served on china plates highest. According to Richard Shotton, the book’s author, there are 192 known biases that exist in the world.

Early product discovery, done right

Marty Cagan, widely recognized as the primary thought leader for technology product management, is the founder of the Silicon Valley Product Group. He talks about Continuous Product Discovery as a way to assess products. The framework used for this assessment evaluates a product against four possible risk dimensions - 
  • Usability - can users figure out how to use the product?
  • Value - will customers buy the product, and will users choose to use it?
  • Feasibility - can engineers build what is needed with the time, skills and technology at hand?
  • Viability - business viability: does the product enjoy endorsement from the organization’s other business units and support functions like marketing, sales and back office?
Initiatives that pass the above risk assessments are further developed into Minimum Viable Products (MVPs).
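
To make the gate concrete, here is a minimal sketch of screening an initiative against the four risk dimensions. It is my own illustration rather than Cagan’s tooling; the 0-5 scoring scale and the pass threshold are assumptions.

```python
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    """Hypothetical 0-5 scores gathered from discovery activities
    (user tests, willingness-to-pay interviews, engineering spikes,
    stakeholder reviews)."""
    usability: float    # can users figure out how to use it?
    value: float        # will customers buy it or choose to use it?
    feasibility: float  # can we build it with the time, skills and tech at hand?
    viability: float    # do marketing, sales, back office and the rest back it?

    def passes(self, threshold: float = 3.0) -> bool:
        # An initiative advances to an MVP only if every dimension clears
        # the threshold; one weak dimension is enough to rework the idea.
        return all(score >= threshold for score in
                   (self.usability, self.value, self.feasibility, self.viability))

idea = RiskAssessment(usability=4.2, value=3.5, feasibility=4.0, viability=2.1)
print(idea.passes())  # False - a strong product the business won't back
```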

Evolution of market research: Neuroscience

Nobel Laureate Daniel Kahneman talks about how the mind processes information in two distinct ways - System 1, the brain’s fast, automatic, intuitive approach, and System 2, the mind’s slower, analytical mode, where reason dominates. It is the first that often dictates the second.

Given that human biases are a fact, the recommendation is to only consider System 1 reactions: what one says may be the product of System 2 thinking, while what one does is driven by System 1 thinking.
While methodologies that measure our emotions or implicit attitudes are not yet mainstream, there is slowly increasing interest in better understanding the human mind and its reactions to product promotion. Companies like Nielsen have set up Consumer Neuroscience labs to understand why we do what we do, and what drives us to do it.

Interestingly, UberLabs applied behavioural science to launch the ExpressPool feature. Another experiment, by Cornell psychologists, studied how limiting the purchase quantity of soup led to more sales: capping what shoppers could buy created a perception of scarcity and desirability.

The above examples are indicative of how concepts from neuroscience and psychology are improving the odds of predicting product success. In fact, a neuroscience-based approach helps evaluate some of the risk dimensions, like value and usability, in Marty Cagan’s framework.

Even physiology is being explored to help ascertain products’ market standing. An interesting space within this science is Galvanic Skin Response (GSR), which measures active and passive electrical signals and related characteristics of the skin. GSR, or skin conductance, helps recognize neuro-physiological changes associated with stress, excitement, engagement, frustration and anger.

This method is especially useful in solving qualitative research challenges, which form a significant part of user testing (itself part of the product discovery and testing phase). Skin conductance is not under conscious control; it is modulated autonomously by the sympathetic activity that subconsciously drives human behaviour.
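
To give a feel for what working with skin conductance data involves, here is a toy sketch that detects candidate skin-conductance responses (SCRs) in a synthetic signal. The sampling rate, filter settings and thresholds are illustrative assumptions, not a production neuro-lab pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 10  # assumed sampling rate, Hz
t = np.arange(0, 60, 1 / fs)

# Synthetic trace: slow baseline drift (tonic level) plus two arousal
# events (phasic responses) plus measurement noise.
tonic = 2.0 + 0.01 * t
phasic = 0.5 * np.exp(-((t - 20) ** 2) / 2) + 0.8 * np.exp(-((t - 45) ** 2) / 2)
signal = tonic + phasic + 0.02 * np.random.default_rng(1).normal(size=t.size)

# Low-pass filter out noise, then look for phasic peaks above the baseline.
b, a = butter(2, 1.0, btype="low", fs=fs)
smoothed = filtfilt(b, a, signal)
peaks, _ = find_peaks(smoothed - tonic, height=0.1, distance=5 * fs)

# Each peak is a candidate SCR: a moment of arousal the participant
# could not consciously suppress.
print(f"SCRs detected at t = {t[peaks]} s")
```

In a real study the tonic baseline would have to be estimated rather than known, but the idea is the same: the timing and amplitude of the peaks become the System 1 signal that survey answers cannot provide.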

Are banks’ prediction models really as formidable as marketed?

Historical data helps identify patterns; however, using the same data to build future-ready prediction models can blind organizations to the uncertainty involved. It’s important to ensure that the models are re-calibrated to match changing market dynamics: customer preferences, evolving needs, new competition, new business models and so on.
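
As a minimal sketch of what such re-calibration can look like, here is a walk-forward loop that refits a model only on the most recent observations, so patterns from an older market regime fall out of it. The data, window size and model choice are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def walk_forward(X, y, window=500):
    """Refit on the latest `window` observations, score on the next
    window - a crude guard against a drifting market."""
    for start in range(0, len(X) - 2 * window + 1, window):
        train = slice(start, start + window)
        test = slice(start + window, start + 2 * window)
        model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        yield start, model.score(X[test], y[test])

# Synthetic stand-in for time-ordered market/customer signals and outcomes.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (X @ rng.normal(size=5) + rng.normal(size=2000) > 0).astype(int)

for start, acc in walk_forward(X, y):
    print(f"window starting at {start}: out-of-sample accuracy {acc:.3f}")
```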

There is a great example of Relativity Media leveraging the Monte Carlo modelling method to predict a movie’s success. The mathematical model was something of a novelty and helped the production house win investors’ confidence and secure funding. The twist, however, was that the prediction model didn’t account for evolving formats of video content like streaming. Ultimately, the company had to file for bankruptcy.
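
For intuition, here is a toy Monte Carlo sketch in the spirit of, but in no way a reconstruction of, Relativity Media’s model. Every distribution and number below is an assumption; the point is that the simulation can only price the channels its inputs describe - a structural shift like streaming simply isn’t in it.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # simulated box-office outcomes

budget = 60e6  # assumed production + marketing cost
# Assumed input distributions: opening-weekend gross, and the ratio
# of total gross to opening gross.
opening = rng.lognormal(mean=np.log(20e6), sigma=0.6, size=n)
multiplier = rng.normal(loc=2.8, scale=0.7, size=n).clip(min=1.0)

gross = opening * multiplier
profit = 0.5 * gross - budget  # assume the studio keeps half the gross

print(f"P(profit > 0) = {np.mean(profit > 0):.1%}")
print(f"Median profit = ${np.median(profit) / 1e6:.1f}M")
```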

It would have boded well for Relativity Media to carry out a mark-to-market (MTM) evaluation - an approach that delivers measures of fair value that are subject to change over time (such as input variables’ weightage). In the case of financial instruments, market value is reassessed to draw a realistic appraisal of current value according to market perceptions - using, say, a company’s stock price or the current exchange rate of a currency.
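
A bare-bones illustration of the MTM idea, with made-up numbers: the same position is revalued whenever the market price moves, rather than being carried at its historical cost.

```python
def mark_to_market(quantity: float, market_price: float) -> float:
    # Fair value is whatever the market says today, not what we paid.
    return quantity * market_price

position = 1_000  # e.g. shares, or units of a foreign currency
print(mark_to_market(position, 42.10))  # value at today's price
print(mark_to_market(position, 39.85))  # same position, revalued tomorrow
```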

The eventuality of probabilistic models

No matter how sophisticated prediction models are, it is probabilistic models that reveal the chances of favourable outcomes. A probabilistic method or model is based on the theory of probability: the fact that randomness plays a role in predicting future events. Such models incorporate random variables and probability distributions into the model of an event or phenomenon.

For example, life insurance is based on the fact that we know with certainty that we will die; we just don’t know when. These models can be part deterministic and part random, or wholly random. Another apt analogy is a gaming company whose one successful product covers the losses of several failed attempts. The earlier-mentioned fintech Robinhood’s launch of a checking account offering didn’t turn out in its favour, but the decision itself cannot be blamed for the failure.
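
The gaming-company analogy can be made concrete with a small simulation; all the probabilities and payoffs below are illustrative assumptions. The expected value of the whole portfolio can be positive even though any single launch will most likely fail.

```python
import numpy as np

rng = np.random.default_rng(7)
n_portfolios = 10_000    # simulated years of launching
launches = 10            # products launched per year
cost_per_launch = 2e6    # assumed sunk cost of each attempt
p_hit = 0.1              # assumed chance any one product takes off
hit_payoff = 50e6        # assumed revenue from a single hit

hits = rng.binomial(launches, p_hit, size=n_portfolios)
pnl = hits * hit_payoff - launches * cost_per_launch

print(f"P(a given launch succeeds) = {p_hit:.0%}")
print(f"P(portfolio is profitable) = {np.mean(pnl > 0):.1%}")
print(f"Expected annual P&L        = ${pnl.mean() / 1e6:.1f}M")
```

With these numbers, roughly 65% of simulated years see at least one hit, which is why a single failed bet, like Robinhood’s checking account, says little about the quality of the decision to place it.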

I would like to sum up with the combination of factors that, when observed, will give higher confidence in product and feature launches - 
  • Deeper customer understanding should involve scientific approaches like GSR to avoid the pitfalls of human biases, thereby improving data quality and reliability
  • Data-driven prediction models should be periodically re-calibrated against both internal and external data
  • A risk-based approach to evaluating business opportunities should account for the value, usability, viability and feasibility of the product
I believe that most, if not all, product launches are akin to placing bets in the hope of better odds of success. Business leaders have to acknowledge that Black Swan moments can occur in spite of all the planning and research that goes into a new product launch.

This quote from Nassim Nicholas Taleb, author of The Black Swan and Fooled by Randomness, seems apt: “No matter how sophisticated our choices, how good we are at dominating the odds, randomness will have the last word.”

Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.
