
AI: Augmented – Implementing 21st century decision making

Artificial intelligence suffers from its own hype: expectations rest on problematic assumptions about what AI can and cannot do. Our practice shows that it is possible to design a 21st century, human-centric AI-based system using an augmented approach that can outperform machine-centric approaches with far fewer computing resources and less data.

The challenge

One of the biggest challenges we see with customers attempting to leverage AI techniques to create customer value is that stakeholders often hold inaccurate assumptions about what AI can and cannot do. These inaccuracies can inadvertently leave an organization in a 20th century mindset with respect to decision-making.

In practice, we observe an overemphasis on machine-centric approaches such as deep learning, on the importance of vast datasets, and on the massive computing infrastructure required to run all of it.

In contrast, our practice shows that it is possible to design human-centric, augmented AI-based systems that can outperform standard approaches with far fewer computing resources and less data. This view adds to Thoughtworks’ emphasis on evolutionary organizations. In this article, we outline an alternative: an augmented approach to AI.

What we see in the field

As we engage with our clients, we often see customers attempting to leverage AI techniques to feature engineer outcomes. The term has multiple meanings across general software development and AI. By “feature engineering”, we refer to the AI meaning, in which data scientists select, manipulate, and transform raw data into features that can be used in supervised learning.
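
To make the term concrete, the sketch below derives per-customer features from raw transaction rows, the kind of input a supervised model could consume. It is a minimal illustration only: the dataset, column names, and derived features are hypothetical, not drawn from any client work.

```python
# A minimal, hypothetical sketch of feature engineering in the
# supervised-learning sense: raw events in, model-ready features out.
import pandas as pd

# Raw transaction data as it might arrive from an operational system
raw = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "order_ts": pd.to_datetime(
        ["2024-01-03", "2024-02-10", "2024-01-15", "2024-01-20", "2024-03-01"]
    ),
    "amount": [120.0, 80.0, 15.0, 22.0, 19.0],
})

as_of = pd.Timestamp("2024-03-31")  # reference date for recency features

# Select, manipulate, and transform raw rows into per-customer features
features = raw.groupby("customer_id").agg(
    order_count=("amount", "size"),  # engagement signal
    avg_amount=("amount", "mean"),   # spend level
    days_since_last=("order_ts", lambda ts: (as_of - ts.max()).days),  # recency
)
print(features)
```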

We see these practices as valuable, but we also recognize that they have limits. Specifically, when we break problems up along an axis of predictable versus uncertain, we begin to see the value of alternative approaches. For example, we find machine-centric approaches very useful when the problem space is predictable, or to put it another way, linear. In such cases, large datasets may already exist for the problem space and can be quite useful.
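
As a small illustration of that machine-centric case, consider a stable, roughly linear relationship learned from historical data alone; the scenario and data below are synthetic and purely illustrative.

```python
# A minimal sketch of the "predictable problem space" case: when the
# underlying relationship is stable and roughly linear, a model fitted
# to historical data alone predicts well. Synthetic, illustrative data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(1000, 1))            # e.g. historical marketing spend
y = 3.0 * X[:, 0] + 5.0 + rng.normal(0, 1, 1000)  # stable linear response plus noise

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)  # recovers roughly 3.0 and 5.0
```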

However, when the problem space lies more in the realm of complexity and uncertainty, different methods are required. In these cases, we advocate approaching the problem using the agile principle of test-and-learn, often leveraging new data. For this class of problem, that means an augmented approach: one that emphasizes the domain expertise of humans and positions AI-based computational capability in service of their expert reasoning capabilities.
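
What might such a test-and-learn loop look like in practice? Here is a minimal sketch under hypothetical assumptions: domain experts propose candidate variants, a small experiment creates new data, and the experts review the evidence before designing the next test. The variant names and response rates are invented for illustration.

```python
# A minimal, hypothetical test-and-learn sketch: experts propose variants,
# a small experiment generates *new* data, and the results inform the next
# hypothesis. The variants and their simulated response rates are invented.
import random

TRUE_RATES = {"current_offer": 0.05, "expert_hypothesis": 0.08}  # unknowable in practice

def run_experiment(variant: str, n: int = 500) -> float:
    """Expose n customers to a variant; return the observed conversion rate (simulated)."""
    conversions = sum(random.random() < TRUE_RATES[variant] for _ in range(n))
    return conversions / n

# Each cycle produces fresh evidence for the domain experts to reason about
observed = {variant: run_experiment(variant) for variant in TRUE_RATES}
print(observed)  # experts compare observed rates and design the next test
```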

One of the most problematic misconceptions in AI is that we need to harvest all possible data about the problem before we can even try to extract any value. We challenge this view. Does historical data tell us what customers will want next? Can you react to changes in your market and customer behavior by looking at the past? Can you run an optimization model for your logistics chain based on how logistics were run yesterday? Can you design a future-proof strategy by extrapolating last year’s numbers into the next? We think not. We believe there are other, more effective approaches, emphasizing rapid value creation, that are not as well understood by the market.

This fundamental misconception of AI is built upon the idea that we can ultimately predict the future as long as we feature engineer enough about the world and the context in which we are making predictions. For example, many AI projects are stuck in a never-ending loop of not having enough data points about the problem at hand to make an accurate prediction. So the resolution is to find more data points: weather, competitor activities and campaigns, market dynamics, socio-economic factors of customers – all in a desperate attempt to find something that correlates with the targeted variable. This has led to some behavioral antipatterns.

Behavioral antipatterns

We commonly see a number of behavioral antipatterns that can significantly inhibit the value that AI techniques can deliver to organizations and their key stakeholders. These include:

  • AI approaches derived from general market media narratives on the topic, leading to a copycat pattern that does little to develop distinctive competencies within the organization;
  • An intense focus on “AI use cases”, attempting to point AI technology at something that could be of value to the organization, often with very little substantive identification of those use cases or production of customer value;
  • A preponderance of “pilots” and “proofs of concept” with AI technology that never seem to arrive at any conclusion;
  • A focus on harvesting all possible data about the problem space before any attempt to extract value is made, which in turn leads to;
  • AI projects stuck in a never-ending loop of not having enough data points about the problem at hand to make an accurate prediction.

We see these behavioral antipatterns as all deriving from a common set of problematic assumptions about the predictability of the world based on data from the past. This view leads to a fundamental misunderstanding of the power and benefit AI techniques can provide us. 

It’s how we’ve been taught

Don’t worry, it’s not your fault. Most people are not even aware that they have been led to think that if they gather enough information, they can somehow predict the future, and so they end up trying to do just that.

It is just a myth, however, albeit one that has been in the mind of humanity for millennia. The story of the Tower of Babel is the canonical parable demonstrating the futility of attempting to “engineer” our way to the “answer”. In our view, the only way to reliably and consistently find answers of value is through an ongoing and rigorous “test and learn” approach.

In the last century, the mathematician David Hilbert proposed a program to address the foundational problems then plaguing mathematics, employing formal methods of proof to shore up its various problems and paradoxes. Unfortunately, the mathematician Kurt Gödel later demonstrated – with a proof of his own – that Hilbert’s program could not be completed; simply put, some things cannot be proven through the method of proof itself.

Herbert Simon wrote about this problem in the context of his idea of bounded rationality, which proposes that the rationality of human beings in decision-making contexts is inherently limited. Indeed, from Simon’s perspective:

“[T]he fallibility of reasoning is guaranteed both by the impossibility of generating unassailable general propositions from particular facts, and by the tentative and theory-infected character of the facts themselves.”

Consequently, if we follow Simon’s line of thought, offloading the work of making rational decisions to computers will not address the fundamental problem with the mechanism of reason itself. Instead, we need to rely on an iterative and rigorous approach that holds “test and learn” at the center of decision-making.

Problematic assumptions in general AI narratives

AI has suffered from this problem of attempting to engineer itself to “the answer” since its inception. For example, in the mid-1960s Marvin Minsky wrote one of the first articles on artificial intelligence to run in Scientific American, in which he asserted that a computer,

“Given a model of its own workings, it could use its problem-solving power to work on the problem of self-improvement [...] Once we have devised programs with a genuine capacity for self-improvement a rapid evolutionary process will begin. As the machine improves both itself and its model of itself, we shall begin to see all the phenomena associated with the terms ‘consciousness,’ ‘intuition’ and ‘intelligence’ itself.” 

It is this archetype of the “intelligent” computer, operating on the same level as the human mind, which we find problematic. There are three underlying assumptions here:

  1. That it is somehow desirable to have computational machines that are like human minds;
  2. That computational machines are – or can be like – human minds;
  3. And underlying both of these is the “engineering our way to the answer” problem we highlighted above.

We find these assumptions highly questionable. In our view, we need to amplify the role of human beings as learning beings. To achieve this, we can leverage AI techniques to augment human decision-making in our “habits, routines, and standard operating procedures that firms employ in the conduct of their affairs”. In fact, we see the desire to enact better decision-making as the very core of why someone – or an organization – would want to use AI in the first place.

Conclusion

To summarize, artificial intelligence suffers from its own hype: expectations are built on problematic assumptions about what it can and cannot do. These inaccuracies can inadvertently leave an organization in a 20th century mindset with respect to decision-making. We often encounter an overemphasis on deep learning, on vast datasets, and on the massive computing infrastructure required to run all of it. In contrast, our results with clients demonstrate that it is possible to design human-centric, augmented AI-based systems that outperform machine-centric approaches with far fewer computing resources and less data.

From an AI perspective, we cannot feature engineer the world. We cannot know what competitors are doing at all times, nor can we know everything that affects the complexity behind customer behavior. However, we can react to changes in customer behavior, and we can do so in real time. We, along with our AI tools, can learn, using those tools to facilitate better decisions and to identify valuable cause-and-effect relationships.

Furthermore, by emphasizing a machine-centric approach we miss out on scalable learning. We have never been closer to customers than we are now, in the age of digital platforms. Many organizations interact with customers thousands or millions of times per day, yet those interactions are often not optimized towards learning something new. Done intelligently, we can optimize for a given business metric and learn about customers at the same time. These are not mutually exclusive: mechanisms exist that allow us to learn new things while optimizing our businesses, and we are missing out on them due to misconceptions of what AI is. One such mechanism is sketched below.
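
As one concrete example of learning while optimizing, consider a multi-armed bandit. The article does not prescribe a specific technique, so this Thompson-sampling sketch, with two invented offers and simulated customer responses, is purely our illustration of the general idea.

```python
# A minimal, hypothetical sketch of "learn while optimizing": a
# Beta-Bernoulli Thompson-sampling bandit routing customer interactions.
# The offers and their true conversion rates are invented for illustration.
import random

TRUE_RATES = {"offer_a": 0.05, "offer_b": 0.09}  # unknown to the system
successes = {arm: 1 for arm in TRUE_RATES}       # Beta(1, 1) uniform priors
failures = {arm: 1 for arm in TRUE_RATES}

for _ in range(10_000):  # each iteration is one customer interaction
    # Sample a plausible conversion rate per offer from its posterior,
    # then show the customer the offer with the highest sampled rate.
    arm = max(TRUE_RATES, key=lambda a: random.betavariate(successes[a], failures[a]))
    if random.random() < TRUE_RATES[arm]:  # simulated customer response
        successes[arm] += 1                # we optimized *and* learned
    else:
        failures[arm] += 1

# Traffic concentrates on the better offer while evidence accumulates for both
print({arm: successes[arm] + failures[arm] - 2 for arm in TRUE_RATES})
```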

Our interest here is in putting forward augmented approaches that allow organizations to better understand new practices for the effective use of AI, particularly in the area of decision science and the identification of valuable cause-and-effect relationships.

We conclude by asserting that AI is not limited to the science of trying to force value out of historical data; it also includes the art of interacting with the world to make better informed and optimized decisions. This is accomplished through a 21st century augmented approach, which opens up new search spaces to learn at scale by creating new data. In times such as these, it is a pragmatic way to navigate uncertainty.

Acknowledgments: The authors would like to thank Jim Highsmith for his generous feedback on this article that helped us to improve it.
