The adoption of machine learning (ML) tools in business has boosted productivity by augmenting employees' skills and freeing them to add value through their creativity. However, the "black box" nature of ML models raises the question of "explainability": how do we know that our ML tool has made the right decisions for the right reasons? As ML adoption matures, we increasingly demand that the reasoning behind ML tools be human-understandable.
One approach to explaining ML behaviour is to borrow the thinking we use when assessing whether a person is ready for a certain kind of work. This approach suits a type of ML called reinforcement learning (RL).
This assessment can include the questions:
If we can draw a parallel between these criteria and the features of an ML tool, those criteria can go some way towards explainability.
An RL agent is a machine learning model that learns a policy and then uses it to carry out a sequence of actions. The policy selects an action for each scenario the agent encounters, much as an employee might follow an operating manual or rely on past experience. Some features of an RL agent include:
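As a minimal sketch of the idea, a learned policy can be pictured as a mapping from observed scenarios to preferred actions. The scenario names, actions, and policy table below are hypothetical illustrations, not the output of a real training run:

```python
# Hypothetical example: how a learned policy is consulted at
# deployment time. A real RL agent would learn this mapping from
# experience; here it is written out by hand for illustration.

# A tabular policy: each observed scenario maps to the action the
# agent learned to prefer during training.
policy = {
    "invoice_received": "route_to_accounts",
    "invoice_missing_po": "request_purchase_order",
    "invoice_duplicate": "flag_for_review",
}

def act(scenario: str) -> str:
    """Select an action for a scenario, falling back to escalation
    when the scenario was never seen in training."""
    return policy.get(scenario, "escalate_to_human")

print(act("invoice_received"))   # route_to_accounts
print(act("unknown_scenario"))   # escalate_to_human
```

The point of the sketch is that the policy, like an operating manual, can be inspected scenario by scenario rather than treated as an opaque whole.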
Business managers can prepare a script of scenarios and expected actions to build trust in RL agents deployed in their workflows, without needing to engage with technical details. This is analogous to how software testing builds confidence based on behaviour, not internal implementation.
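Such a scenario script might be checked as follows. This is a sketch under assumed names: the agent, scenarios, and expected actions are all hypothetical, and in practice the agent would be a trained model sitting behind the same interface:

```python
# Sketch of scenario-based acceptance testing for an RL agent.
# Everything here is illustrative; a real deployment would call a
# trained model rather than this hand-written stand-in.

def agent(scenario: str) -> str:
    """Stand-in for a deployed RL agent's action selection."""
    learned = {
        "refund_under_limit": "approve_refund",
        "refund_over_limit": "escalate_to_manager",
    }
    return learned.get(scenario, "escalate_to_manager")

# The manager's script: pairs of (scenario, expected action).
script = [
    ("refund_under_limit", "approve_refund"),
    ("refund_over_limit", "escalate_to_manager"),
]

# Collect any scenarios where the agent's behaviour deviates.
failures = [
    (scenario, expected, agent(scenario))
    for scenario, expected in script
    if agent(scenario) != expected
]

if not failures:
    print("All scenarios behaved as expected.")
else:
    for scenario, expected, actual in failures:
        print(f"{scenario}: expected {expected}, got {actual}")
```

A manager can read and extend the script without knowing how the agent works internally, which is exactly the behavioural confidence the analogy to software testing suggests.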
Reinforcement learning presents an opportunity for explainable AI: the assessments we already apply to employees can be mirrored in the ML tools they use. Has this piqued your interest? Get in touch.
Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.