Reinforcement learning as explainable AI

The adoption of machine learning (ML) tools in business has boosted productivity by augmenting employees’ skills and freeing them up to add value with their creativity. However, the “black box” nature of ML models raises the question of “explainability”: how do we know that our ML tool has made the right decisions for the right reasons? As ML adoption matures, we increasingly demand that the reasoning behind ML tools be human-understandable.

 

One approach to explaining ML behaviour is to borrow the thinking we already use to assess whether a person is ready for a certain kind of work. This approach is well suited to a type of ML called reinforcement learning (RL).

 

This assessment can include the questions:

  • What would you do in these situations? 
  • Can you tell me about a relevant past experience?
  • Have you undertaken training for this?
  • Will your work be reviewed?

 

If we can draw a parallel between these criteria and the features of an ML tool, that parallel can go some way towards explainability.

 

An RL agent is a machine learning model that learns a policy and then carries out a sequence of steps. The policy is used to select an action in each scenario, much as an employee might follow an operating manual or rely on past experience. Some features of an RL agent include the following (a brief code sketch follows the list):

  • RL agents can specify an action when presented with a situation.
  • RL agents can supply relevant past experience (if available for that situation) that was used to inform their policy.
  • RL agents can be “given training” by being supplied with an experience of a situation, the action taken, and whether the outcome was desirable.
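
To make this concrete, here is a minimal sketch, in Python, of an agent that can answer those three questions: act on a situation, recall the experience behind that choice, and accept new training examples. The class name, the simple tabular value update and the toy “refund” scenarios are illustrative assumptions made for this sketch, not anything prescribed by the referenced article.

```python
# A toy "explainable" RL agent. All names and the simple tabular update rule
# are illustrative assumptions for this sketch, not a prescribed implementation.
from collections import defaultdict
import random


class ExplainableAgent:
    def __init__(self, actions, learning_rate=0.1):
        self.actions = actions
        self.lr = learning_rate
        # Estimated value of each action in each situation -- the learned "policy".
        self.values = defaultdict(lambda: {a: 0.0 for a in actions})
        # Past experiences, stored per situation so they can be surfaced on request.
        self.memory = defaultdict(list)

    def act(self, situation):
        """'What would you do in this situation?' -- choose the best-valued action."""
        estimates = self.values[situation]
        best = max(estimates.values())
        return random.choice([a for a, v in estimates.items() if v == best])

    def recall(self, situation):
        """'Can you tell me about a relevant past experience?'"""
        return self.memory[situation]

    def train(self, situation, action, reward):
        """'Have you undertaken training?' -- learn from one supplied experience."""
        self.memory[situation].append((action, reward))
        current = self.values[situation][action]
        self.values[situation][action] = current + self.lr * (reward - current)


# "Training" the agent: a desirable outcome for escalating a suspicious refund,
# an undesirable one for approving it.
agent = ExplainableAgent(actions=["approve", "escalate"])
agent.train("suspicious_refund", "escalate", reward=1.0)
agent.train("suspicious_refund", "approve", reward=-1.0)

print(agent.act("suspicious_refund"))     # -> "escalate"
print(agent.recall("suspicious_refund"))  # the experiences behind that choice
```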

Business managers can prepare a script of scenarios and expected actions to gain trust in the RL agents deployed in their business workflows without worrying about technical details, much as software testing builds confidence from observed behaviour rather than internal implementation.
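
Continuing the testing analogy, such a scenario script could be checked the way a behavioural test would be. The scenarios, expected actions and the ExplainableAgent from the sketch above are again assumptions made for illustration.

```python
# An illustrative "script" of scenarios and expected actions, checked the way a
# behavioural software test would be. The scenarios and the ExplainableAgent
# from the previous sketch are assumptions made for this example.
EXPECTED_BEHAVIOUR = [
    ("suspicious_refund", "escalate"),
    ("routine_refund", "approve"),
]


def test_agent_follows_script(agent):
    """Fail if the agent's chosen action deviates from the agreed script."""
    failures = []
    for situation, expected in EXPECTED_BEHAVIOUR:
        chosen = agent.act(situation)
        if chosen != expected:
            failures.append((situation, expected, chosen))
    assert not failures, f"Agent deviated from the agreed script: {failures}"
```

A scenario the agent has never been trained on may fail this check until matching experience is supplied, which is exactly the kind of gap a business manager would want surfaced before deployment.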

 

Reinforcement learning presents an opportunity to adopt explainable AI by applying to machine learning tools the same requirements we place on employees. Has this piqued your interest? Get in touch.

 

Reference:

https://www.frontiersin.org/articles/10.3389/frai.2021.550030/full

 

Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.
