
If we want to use algorithms responsibly, we need to prioritize explainability

The potentially dangerous social consequences of artificial intelligence (AI) and machine learning (ML) are well-documented. From recruitment tools that show bias against women and other communities, to facial recognition that led to the arrest of an innocent Black man, there are many examples of automated systems harming people.

 

One of the driving forces behind these incidents is a false belief in the supposedly objective judgment of algorithms. It’s all too easy to believe that algorithms can solve any problem in a perfect and unbiased manner if enough data is thrown at them. But the truth is they reflect the very biases and unfairness contained in the data on which they are trained. As technology becomes increasingly central to decision making, we have to learn how to approach and apply technology in an ethical way.

 

How well do we understand algorithms? 

 

When it comes to algorithms, we need to start asking some serious questions: do we understand their logic well enough to guess the predictions ourselves for any given input? Do we at least understand it well enough that, given a prediction, we can explain which part of the logic led to it? Both questions are central to the concept of model explainability. This blog post discusses what ‘model explainability’ is, and when and why it should be a priority.

 

What is explainability?

 

Explainability is the extent to which humans can understand, and thus explain, the results of a model. This is in contrast to ‘black box’ models, where even the designers cannot fully trace the inner workings of the trained model or explain the results it produces.

 

As we delegate more complex tasks to machines, it becomes imperative to closely monitor the algorithms that make the critical decisions which could affect people’s lives. These algorithms are only as good and fair as the data used to train them and the cost functions they were taught to optimize for. Unjust practices and unintended biases can creep into algorithms and can only be brought to light if they are explainable. 

 

A lack of explainability means algorithms go unchecked until it’s too late, after too many lives have been unfairly affected. Many such instances are described in Cathy O’Neil’s book ‘Weapons of Math Destruction.’ One such case is IMPACT, a teacher assessment tool developed in Washington, D.C., in 2007. It was supposed to use data to weed out low-performing teachers, but because of how its designers defined ‘low-performing,’ many good teachers were unfairly fired. And because of the lack of transparency for users and other stakeholders, along with misplaced trust in the algorithm, its faults were identified too late.

 

Another interesting case is that of predictive policing. Police forces use algorithms trained on historic crime data to predict who is likely to offend; those people are then stopped and searched. Since historic crime data already reflects unfair and discriminatory practices against certain communities, the model causes those same groups to be checked more frequently. That, in turn, increases their chances of being caught relative to more affluent communities, which appears to confirm the model’s assumptions and creates a feedback loop.


Who is accountable for algorithmic bias?

 

When the unfair decisions of a black box algorithm are finally brought to light, who should be held accountable: the stakeholders, or the engineers who coded the algorithm? This is a complex question, but answering it is critical if we are to use algorithms responsibly.

 

Indeed, different stakeholders may require different approaches to, and levels of, explainability. Technical teams who design the algorithms, for example, would benefit from a detailed, in-depth understanding of how an algorithm works and of the limitations and assumptions of the data set on which it was trained. Other stakeholders may not need the same level of technical detail, but they still require enough information to know what they are being held accountable for.

 

For any given problem that can be solved with machine learning, there is usually more than one way to solve it. The mathematical formulation of the problem, the algorithm used, the data preprocessing steps: all can be chosen from a variety of options. Usually, the main consideration when choosing an approach is how well the model performs against historic data or ground truth.
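
To make that concrete, here is a minimal sketch of how candidate approaches are often compared on a performance metric alone. The data set is synthetic and the two models are arbitrary stand-ins chosen for illustration, not a recommendation of either one.

```python
# A minimal sketch: comparing candidate models purely on
# cross-validated accuracy. The data is synthetic and the two
# models are arbitrary examples.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "gradient_boosting": GradientBoostingClassifier(),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```

Notice that nothing in this comparison says anything about how explainable either model is; that is exactly the gap discussed below.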

 

Augmenting explainability

 

We also have many ways to make an ML solution explainable, but not every method applies to every algorithm, and algorithms vary in how difficult they are to explain. Sometimes a more complex algorithm gives better results in terms of model performance against ground truth data. In such a scenario, how does one decide which algorithm to use? The complex algorithm with better performance metrics, or the one with worse metrics that remains simple enough for non-technical stakeholders to understand?
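
As one illustration, here is a minimal sketch of a post-hoc explainability technique: permutation importance, which estimates how much a trained model relies on each feature by shuffling that feature and measuring the drop in score. The data set is synthetic and the random forest is just a stand-in for any harder-to-explain model; this is a sketch of one technique, not a complete explainability strategy.

```python
# A minimal sketch of one post-hoc explainability technique:
# permutation importance on a trained "black box" model.
# The data set is synthetic and the model is an arbitrary example.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {importance:.3f}")
```

Techniques like this can reveal which inputs drive a model’s decisions, but they only approximate its reasoning; for high-stakes decisions, an inherently interpretable model may still be the safer choice.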

 

The answer will depend on certain characteristics of the problem. In cases where many people could be negatively affected, the risk outweighs the gains, and the explainability of the algorithm must be prioritized even at the cost of performance.

 

A good example of this is ‘resume robots.’ Businesses today increasingly use algorithms to filter out incompatible resumes, arguing that this saves time, especially when they receive hundreds of applications for every vacancy. However, because of the risk of excluding certain groups of people from job opportunities, algorithmic explainability becomes critical. An easily explainable model, such as a decision tree, should be considered over a more complex model such as a multi-layered neural network classifier. Even if the decision tree lets some incompatible resumes through, it is ultimately a far better choice than a model that unfairly filters out deserving candidates.
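
As a hypothetical sketch of why the decision tree is attractive here, consider a tiny screener trained on invented data with made-up features (years_experience, relevant_skills, referral). The point is that the tree’s learned rules can be printed and reviewed by non-technical stakeholders; the features and data are not from any real hiring system.

```python
# A hypothetical sketch: a decision-tree resume screener whose learned
# rules can be printed and reviewed. The features and data are invented
# purely for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["years_experience", "relevant_skills", "referral"]

# Toy historical screening decisions (1 = shortlisted, 0 = rejected).
X = np.array([
    [1, 2, 0], [5, 6, 1], [3, 4, 0], [7, 8, 1],
    [2, 1, 0], [6, 7, 0], [4, 5, 1], [8, 9, 1],
])
y = np.array([0, 1, 0, 1, 0, 1, 1, 1])

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Every rule the model has learned can be read and challenged,
# which is exactly what a multi-layered neural network does not offer.
print(export_text(tree, feature_names=feature_names))
```

If the printed rules turn out to penalize a proxy for a protected attribute, stakeholders can see it and intervene before anyone is harmed, which is far harder to do with a black box classifier.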

 

Since the choice of algorithm is itself one of the factors that determines how explainable a solution can be, it’s essential to treat explainability as a first-class citizen when designing AI/ML solutions. It certainly should not be left to the end of a project and treated as little more than a nice-to-have.

Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.
