Explainability as a first-class model selection criterion

Published: Nov 20, 2019
NOT ON THE CURRENT EDITION
This blip is not on the current edition of the Radar. If it appeared in one of the last few editions, it is likely still relevant. If the blip is older, it might no longer be relevant and our assessment might be different today. Unfortunately, we simply don't have the bandwidth to continuously review blips from previous editions of the Radar.
Nov 2019
Trial: Worth pursuing. It is important to understand how to build up this capability. Enterprises should try this technology on a project that can handle the risk.

Deep neural networks have demonstrated remarkable recall and accuracy across a wide range of problems. Given sufficient training data and an appropriately chosen topology, these models can meet or exceed human performance in select problem spaces. However, they're inherently opaque. Although parts of a model can be reused through transfer learning, we're seldom able to ascribe any human-understandable meaning to these elements. In contrast, an explainable model is one that allows us to say how a decision was made. For example, a decision tree yields a chain of inference that describes the classification process. Explainability becomes critical in certain regulated industries, or when we're concerned about the ethical impact of a decision. As these models are incorporated more widely into critical business systems, it's important to consider explainability as a first-class model selection criterion. Despite their power, neural networks might not be an appropriate choice when explainability requirements are strict.
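To illustrate what a chain of inference can look like in practice, here is a minimal sketch that trains a decision tree and prints its learned rules as human-readable conditions. It assumes scikit-learn and its bundled Iris dataset purely for demonstration; the blip does not prescribe any particular library or dataset.

# A hedged, illustrative sketch: decision-tree rules as an explanation.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Keep the tree deliberately shallow so the explanation stays readable.
iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(iris.data, iris.target)

# export_text renders the learned splits as nested conditions, so every
# classification can be traced back to an explicit sequence of feature tests.
print(export_text(clf, feature_names=list(iris.feature_names)))

Contrast this with a deep neural network, where the weights that produce a prediction carry no comparable human-readable meaning.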
