Last updated: May 19, 2020
NOT ON THE CURRENT EDITION
This blip is not on the current edition of the Radar. If it was on one of the last few editions, it is likely that it is still relevant. If the blip is older, it might no longer be relevant and our assessment might be different today. Unfortunately, we simply don't have the bandwidth to continuously review blips from previous editions of the Radar.
May 2020
Trial: Worth pursuing. It is important to understand how to build up this capability. Enterprises should try this technology on a project that can handle the risk.

Over the past year, we've seen a shift in interest around machine learning and deep neural networks in particular. Until now, tool and technique development has been driven by excitement over the remarkable capabilities of these models. Currently, though, there is rising concern that these models could cause unintentional harm. For example, a model could inadvertently be trained to make profitable credit decisions by simply excluding disadvantaged applicants. Fortunately, we're seeing a growing interest in ethical bias testing that will help to uncover potentially harmful decisions. Tools such as lime, AI Fairness 360 or the What-If Tool can help uncover inaccuracies that result from underrepresented groups in training data, and visualization tools such as Google Facets or Facets Dive can be used to discover subgroups within a corpus of training data. In addition to these techniques, we've used lime (Local Interpretable Model-Agnostic Explanations) to understand the predictions of any machine-learning classifier and what the classifier (or model) is doing.
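
As an illustration of the lime approach mentioned above, the following is a minimal sketch, assuming scikit-learn and the lime package are installed. The dataset and model are placeholders for illustration only, not from any actual engagement.

```python
# Minimal sketch: explaining a single prediction of a tabular classifier with lime.
# The dataset and model here are hypothetical stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target

# Any classifier exposing predict_proba will do.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=data.feature_names,
    class_names=data.target_names,
    discretize_continuous=True,
)

# Explain one prediction: which features pushed the classifier towards its decision?
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```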

Nov 2019
Assess: Worth exploring with the goal of understanding how it will affect your enterprise.

Over the past year, we've seen a shift in interest around machine learning and deep neural networks in particular. Until now, tool and technique development has been driven by excitement over the remarkable capabilities of these models. Currently, though, there is rising concern that these models could cause unintentional harm. For example, a model could be trained to make profitable credit decisions by simply excluding disadvantaged applicants. Fortunately, we're seeing a growing interest in ethical bias testing that will help to uncover potentially harmful decisions. Tools such as lime, AI Fairness 360 or the What-If Tool can help uncover inaccuracies that result from underrepresented groups in training data, and visualization tools such as Google Facets or Facets Dive can be used to discover subgroups within a corpus of training data. However, this is a developing field and we expect standards and practices specific to ethical bias testing to emerge over time.
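
For the bias-testing side, the sketch below shows one way AI Fairness 360 can quantify bias in training data before a model is even trained. The toy DataFrame and its column names are hypothetical, and the two metrics shown are only a small sample of what the library provides.

```python
# Minimal sketch: measuring dataset-level bias with AI Fairness 360.
# The DataFrame and column names ("income", "group", "approved") are hypothetical.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: 'approved' is the outcome, 'group' the protected attribute (1 = privileged).
df = pd.DataFrame({
    "income":   [30, 45, 60, 25, 80, 50, 40, 70],
    "group":    [0,  0,  1,  0,  1,  1,  0,  1],
    "approved": [0,  1,  1,  0,  1,  1,  0,  1],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["group"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"group": 0}],
    privileged_groups=[{"group": 1}],
)

# Disparate impact far from 1.0, or statistical parity difference far from 0.0,
# suggests the training data itself favours one group over the other.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```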

Published: Nov 20, 2019
