Techniques

Explainability as a first-class model selection criterion

NOT ON THE CURRENT EDITION
This blip is not on the current edition of the radar. If it was on one of the last few editions, it is likely that it is still relevant. If the blip is older, it might no longer be relevant and our assessment might be different today. Unfortunately, we simply don't have the bandwidth to continuously review blips from previous editions of the radar.
Nov 2019
Trial

Deep neural networks have demonstrated remarkable recall and accuracy across a wide range of problems. Given sufficient training data and an appropriately chosen topology, these models meet and exceed human capabilities in select problem spaces. However, they are inherently opaque. Although parts of a model can be reused through transfer learning, we are seldom able to ascribe any human-understandable meaning to these elements. In contrast, an explainable model is one that allows us to say how a decision was made. For example, a decision tree yields a chain of inference that describes the classification process. Explainability becomes critical in certain regulated industries, or when we are concerned about the ethical impact of a decision. As these models are incorporated more widely into critical business systems, it is important to consider explainability as a first-class model selection criterion. Despite their power, neural networks might not be an appropriate choice when explainability requirements are strict.
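To make the decision-tree contrast concrete, here is a minimal sketch using scikit-learn and its bundled iris dataset (both illustrative assumptions; the blip itself names no particular library). It shows how a fitted tree can be rendered as a human-readable chain of inference, something the weights of a deep neural network do not offer.

```python
# Minimal sketch: a decision tree as an explainable model.
# Assumes scikit-learn is installed; the dataset and hyperparameters
# are illustrative choices, not part of the original blip.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(iris.data, iris.target)

# export_text renders the fitted tree as indented if/else rules:
# the chain of inference behind every classification, expressed in
# terms of the input features rather than opaque weights.
print(export_text(tree, feature_names=list(iris.feature_names)))
```

Each leaf in the printed rules corresponds to a class decision, so a reviewer or auditor can trace exactly which feature thresholds led to a given prediction.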