Benedict Clark, Rick Wilming, Stefan Haufe
Physikalisch-Technische Bundesanstalt, Abbestr. 2-12, 10587 Berlin, Germany.
Technische Universität Berlin, Str. des 17. Juni 135, 10623 Berlin, Germany.
Mach Learn. 2024;113(9):6871-6910. doi: 10.1007/s10994-024-06574-3. Epub 2024 Jul 16.
The field of 'explainable' artificial intelligence (XAI) has produced highly acclaimed methods that seek to make the decisions of complex machine learning (ML) methods 'understandable' to humans, for example by attributing 'importance' scores to input features. Yet, a lack of formal underpinning leaves it unclear what conclusions can safely be drawn from the results of a given XAI method, and has so far hindered the theoretical verification and empirical validation of XAI methods. This means that challenging non-linear problems, typically solved by deep neural networks, presently lack appropriate remedies. Here, we craft benchmark datasets for one linear and three different non-linear classification scenarios, in which the important class-conditional features are known by design and serve as ground-truth explanations. Using novel quantitative metrics, we benchmark the explanation performance of a wide set of XAI methods across three deep learning model architectures. We show that popular XAI methods are often unable to significantly outperform random performance baselines and edge-detection methods, attributing false-positive importance to features with no statistical relationship to the prediction target rather than to truly important features. Moreover, we demonstrate that explanations derived from different model architectures can differ vastly and are thus prone to misinterpretation even under controlled conditions.
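The core benchmarking idea described above, measuring an XAI method's attributions against features whose importance is known by design, can be illustrated with a minimal sketch. This is not the paper's actual XAI-TRIS benchmark or its metrics; the dataset construction, classifier, and precision-at-k metric below are simplified stand-ins chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy benchmark in the spirit of the paper: only the first k of d
# features are class-conditional (the ground-truth "important"
# features); the remaining features are pure noise.
n, d, k = 2000, 20, 4
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, d))
X[:, :k] += 2.0 * (y[:, None] - 0.5)  # inject class signal into first k features
ground_truth = np.zeros(d, dtype=bool)
ground_truth[:k] = True

# Fit a simple linear classifier (logistic regression via plain
# gradient descent) as the model to be "explained".
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / n)
    b -= 0.5 * np.mean(p - y)

# Stand-in "explanation": absolute model weights as importance scores.
importance = np.abs(w)

# Quantitative metric: precision of the top-k attributed features
# against the known ground-truth mask.
top_k = np.argsort(importance)[::-1][:k]
precision_at_k = ground_truth[top_k].mean()
print(precision_at_k)
```

Because the truly informative features are fixed at construction time, any attribution method can be scored the same way: rank its importance scores and check how many of the top-k features fall inside the ground-truth mask. A false-positive-heavy method, as the abstract reports for several popular XAI methods, would score poorly here.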