Farahani Farzad V, Fiok Krzysztof, Lahijanian Behshad, Karwowski Waldemar, Douglas Pamela K
Department of Biostatistics, Johns Hopkins University, Baltimore, MD, United States.
Department of Industrial Engineering and Management Systems, University of Central Florida, Orlando, FL, United States.
Front Neurosci. 2022 Dec 1;16:906290. doi: 10.3389/fnins.2022.906290. eCollection 2022.
Deep neural networks (DNNs) have transformed the field of computer vision and currently constitute some of the best models of representations learned via hierarchical processing in the human brain. In medical imaging, these models have shown human-level and even higher performance in the early diagnosis of a wide range of diseases. However, the goal is often not only to accurately predict group membership or diagnose but also to provide explanations that support the model decision in a context that a human can readily interpret. This limited transparency has hindered the adoption of DNN algorithms across many domains. Numerous explainable artificial intelligence (XAI) techniques have been developed to peer inside the "black box" and make sense of DNN models, taking somewhat divergent approaches. Here, we suggest that these methods may be considered in light of the interpretation goal, including functional or mechanistic interpretations, developing archetypal class instances, or assessing the relevance of certain features or mappings on a trained model in a post hoc capacity. We then focus on reviewing recent applications of relevance techniques as applied to neuroimaging data. Moreover, this article suggests a method for comparing the reliability of XAI methods, especially in deep neural networks, along with their advantages and pitfalls.
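The relevance techniques mentioned above assign importance scores to input features of an already-trained model. One simple, model-agnostic member of this family is occlusion sensitivity: mask a region of the input, measure how much the model's output drops, and treat larger drops as higher relevance. The sketch below is illustrative only; the `occlusion_relevance` function and the toy scoring model are assumptions for demonstration, not the implementation reviewed in the article.

```python
import numpy as np

def occlusion_relevance(model, x, patch=4, baseline=0.0):
    """Occlusion sensitivity: slide a patch over the 2-D input,
    replace it with a baseline value, and record the drop in the
    model's score. Larger drops indicate more relevant regions."""
    base_score = model(x)
    relevance = np.zeros_like(x, dtype=float)
    h, w = x.shape
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = x.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            # Score drop caused by hiding this patch
            relevance[i:i + patch, j:j + patch] = base_score - model(occluded)
    return relevance

# Toy "model" (an assumption, not a trained DNN): scores an image
# by the mean intensity of its top-left quadrant only.
def toy_model(img):
    return img[:4, :4].mean()

img = np.ones((8, 8))
rel = occlusion_relevance(toy_model, img, patch=4)
```

Because the toy model only looks at the top-left quadrant, occluding that patch drops its score while occluding any other patch leaves it unchanged, so the relevance map concentrates on the region the model actually uses — the same intuition relevance methods apply to diagnostic DNNs in neuroimaging.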