Esmaeili Morteza, Vettukattil Riyas, Banitalebi Hasan, Krogh Nina R, Geitung Jonn Terje
Department of Diagnostic Imaging, Akershus University Hospital, 1478 Lørenskog, Norway.
Department of Electrical Engineering and Computer Science, Faculty of Science and Technology, University of Stavanger, 4021 Stavanger, Norway.
J Pers Med. 2021 Nov 16;11(11):1213. doi: 10.3390/jpm11111213.
Primary malignancies in adult brains are fatal worldwide. Computer vision, and especially recent developments in artificial intelligence (AI), has created opportunities to automatically characterize and diagnose tumor lesions in the brain. AI approaches have achieved unprecedented accuracy in a range of image analysis tasks, including differentiating tumor-containing brains from healthy brains. AI models, however, perform as a black box, concealing the rational interpretations that are an essential step towards translating AI imaging tools into clinical routine. Explainable AI approaches aim to visualize the high-level features of trained models or to integrate interpretability into the training process. This study aims to evaluate the performance of selected deep-learning algorithms on localizing tumor lesions and distinguishing the lesion from healthy regions in magnetic resonance imaging contrasts. Despite a significant correlation between classification and lesion localization accuracy (r = 0.46, p = 0.005), the known AI algorithms examined in this study classify some tumor-containing brains based on non-relevant features. The results suggest that explainable AI approaches can develop an intuition for model interpretability and may play an important role in the performance evaluation of deep learning models. Developing explainable AI approaches will be an essential tool to improve human-machine interactions and assist in the selection of optimal training methods.
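The explainable-AI visualization described in the abstract (surfacing the high-level features a trained model relies on) is commonly realized with Grad-CAM-style class activation maps. The sketch below is a minimal, framework-free illustration of that idea, assuming we already have a convolutional layer's activations and the gradients of the class score with respect to them; the `grad_cam` helper, array shapes, and toy data are assumptions for illustration, not the pipeline used in the study.

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Grad-CAM heatmap from a conv layer's activations (C, H, W) and the
    gradients of the class score w.r.t. those activations (same shape)."""
    # Channel weights: global-average-pool the gradients over the spatial axes.
    weights = gradients.mean(axis=(1, 2))                      # shape (C,)
    # Weighted sum of activation maps, then ReLU to keep positive evidence only.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize to [0, 1] so the map can be overlaid on the MR image.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: 4 channels of 8x8 feature maps from a hypothetical CNN layer.
rng = np.random.default_rng(0)
acts = rng.random((4, 8, 8))
grads = rng.standard_normal((4, 8, 8))
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (8, 8)
```

Overlaying such a heatmap on the input MRI slice is what allows the comparison the study draws: whether high-attribution regions coincide with the annotated tumor lesion or with non-relevant areas of the brain.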