Ruhr-University Bochum, Center for Protein Diagnostics, 44801 Bochum, Germany; Ruhr-University Bochum, Faculty of Biology and Biotechnology, Bioinformatics Group, 44801 Bochum, Germany.
Ruhr-University Bochum, Center for Protein Diagnostics, 44801 Bochum, Germany; Ruhr-University Bochum, Faculty of Biology and Biotechnology, Department of Biophysics, 44801 Bochum, Germany.
Med Image Anal. 2022 Nov;82:102594. doi: 10.1016/j.media.2022.102594. Epub 2022 Aug 24.
In recent years, deep learning has been the key driver of breakthrough developments in computational pathology and other image-based approaches that support medical diagnosis and treatment. The underlying neural networks, as inherent black boxes, lack transparency and are often accompanied by approaches that explain their output. However, formally defining explainability has remained a notoriously unsolved riddle. Here, we introduce a hypothesis-based framework for falsifiable explanations of machine learning models. A falsifiable explanation is a hypothesis that connects an intermediate space induced by the model with the sample from which the data originate. We instantiate this framework in a computational pathology setting using hyperspectral infrared microscopy. The intermediate space is an activation map, which is trained with an inductive bias to localize tumor. An explanation is constituted by hypothesizing that activation corresponds to tumor and associated structures, which we validate by histological staining as an independent secondary experiment.
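As a rough illustration of how such a falsifiable hypothesis could be tested quantitatively, the sketch below compares a thresholded activation map against a tumor mask derived from the independent staining experiment. This is a minimal sketch, not the authors' implementation; the names (activation_map, stained_tumor_mask, threshold) and the use of a Dice overlap score are illustrative assumptions.

```python
# Minimal sketch (assumed, not the published method): the hypothesis
# "high activation corresponds to tumor" is checked against an independent,
# staining-derived tumor mask via an overlap score.
import numpy as np


def dice_overlap(activation_map: np.ndarray,
                 stained_tumor_mask: np.ndarray,
                 threshold: float = 0.5) -> float:
    """Dice coefficient between thresholded activation and the staining mask."""
    predicted = activation_map >= threshold       # binarize the activation map
    reference = stained_tumor_mask.astype(bool)   # tumor mask from histological staining
    intersection = np.logical_and(predicted, reference).sum()
    total = predicted.sum() + reference.sum()
    return 2.0 * intersection / total if total > 0 else 1.0


if __name__ == "__main__":
    # Stand-in data for illustration only; real inputs would be the model's
    # activation map and a registered mask from the stained section.
    rng = np.random.default_rng(0)
    activation = rng.random((64, 64))
    mask = rng.random((64, 64)) > 0.7
    print(f"Dice overlap: {dice_overlap(activation, mask):.3f}")
```

Under this reading, the explanation would be falsified if the overlap on held-out samples stays below a pre-specified agreement level.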