Storås Andrea M, Dreyer Maximilian, Pahde Frederik, Lapuschkin Sebastian, Samek Wojciech, Halvorsen Pål, de Lange Thomas, Mori Yuichi, Hann Alexander, Berzin Tyler M, Parasa Sravanthi, Riegler Michael A
Department of Holistic Systems, SimulaMet, Oslo, Norway.
Department of Computer Science, OsloMet - Oslo Metropolitan University, Oslo, Norway.
Sci Rep. 2025 Aug 7;15(1):28860. doi: 10.1038/s41598-025-14408-y.
Complex artificial intelligence models, like deep neural networks, have shown exceptional capabilities in detecting early-stage polyps and tumors in the gastrointestinal tract. These technologies are already beginning to assist gastroenterologists in the endoscopy suite. Model explanations can be useful for understanding how these complex models work and where their limitations lie. Moreover, medical doctors specialized in gastroenterology can provide valuable feedback on the model explanations. This study explores three different explainable artificial intelligence methods for explaining a deep neural network that detects gastrointestinal abnormalities. The model explanations are presented to gastroenterologists. Furthermore, the clinical applicability of the explanation methods is discussed from the healthcare personnel's perspective. Our findings indicate that the explanation methods do not yet meet the requirements for clinical use, but that they can provide valuable information to researchers and model developers. Higher-quality datasets and careful consideration of how the explanations are presented might lead to solutions that are better received in the clinic.
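The abstract does not name the three explanation methods evaluated in the study. As a hedged illustration of one common family of such techniques, the sketch below computes a plain gradient saliency map for an image classifier in PyTorch; the untrained ResNet-50, the image path, and the class index are placeholders standing in for the actual detection model and data, not the authors' setup.

```python
# Hypothetical sketch: a simple gradient-based saliency map for an image
# classifier, one common family of explanation methods. The study's actual
# methods are not named in the abstract; this is only illustrative.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Untrained ResNet-50 stands in for the polyp/abnormality detector.
model = models.resnet50(weights=None)
model.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
])

def saliency_map(image: Image.Image, target_class: int) -> torch.Tensor:
    """Return a per-pixel saliency map of shape (H, W) for the given class logit."""
    x = preprocess(image).unsqueeze(0)   # (1, 3, 224, 224)
    x.requires_grad_(True)
    logits = model(x)
    # Gradient of the target logit with respect to the input pixels.
    logits[0, target_class].backward()
    # Collapse colour channels by taking the maximum absolute gradient.
    return x.grad.abs().max(dim=1).values.squeeze(0)

# Example usage (file name and class index are placeholders):
# sal = saliency_map(Image.open("endoscopy_frame.png").convert("RGB"), target_class=1)
```

Such a map can be overlaid on the endoscopy frame to show which pixels most influenced the prediction, which is the kind of visual explanation a gastroenterologist would be asked to assess.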