Department of IT Convergence Engineering, Gachon University, Sujeong-Gu, Seongnam-Si 461-701, Republic of Korea.
Sensors (Basel). 2023 Mar 16;23(6):3176. doi: 10.3390/s23063176.
Deep learning has achieved remarkably positive results in medical diagnostics in recent years. Across many proposed applications it has reached accuracy sufficient for deployment; however, the underlying algorithms are black boxes that are hard to interpret, and model decisions are often made without an accompanying rationale. To narrow this gap, explainable artificial intelligence (XAI) offers a major opportunity to obtain informed decision support from deep learning models by opening the black box of the method. We propose an explainable deep learning method for endoscopy image classification based on ResNet152 combined with Grad-CAM. We used the open-source KVASIR dataset, which consists of a total of 8000 wireless capsule endoscopy images. Combining heat maps of the classification results with an efficient augmentation method, we achieved a strong result of 98.28% training and 93.46% validation accuracy on this medical image classification task.