Exploration of Interpretability Techniques for Deep COVID-19 Classification Using Chest X-ray Images.

Authors

Chatterjee Soumick, Saad Fatima, Sarasaen Chompunuch, Ghosh Suhita, Krug Valerie, Khatun Rupali, Mishra Rahul, Desai Nirja, Radeva Petia, Rose Georg, Stober Sebastian, Speck Oliver, Nürnberger Andreas

Affiliations

Data and Knowledge Engineering Group, Otto von Guericke University, 39106 Magdeburg, Germany.

Faculty of Computer Science, Otto von Guericke University, 39106 Magdeburg, Germany.

Publication

J Imaging. 2024 Feb 8;10(2):45. doi: 10.3390/jimaging10020045.

Abstract

The outbreak of COVID-19 has shocked the entire world with its fairly rapid spread and has challenged different sectors. One of the most effective ways to limit its spread is the early and accurate diagnosis of infected patients. Medical imaging, such as X-ray and computed tomography (CT), combined with the potential of artificial intelligence (AI), plays an essential role in supporting medical personnel in the diagnosis process. Thus, in this article, five different deep learning models (ResNet18, ResNet34, InceptionV3, InceptionResNetV2, and DenseNet161) and their ensemble, using majority voting, have been used to classify COVID-19, pneumonia, and healthy subjects using chest X-ray images. Multilabel classification was performed to predict multiple pathologies for each patient, if present. The interpretability of each of the networks was then thoroughly studied using local interpretability methods (occlusion, saliency, input X gradient, guided backpropagation, integrated gradients, and DeepLIFT) and a global technique (neuron activation profiles). The mean micro F1 score of the models for COVID-19 classification ranged from 0.66 to 0.875, and was 0.89 for the ensemble of the network models. The qualitative results showed that the ResNets were the most interpretable models. This research demonstrates the importance of using interpretability methods to compare different models before making a decision regarding the best-performing model.
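As a hedged illustration of the local attribution methods listed above (occlusion, saliency, input X gradient, guided backpropagation, integrated gradients, and DeepLIFT), the sketch below applies them with PyTorch and the Captum library to a stand-in classifier. The tiny CNN, the random input tensor, and the target class index are assumptions for demonstration only; they do not reproduce the paper's models, data, or code.

```python
import torch
from captum.attr import (
    DeepLift,
    GuidedBackprop,
    InputXGradient,
    IntegratedGradients,
    Occlusion,
    Saliency,
)

# Hypothetical stand-in classifier with a 3-way output
# (COVID-19 / pneumonia / healthy); the paper's actual models are
# ResNet18/34, InceptionV3, InceptionResNetV2, and DenseNet161.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(8, 3),
)
model.eval()

# Dummy "chest X-ray" batch: 1 image, 3 channels, 224x224 pixels.
x = torch.rand(1, 3, 224, 224, requires_grad=True)
target = 0  # e.g. the COVID-19 output neuron

# Gradient-based local attributions; each returns a map shaped like the input.
attributions = {
    "saliency": Saliency(model).attribute(x, target=target),
    "input_x_gradient": InputXGradient(model).attribute(x, target=target),
    "guided_backprop": GuidedBackprop(model).attribute(x, target=target),
    "integrated_gradients": IntegratedGradients(model).attribute(x, target=target),
    "deeplift": DeepLift(model).attribute(x, target=target),
}

# Perturbation-based occlusion: slide a patch over the image and measure how
# much the target score drops (window and stride sizes are arbitrary here).
attributions["occlusion"] = Occlusion(model).attribute(
    x, target=target, sliding_window_shapes=(3, 16, 16), strides=(3, 8, 8)
)

for name, attr in attributions.items():
    print(name, tuple(attr.shape))
```

The global technique mentioned in the abstract, neuron activation profiles, aggregates hidden-layer activations across many inputs rather than attributing a single prediction, so it is not covered by this sketch.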

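The abstract also reports a majority-voting ensemble of the five networks and mean micro F1 scores. The sketch below shows one plausible way to combine multilabel predictions by majority vote and compute a micro-averaged F1 with scikit-learn; the helper name majority_vote and the toy tensors are illustrative assumptions, not the authors' implementation.

```python
import torch
from sklearn.metrics import f1_score

def majority_vote(per_model_preds: torch.Tensor) -> torch.Tensor:
    """Combine binary multilabel predictions from several models.

    per_model_preds: (n_models, n_samples, n_classes) tensor of 0/1 votes,
    e.g. over the classes (COVID-19, pneumonia, healthy). A class is kept
    when more than half of the models predict it.
    """
    votes = per_model_preds.sum(dim=0)  # (n_samples, n_classes)
    return (votes > per_model_preds.shape[0] / 2).int()

# Toy example: 5 models, 4 samples, 3 classes (placeholder values only).
per_model_preds = torch.randint(0, 2, (5, 4, 3))
ensemble_preds = majority_vote(per_model_preds)

# Micro F1 pools true/false positives and false negatives over all classes
# before computing precision and recall.
true_labels = torch.randint(0, 2, (4, 3))
print(f1_score(true_labels.numpy(), ensemble_preds.numpy(), average="micro"))
```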

Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ee5f/10889835/3e31d64d0e03/jimaging-10-00045-g001.jpg
