Reddy A Siva Krishna, Rao K N Brahmaji, Soora Narasimha Reddy, Shailaja Kotte, Kumar N C Santosh, Sridharan Abel, Uthayakumar J
School of CS and AI, Department of CS and AI, SR University, Warangal, Telangana, India.
Raghu Institute of Technology, Vishakhapatnam, Andhra Pradesh, India.
Multimed Tools Appl. 2023;82(8):12653-12677. doi: 10.1007/s11042-022-13739-6. Epub 2022 Sep 16.
The COVID-19 pandemic has had a significant impact on global health and on the daily lives of people around the world. Several initial tests are based on detecting the genetic material of the coronavirus; they have a low detection rate and involve a time-consuming process. To overcome this issue, radiological images are recommended, with chest X-rays (CXRs) employed in the diagnostic process. This article introduces a new multimodal fusion of deep transfer learning (MMF-DTL) technique to classify COVID-19. The proposed MMF-DTL model involves three main processes, namely pre-processing, feature extraction, and classification. The MMF-DTL model uses three DL models, namely VGG16, Inception v3, and ResNet50, for feature extraction. Since a single modality would not be adequate to attain an effective detection rate, integrating the three approaches through decision-based multimodal fusion increases the detection rate. So, a fusion of the three DL models takes place to further improve the detection rate. Finally, a softmax classifier is employed to assign test images to one of six different classes. A wide range of experimental analyses is carried out on the Chest X-ray dataset. The proposed fusion model is found to be an effective tool for COVID-19 diagnosis using radiological images, with an average accuracy of 92.96%, recall of 98.54%, precision of 93.60%, F1 score of 98.80%, specificity of 93.26%, and kappa of 91.86%.
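The abstract does not include implementation details; the following is a minimal sketch, assuming a TensorFlow/Keras setup, of how decision-level fusion of VGG16, Inception v3, and ResNet50 softmax outputs over six classes might be wired up. The input size, frozen backbones, probability-averaging fusion rule, and all layer names are illustrative assumptions, not the authors' code.

import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import vgg16, inception_v3, resnet50

NUM_CLASSES = 6               # six target classes, as stated in the abstract
INPUT_SHAPE = (224, 224, 3)   # assumed common input size for all backbones

inputs = layers.Input(shape=INPUT_SHAPE, name="cxr_image")

def softmax_branch(app_module, backbone_cls, name):
    """One transfer-learning branch: per-model preprocessing, frozen
    ImageNet backbone, global pooling, and a softmax head."""
    x = layers.Lambda(app_module.preprocess_input, name=f"{name}_preprocess")(inputs)
    backbone = backbone_cls(include_top=False, weights="imagenet",
                            input_shape=INPUT_SHAPE)
    backbone.trainable = False  # keep pretrained features (transfer learning)
    x = backbone(x)
    x = layers.GlobalAveragePooling2D(name=f"{name}_gap")(x)
    return layers.Dense(NUM_CLASSES, activation="softmax",
                        name=f"{name}_probs")(x)

# The three feature extractors named in the abstract.
vgg_probs = softmax_branch(vgg16, vgg16.VGG16, "vgg16")
inc_probs = softmax_branch(inception_v3, inception_v3.InceptionV3, "inceptionv3")
res_probs = softmax_branch(resnet50, resnet50.ResNet50, "resnet50")

# Decision-based fusion: average the per-model class probabilities.
fused = layers.Average(name="decision_fusion")([vgg_probs, inc_probs, res_probs])

model = Model(inputs=inputs, outputs=fused, name="mmf_dtl_sketch")
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

Averaging the class-probability vectors is one simple way to realize decision-level fusion; weighted voting or majority voting over the three predictions would be alternative fusion rules under the same branch structure.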