Rangarajan Aravind Krishnaswamy, Ramachandran Hari Krishnan
School of Mechanical Engineering, SASTRA Deemed to be University, Thanjavur 613 401, Tamil Nadu, India.
Expert Syst Appl. 2021 Nov 30;183:115401. doi: 10.1016/j.eswa.2021.115401. Epub 2021 Jun 12.
The COVID-19 outbreak has catastrophically affected both public health systems and the world economy. Swift diagnosis of positive cases will help in providing proper medical attention to infected individuals and will also aid in effectively tracing their contacts to break the chain of transmission. Combining Artificial Intelligence (AI) with chest X-ray images and deploying such models on a smartphone can accelerate the diagnosis of COVID-19. In this study, publicly available datasets of chest X-ray images have been utilized for training and testing five pre-trained Convolutional Neural Network (CNN) models, namely VGG16, MobileNetV2, Xception, NASNetMobile and InceptionResNetV2. Prior to training the selected models, the number of images in the COVID-19 category has been increased using traditional augmentation and a Generative Adversarial Network (GAN). The performance of the five pre-trained CNN models trained on images generated with the two strategies has been compared. Among the models trained using augmented images, Xception (98%) and MobileNetV2 (97.9%) achieved the highest validation accuracy. Xception (98.1%) and VGG16 (98.6%) achieved the highest validation accuracy among the models trained with synthetic GAN images. The best-performing models have been further deployed on a smartphone and evaluated. The overall results suggest that VGG16 and Xception, trained with the synthetic images created using the GAN, performed better than the models trained with augmented images. Of these two models, VGG16 produced an encouraging Diagnostic Odds Ratio (DOR), with a higher positive likelihood ratio and lower negative likelihood ratio for the prediction of COVID-19.
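To make the described workflow concrete, the sketch below illustrates the general transfer-learning pattern the abstract refers to: a pre-trained CNN (VGG16 here) fine-tuned for binary COVID-19 / normal chest X-ray classification, a Diagnostic Odds Ratio helper, and conversion for smartphone deployment. This is a minimal illustration assuming TensorFlow/Keras; the function names, directory layout, and hyperparameters are assumptions for demonstration, not the authors' actual code.

```
# Illustrative transfer-learning sketch (TensorFlow/Keras assumed), not the
# authors' exact pipeline. Names such as build_covid_classifier and the
# data/train, data/val directories are hypothetical.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

def build_covid_classifier(input_shape=(224, 224, 3)):
    # Load VGG16 pre-trained on ImageNet and freeze its convolutional base.
    base = VGG16(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False
    # Small classification head for the binary COVID-19 / normal task.
    # (Input preprocessing/normalization is omitted for brevity.)
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

def diagnostic_odds_ratio(tp, fp, fn, tn):
    # DOR = positive likelihood ratio / negative likelihood ratio.
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    plr = sensitivity / (1.0 - specificity)    # positive likelihood ratio
    nlr = (1.0 - sensitivity) / specificity    # negative likelihood ratio
    return plr / nlr

if __name__ == "__main__":
    # Hypothetical dataset layout: data/train/{covid,normal}/, data/val/{covid,normal}/
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "data/train", image_size=(224, 224), batch_size=32, label_mode="binary")
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "data/val", image_size=(224, 224), batch_size=32, label_mode="binary")
    model = build_covid_classifier()
    model.fit(train_ds, validation_data=val_ds, epochs=10)
    # Convert the trained model for smartphone deployment via TensorFlow Lite.
    tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
    with open("covid_classifier.tflite", "wb") as f:
        f.write(tflite_model)
```

The DOR helper mirrors the metric mentioned in the abstract: a larger positive likelihood ratio and a smaller negative likelihood ratio both drive the DOR up, which is why it is reported as a single summary of diagnostic performance.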