Abdar Moloud, Salari Soorena, Qahremani Sina, Lam Hak-Keung, Karray Fakhri, Hussain Sadiq, Khosravi Abbas, Acharya U Rajendra, Makarenkov Vladimir, Nahavandi Saeid
Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Geelong, Australia.
Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada.
Inf Fusion. 2023 Feb;90:364-381. doi: 10.1016/j.inffus.2022.09.023. Epub 2022 Oct 5.
The COVID-19 (Coronavirus disease 2019) pandemic has become a major global threat to human health and well-being. The development of computer-aided detection (CAD) systems that can accurately distinguish COVID-19 from other diseases using chest computed tomography (CT) and X-ray data is therefore an immediate priority. Such automated systems are usually based on traditional machine learning or deep learning methods. Unlike most existing studies, which used either CT scans or X-ray images for COVID-19 case classification, we present a new, simple yet efficient deep learning feature fusion model, called UncertaintyFuseNet, which can accurately classify large datasets of both image types. We argue that the uncertainty of the model's predictions should be taken into account in the learning process, even though most existing studies have overlooked it. We quantify the prediction uncertainty in our feature fusion model using an effective Ensemble Monte Carlo Dropout (EMCD) technique. A comprehensive simulation study was conducted to compare our new model with existing approaches, evaluating the performance of competing models in terms of Precision, Recall, F-Measure, Accuracy and ROC curves. The obtained results demonstrate the efficiency of our model, which achieved prediction accuracies of 99.08% and 96.35% on the considered CT scan and X-ray datasets, respectively. Moreover, our model was generally robust to noise and performed well with previously unseen data. The source code of our implementation is freely available at: https://github.com/moloud1987/UncertaintyFuseNet-for-COVID-19-Classification.
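For readers unfamiliar with the uncertainty quantification technique named in the abstract, the following is a minimal sketch of Monte Carlo Dropout at inference time, written in Python/TensorFlow. It is not the authors' UncertaintyFuseNet implementation (see the linked repository for that); the function name, sample count, and the assumption of a Keras model containing Dropout layers are illustrative only.

```python
# Minimal sketch of Monte Carlo Dropout uncertainty estimation.
# Assumes `model` is a tf.keras model that includes Dropout layers;
# specifics of UncertaintyFuseNet (feature fusion, ensembling) are not reproduced here.
import numpy as np
import tensorflow as tf


def mc_dropout_predict(model, x, n_samples=30):
    """Run repeated stochastic forward passes with dropout kept active."""
    # training=True keeps dropout enabled at inference time,
    # so each forward pass yields a different prediction sample.
    preds = np.stack([model(x, training=True).numpy() for _ in range(n_samples)])
    mean = preds.mean(axis=0)   # averaged class probabilities
    std = preds.std(axis=0)     # per-class predictive spread
    # Predictive entropy of the averaged distribution as an uncertainty score.
    entropy = -np.sum(mean * np.log(mean + 1e-12), axis=-1)
    return mean, std, entropy
```

In this sketch, low entropy and small standard deviation indicate confident predictions, while high values flag inputs (e.g., noisy or previously unseen images) whose classification should be treated with caution.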