

COVID-19 Detection from Chest X-ray Images Using Feature Fusion and Deep Learning.

Affiliations

Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail 1902, Bangladesh.

Department of Engineering, Manchester Metropolitan University, Chester St, Manchester M1 5GD, UK.

Publication Information

Sensors (Basel). 2021 Feb 20;21(4):1480. doi: 10.3390/s21041480.

Abstract

COVID-19, caused by the novel coronavirus, is currently considered one of the most dangerous and deadly diseases affecting humans. The virus, thought to have originated in Wuhan, China, in December 2019, spread rapidly around the world and has been responsible for a large number of deaths. Earlier detection of COVID-19 through accurate diagnosis, particularly for cases with no obvious symptoms, may decrease the patient death rate. Chest X-ray images are primarily used for the diagnosis of this disease. This research proposes a machine vision approach to detect COVID-19 from chest X-ray images. Features extracted from the X-ray images by the histogram of oriented gradients (HOG) and a convolutional neural network (CNN) were fused to develop the classification model, which was trained with a CNN (VGGNet). A modified anisotropic diffusion filtering (MADF) technique was employed for better edge preservation and noise reduction in the images. A watershed segmentation algorithm was used to mark the significant fracture region in the input X-ray images. The testing stage used generalized data for performance evaluation of the model. Cross-validation analysis revealed that a 5-fold strategy could successfully mitigate the overfitting problem. The proposed feature fusion with deep learning achieved satisfactory performance in identifying COVID-19 compared with closely related work, with a testing accuracy of 99.49%, specificity of 95.7%, and sensitivity of 93.65%. Compared to other classification techniques, such as ANN, KNN, and SVM, the CNN technique used in this study showed better classification performance. K-fold cross-validation demonstrated that the proposed feature fusion technique (98.36%) provided higher accuracy than the individual feature extraction methods, HOG (87.34%) or CNN (93.64%) alone.
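The feature-fusion step described in the abstract can be illustrated with a minimal sketch, assuming 224x224 grayscale X-ray arrays and a frozen ImageNet-pretrained VGG16 backbone as the deep feature extractor; the HOG parameters and the small dense classifier head are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of the HOG + CNN feature-fusion idea, assuming 224x224 grayscale
# chest X-ray arrays with pixel values in [0, 255]. The HOG parameters, the frozen
# ImageNet-pretrained VGG16 backbone, and the dense classifier head are illustrative
# assumptions, not the authors' published implementation.
import numpy as np
from skimage.feature import hog
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras import layers, models

# Frozen VGG16 backbone used purely as a deep feature extractor (global-average pooled).
backbone = VGG16(weights="imagenet", include_top=False,
                 pooling="avg", input_shape=(224, 224, 3))
backbone.trainable = False

def hog_features(img):
    """Hand-crafted HOG descriptor of one grayscale image (2D array)."""
    return hog(img, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), feature_vector=True)

def cnn_features(img):
    """Deep features for one grayscale image from the frozen VGG16 backbone."""
    rgb = np.repeat(img[..., None], 3, axis=-1)            # replicate the single channel to RGB
    batch = preprocess_input(rgb[None].astype("float32"))  # VGG-style preprocessing
    return backbone.predict(batch, verbose=0)[0]           # (512,) pooled feature vector

def fused_features(img):
    """Concatenate the hand-crafted (HOG) and learned (CNN) descriptors."""
    return np.concatenate([hog_features(img), cnn_features(img)])

def build_classifier(dim):
    """Small dense head trained on fused vectors (binary: COVID-19 vs. normal)."""
    clf = models.Sequential([
        layers.Input(shape=(dim,)),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),
    ])
    clf.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return clf
```

The 5-fold evaluation reported in the abstract could be wrapped around this sketch with scikit-learn's StratifiedKFold, training one classifier per fold on the fused vectors; the MADF denoising and watershed segmentation steps would be applied to the X-ray images before feature extraction and are not shown here.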


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f2b5/8078171/260ec5177831/sensors-21-01480-g001.jpg
