School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA.
Int J Med Inform. 2020 Dec;144:104284. doi: 10.1016/j.ijmedinf.2020.104284. Epub 2020 Sep 23.
This study aims to develop and test a new computer-aided diagnosis (CAD) scheme for chest X-ray images to detect coronavirus (COVID-19) infected pneumonia.
The CAD scheme first applies two image preprocessing steps: it removes the majority of the diaphragm region and then processes the original image with a histogram-equalization algorithm and a bilateral low-pass filter. The original image and the two filtered images are then combined to form a pseudo color image, which is fed into the three input channels of a transfer learning-based convolutional neural network (CNN) model to classify chest X-ray images into three classes: COVID-19 infected pneumonia, other community-acquired non-COVID-19 pneumonia, and normal (non-pneumonia) cases. To build and test the CNN model, a publicly available dataset of 8474 chest X-ray images is used, comprising 415, 5179, and 2880 cases in the three classes, respectively. The dataset is randomly divided into training, validation, and testing subsets that preserve the relative frequency of cases in each class.
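To make the pseudo color construction and the transfer-learning setup concrete, the sketch below shows one plausible implementation using OpenCV, NumPy, and torchvision. The filter parameters and the VGG16 backbone are assumptions for illustration only; the abstract does not specify the exact values or architecture used by the authors.

```python
import cv2
import numpy as np
import torch.nn as nn
from torchvision import models

def make_pseudo_color(gray: np.ndarray) -> np.ndarray:
    """Build a 3-channel pseudo color image from a grayscale chest X-ray:
    channel 0 = original, channel 1 = histogram-equalized,
    channel 2 = bilateral low-pass filtered.
    Filter parameters below are illustrative placeholders, not the paper's values."""
    original = gray.astype(np.uint8)
    equalized = cv2.equalizeHist(original)          # histogram equalization
    smoothed = cv2.bilateralFilter(original, 9, 75, 75)  # bilateral low-pass filter
    return np.dstack([original, equalized, smoothed])    # shape (H, W, 3)

# Transfer-learning classifier: a pretrained ImageNet backbone with a new
# 3-class head (COVID-19 pneumonia, other community-acquired pneumonia, normal).
# VGG16 is used here only as an example backbone.
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
backbone.classifier[6] = nn.Linear(backbone.classifier[6].in_features, 3)
```

In this sketch, the three preprocessed versions of the same radiograph occupy the three input channels that a pretrained RGB network expects, which is one straightforward way to reuse an ImageNet backbone on single-channel X-ray data.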
The CNN-based CAD scheme yields an overall accuracy of 94.5 % (2404/2544), with a 95 % confidence interval of [0.93, 0.96], in classifying the three classes. The CAD scheme also yields 98.4 % sensitivity (124/126) and 98.0 % specificity (2371/2418) in classifying cases with and without COVID-19 infection. However, without the two preprocessing steps, the CAD scheme yields a lower classification accuracy of 88.0 % (2239/2544).
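The reported metrics follow directly from the stated counts; the short sketch below recomputes them. A normal-approximation confidence interval is assumed here, since the abstract does not state which interval method the authors used.

```python
import math

def proportion_ci(k: int, n: int, z: float = 1.96):
    """Point estimate and normal-approximation 95% CI for a proportion k/n."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, (p - half, p + half)

acc, acc_ci = proportion_ci(2404, 2544)   # overall 3-class accuracy ~0.945
sens, _ = proportion_ci(124, 126)         # COVID-19 sensitivity ~0.984
spec, _ = proportion_ci(2371, 2418)       # COVID-19 specificity ~0.980

print(f"accuracy={acc:.3f}, 95% CI=({acc_ci[0]:.3f}, {acc_ci[1]:.3f})")
print(f"sensitivity={sens:.3f}, specificity={spec:.3f}")
```

Under this approximation the accuracy interval comes out near [0.936, 0.954], consistent with the rounded [0.93, 0.96] reported above.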
This study demonstrates that adding the two image preprocessing steps and generating a pseudo color image play an important role in developing a deep learning CAD scheme for chest X-ray images, improving accuracy in detecting COVID-19 infected pneumonia.