IEEE J Biomed Health Inform. 2022 Jan;26(1):194-205. doi: 10.1109/JBHI.2021.3132157. Epub 2022 Jan 17.
With the ongoing worldwide coronavirus disease 2019 (COVID-19) pandemic, it is desirable to develop effective algorithms that automatically detect COVID-19 from chest computed tomography (CT) images. Recently, a considerable number of deep learning-based methods have indeed been proposed. However, training an accurate deep learning model requires a large-scale chest CT dataset, which is hard to collect due to the high contagiousness of COVID-19. To achieve improved detection performance, this paper proposes a hybrid framework that fuses the complex shearlet scattering transform (CSST) and a suitable convolutional neural network into a single model. The introduced CSST cascades complex shearlet transforms with modulus nonlinearities and low-pass filter convolutions to compute a sparse and locally invariant image representation. The features computed from the input chest CT images are discriminative for COVID-19 detection. Furthermore, a wide residual network with redesigned residual blocks (WR2N) is developed and applied to the scattering features to learn finer-grained multiscale representations. Combining the model-based CSST with the data-driven WR2N yields a more compact neural network for image representation: the network needs to learn only the image structure that the CSST cannot capture, rather than the full representation. Experiments on two public datasets demonstrate the superiority of our method. It obtains more accurate results than several state-of-the-art COVID-19 classification methods in terms of measures such as accuracy, the F1-score, and the area under the receiver operating characteristic curve.
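The CSST pipeline the abstract describes (complex directional filtering, followed by a modulus nonlinearity, followed by low-pass averaging) can be illustrated with a minimal one-layer sketch. This is not the authors' implementation: simple Gaussian bumps in the frequency domain stand in for the complex shearlet filters, and the filter count, center frequency, and bandwidths below are illustrative choices.

```python
import numpy as np

def filter_bank(shape, n_orient=4, freq=0.25, bw=0.05):
    """Hypothetical bank of oriented complex band-pass filters (frequency
    domain), standing in for the directional complex shearlets of the CSST."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    filters = []
    for k in range(n_orient):
        theta = np.pi * k / n_orient
        cy, cx = freq * np.cos(theta), freq * np.sin(theta)
        # one-sided Gaussian bump -> complex-valued impulse response,
        # so the modulus acts as an envelope detector
        filters.append(np.exp(-((fy - cy) ** 2 + (fx - cx) ** 2) / (2 * bw ** 2)))
    return filters

def lowpass(shape, sigma=0.05):
    """Gaussian low-pass filter (frequency domain) for the averaging step."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-(fy ** 2 + fx ** 2) / (2 * sigma ** 2))

def scattering(img, psis, phi):
    """One scattering layer: complex filtering -> modulus -> low-pass averaging.
    Returns a stack of locally invariant feature maps."""
    F = np.fft.fft2(img)
    coeffs = [np.real(np.fft.ifft2(F * phi))]          # zeroth order: averaged image
    for psi in psis:
        u = np.abs(np.fft.ifft2(F * psi))              # modulus nonlinearity
        coeffs.append(np.real(np.fft.ifft2(np.fft.fft2(u) * phi)))  # averaging
    return np.stack(coeffs)                            # (1 + n_orient, H, W)

img = np.random.rand(64, 64)
feats = scattering(img, filter_bank(img.shape), lowpass(img.shape))
print(feats.shape)  # (5, 64, 64)
```

In the full framework, maps like these (from cascaded CSST layers) would be fed to the WR2N as input channels, so the learned network only has to model what the fixed transform misses.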