School of Integrated Technology, Yonsei University, Songdogwahak-ro 85, Yeonsu-gu, Incheon, South Korea.
Med Image Anal. 2021 Jan;67:101883. doi: 10.1016/j.media.2020.101883. Epub 2020 Oct 27.
Motion artifacts are a major factor that can degrade the diagnostic performance of computed tomography (CT) images. In particular, motion artifacts become considerably more severe when an imaging system requires a long scan time, such as in dental CT or cone-beam CT (CBCT) applications, where patients generate both rigid and non-rigid motions. To address this problem, we propose a new real-time technique for motion artifact reduction that utilizes a deep residual network with an attention module. Our attention module is designed to increase the model capacity by amplifying or attenuating the residual features according to their importance. We trained and evaluated the network on four benchmark datasets that we created, containing rigid motions or both rigid and non-rigid motions under a step-and-shoot fan-beam CT (FBCT) or a CBCT. Each dataset provides pairs of motion-corrupted CT images and their ground-truth CT images. The strong modeling power of the proposed network allowed us to successfully handle motion artifacts from the two CT systems under various motion scenarios in real time. As a result, the proposed model demonstrated clear performance benefits. In addition, we compared our model with Wasserstein generative adversarial network (WGAN)-based models and a deep residual network (DRN)-based model, which are among the most powerful techniques for CT denoising and natural RGB image deblurring, respectively. Based on extensive analysis and comparisons using the four benchmark datasets, we confirmed that our model outperformed the aforementioned competitors. Our benchmark datasets and implementation code are available at https://github.com/youngjun-ko/ct_mar_attention.
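The abstract describes an attention module that amplifies or attenuates residual features according to their importance, but does not give its exact architecture. A minimal sketch of one common realization of this idea, squeeze-and-excitation-style channel attention in plain NumPy, is shown below; all function and parameter names (`channel_attention`, `w1`, `w2`, the reduction ratio `r`) are hypothetical and not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(features, w1, w2):
    """Scale residual feature maps by a learned per-channel importance.

    features: (C, H, W) residual feature maps.
    w1: (C//r, C) bottleneck weights; w2: (C, C//r) expansion weights.
    """
    squeeze = features.mean(axis=(1, 2))       # (C,) global average pooling
    hidden = np.maximum(w1 @ squeeze, 0.0)     # ReLU bottleneck
    weights = sigmoid(w2 @ hidden)             # (C,) importance scores in (0, 1)
    # Amplify or attenuate each channel by its importance score.
    return features * weights[:, None, None]

# Toy example with random features and weights.
rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
feats = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
out = channel_attention(feats, w1, w2)
```

Because the sigmoid bounds each channel weight in (0, 1), this particular formulation can only attenuate channels relative to the input; designs that also amplify (as the abstract states) would use an unbounded or residual gating instead, which is why the exact module must be taken from the authors' released code.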