College of Information Science and Technology, Hangzhou Normal University, Hangzhou, China.
College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, China.
Comput Biol Med. 2024 Mar;171:108112. doi: 10.1016/j.compbiomed.2024.108112. Epub 2024 Feb 15.
To prevent patients from being exposed to excess radiation in CT imaging, the most common solution is to decrease the radiation dose by reducing the X-ray exposure; the quality of the resulting low-dose CT (LDCT) images is consequently degraded, as evidenced by increased noise and streaking artifacts. It is therefore important to maintain high CT image quality while effectively reducing the radiation dose. In recent years, with the rapid development of deep learning, deep learning-based LDCT denoising methods have become popular because their data-driven, high-performance nature yields excellent denoising results. However, to our knowledge, no article has so far comprehensively introduced and reviewed advanced deep learning denoising methods, such as Transformer architectures, for LDCT denoising tasks. Therefore, based on the literature on LDCT image denoising published from 2016 to 2023, and in particular from 2020 to 2023, this study presents a systematic survey of the current state of the field, its challenges, and future research directions. Denoising networks are classified into four types according to network structure: CNN-based, encoder-decoder-based, GAN-based, and Transformer-based; each type is described and summarized in terms of structural features and denoising performance. Representative deep learning denoising methods for LDCT are compared and analyzed experimentally. The results show that CNN-based methods capture image details efficiently through multi-level convolution operations, demonstrating strong denoising performance and adaptability. Encoder-decoder networks trained with MSE loss achieve outstanding results on objective metrics. GAN-based methods, employing innovative generators and discriminators, obtain denoised images that are perceptually closer to normal-dose CT (NDCT) images.
Transformer-based methods show potential for further improving denoising performance owing to their powerful capability to capture global information. Finally, challenges and opportunities for deep learning-based LDCT denoising are analyzed, and future research directions are presented.
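To make the MSE training objective mentioned above concrete, here is a minimal numpy sketch (purely illustrative, not a network from the survey): a single fixed 3x3 smoothing convolution stands in for a learned denoising layer, and the mean-squared error of its output is evaluated against a simulated NDCT patch. The patch shapes, noise level, and kernel are all assumptions chosen for the demonstration.

```python
import numpy as np

def conv2d(img, kernel):
    """'Same' 2D convolution with replicate padding (single channel, simplified)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def mse(pred, target):
    """The objective minimized during training of MSE-based denoisers."""
    return float(np.mean((pred - target) ** 2))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 16)
clean = np.tile(x, (16, 1))                           # stand-in for an NDCT patch
noisy = clean + 0.1 * rng.standard_normal((16, 16))   # simulated LDCT patch

# A fixed 3x3 averaging kernel as a stand-in for one learned convolution layer;
# a real denoiser would learn its kernels by minimizing this MSE over many patches.
kernel = np.full((3, 3), 1.0 / 9.0)
denoised = conv2d(noisy, kernel)

print(mse(noisy, clean), mse(denoised, clean))  # smoothing lowers MSE on this patch
```

Even this trivial filter reduces the MSE on a smooth patch; learned CNN or encoder-decoder denoisers pursue the same objective with far more expressive, trainable layers.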