Kulathilake K A Saneera Hemantha, Abdullah Nor Aniza, Sabri Aznul Qalid Md, Lai Khin Wee
Department of Computer System and Technology, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia.
Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia.
Complex Intell Systems. 2023;9(3):2713-2745. doi: 10.1007/s40747-021-00405-x. Epub 2021 May 30.
Computed Tomography (CT) is a widely used medical imaging modality in clinical medicine because it produces excellent visualizations of fine structural details of the human body. In clinical procedures, it is desirable to acquire CT scans while minimizing the X-ray flux to avoid exposing patients to high radiation doses. However, these Low-Dose CT (LDCT) scanning protocols compromise the signal-to-noise ratio of the CT images by introducing noise and artifacts over the image space. Thus, various restoration methods have been published over the past three decades to produce high-quality CT images from these LDCT images. More recently, in contrast to conventional LDCT restoration methods, Deep Learning (DL)-based LDCT restoration approaches have become common because they are data-driven, high-performing, and fast to execute. This study therefore aims to elaborate on the role of DL techniques in LDCT restoration and to critically review the applications of DL-based approaches for LDCT restoration. To achieve this aim, different aspects of DL-based LDCT restoration applications were analyzed, including DL architectures, performance gains, functional requirements, and the diversity of objective functions. The outcome of the study highlights the existing limitations of and future directions for DL-based LDCT restoration. To the best of our knowledge, no previous review has specifically addressed this topic.
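To make the idea of a data-driven, objective-function-based LDCT restoration concrete, the following is a minimal illustrative sketch, not any specific method reviewed in the paper: a small DnCNN-style residual convolutional network (in PyTorch, assumed here for illustration) that maps a low-dose slice toward its normal-dose counterpart and is trained with a simple MSE objective, one of the many objective functions the review discusses.

```python
# Illustrative sketch only (hypothetical architecture and shapes, not the
# authors' method): a residual CNN denoiser for LDCT patches trained with MSE.
import torch
import torch.nn as nn

class SimpleLDCTDenoiser(nn.Module):
    def __init__(self, channels: int = 64, depth: int = 5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, ldct: torch.Tensor) -> torch.Tensor:
        # Residual learning: predict the noise/artifact component and subtract it.
        return ldct - self.body(ldct)

# Toy training step on random tensors standing in for (LDCT, NDCT) patch pairs.
model = SimpleLDCTDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()          # one example of an objective function

ldct = torch.rand(4, 1, 64, 64)   # hypothetical low-dose input patches
ndct = torch.rand(4, 1, 64, 64)   # hypothetical normal-dose targets
loss = criterion(model(ldct), ndct)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In practice, the reviewed approaches vary this basic recipe along the dimensions the abstract lists: deeper or generative architectures, perceptual or adversarial objective functions, and different functional requirements such as sinogram- versus image-domain processing.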