School of Integrated Technology, Yonsei University, Incheon, South Korea.
Med Phys. 2022 Dec;49(12):7497-7515. doi: 10.1002/mp.15885. Epub 2022 Aug 8.
Sparse-view computed tomography (CT) has been attracting attention for its reduced radiation dose and scanning time. However, analytical image reconstruction methods suffer from streak artifacts due to insufficient projection views. Recently, various deep learning-based methods have been developed to solve this ill-posed inverse problem. Despite their promising results, they are easily overfitted to the training data, showing limited generalizability to unseen systems and patients. In this work, we propose a novel streak artifact reduction algorithm that provides a system- and patient-specific solution.
Motivated by the fact that streak artifacts are deterministic errors, we regenerate the same artifacts from a prior CT image under the same system geometry. This prior image need not be perfect but should contain patient-specific information and be consistent with full-view projection data for accurate regeneration of the artifacts. To this end, we use a coordinate-based neural representation that often causes image blur but can greatly suppress the streak artifacts while maintaining multiview consistency. By employing techniques from neural radiance fields, originally proposed for scene representation, the neural representation is optimized to fit the measured sparse-view projection data via self-supervised learning. Then, we subtract the regenerated artifacts from the analytically reconstructed original image to obtain the final corrected image.
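To make the regeneration-and-subtraction idea concrete, below is a minimal 2-D parallel-beam sketch in Python (NumPy, scikit-image, PyTorch). The view count, file name, network size, and the use of scikit-image's radon/iradon as the forward projector and FBP are illustrative assumptions rather than the paper's implementation, and the self-supervised fitting of the coordinate network (which requires a differentiable projector under the scanner's actual geometry) is only indicated in comments.

import numpy as np
import torch
import torch.nn as nn
from skimage.transform import radon, iradon

class CoordinateMLP(nn.Module):
    """NeRF-style coordinate network: (x, y) -> attenuation, with positional encoding."""
    def __init__(self, n_freqs=8, width=256):
        super().__init__()
        self.n_freqs = n_freqs
        in_dim = 2 * 2 * n_freqs                       # 2 coords x (sin, cos) x n_freqs
        self.net = nn.Sequential(
            nn.Linear(in_dim, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 1),
        )

    def encode(self, xy):
        freqs = np.pi * (2.0 ** torch.arange(self.n_freqs, dtype=torch.float32, device=xy.device))
        ang = xy[..., None] * freqs                    # (..., 2, n_freqs)
        return torch.cat([torch.sin(ang), torch.cos(ang)], dim=-1).flatten(-2)

    def forward(self, xy):
        return self.net(self.encode(xy)).squeeze(-1)

def render_image(mlp, size):
    """Evaluate the coordinate network on a pixel grid to obtain the prior image."""
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, size),
                            torch.linspace(-1, 1, size), indexing="ij")
    xy = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    with torch.no_grad():
        img = mlp(xy).reshape(size, size)
    mask = (xs ** 2 + ys ** 2) <= 1.0                  # keep radon/iradon self-consistent
    return (img * mask).numpy()

# --- pipeline (hypothetical view count and file name) --------------------------
n_views = 60
theta = np.linspace(0.0, 180.0, n_views, endpoint=False)
sino_sparse = np.load("sino_sparse.npy")               # measured sparse-view sinogram

# 1) Analytical (FBP) reconstruction from sparse views -> streak-corrupted image.
x_fbp = iradon(sino_sparse, theta=theta, filter_name="ramp")

# 2) Prior image from the coordinate-based representation. In the paper the network
#    is fitted by self-supervision so that its re-projections under the same system
#    geometry match sino_sparse; here we simply render an (assumed) fitted network.
mlp = CoordinateMLP()
x_prior = render_image(mlp, size=x_fbp.shape[0])

# 3) Regenerate the deterministic streaks: re-project the prior at the same sparse
#    views, reconstruct it with the same FBP, and subtract the prior itself.
x_prior_fbp = iradon(radon(x_prior, theta=theta), theta=theta, filter_name="ramp")
streaks = x_prior_fbp - x_prior

# 4) Subtract the regenerated artifacts from the original FBP image.
x_corrected = x_fbp - streaks

The key point of the sketch is step 3: because the streaks are deterministic for a given geometry, applying the same sparse-view FBP to a streak-free, multiview-consistent prior regenerates them, and they can then be subtracted from the original reconstruction.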
To validate the proposed method, we used simulated data from extended cardiac-torso phantoms and the 2016 NIH-AAPM-Mayo Clinic Low-Dose CT Grand Challenge, as well as experimental data from physical pediatric and head phantoms. The performance of the proposed method was compared with a total variation-based iterative reconstruction method, a naive application of the neural representation, and a convolutional neural network-based method. In visual inspection, small anatomical features were best preserved by the proposed method. The proposed method also achieved the best scores in visual information fidelity, modulation transfer function, and lung nodule segmentation.
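As a side note on the evaluation, the modulation transfer function mentioned above can be estimated from a profile across a sharp edge; the short sketch below (plain NumPy, with a hypothetical pixel pitch) follows the standard edge-to-LSF-to-MTF procedure and is not taken from the paper.

import numpy as np

def mtf_from_edge(edge_profile, pixel_pitch_mm):
    """Estimate the MTF from a 1-D edge spread function (ESF): differentiate to get
    the line spread function (LSF), window it, and take the normalized |FFT|."""
    lsf = np.gradient(np.asarray(edge_profile, dtype=float))
    lsf = lsf * np.hanning(lsf.size)                   # suppress noisy tails
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0]                                 # normalize to 1 at zero frequency
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)  # cycles per mm
    return freqs, mtf

# e.g., for a profile sampled across an edge in the reconstructed image:
# freqs, mtf = mtf_from_edge(profile, pixel_pitch_mm=0.5)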
The results on both simulated and experimental data suggest that the proposed method can effectively reduce streak artifacts while preserving small anatomical structures that are easily blurred or replaced with misleading features by the existing methods. Since the proposed method does not require any additional training datasets, it would be useful in clinical practice, where large datasets cannot be collected.