Peng Junbo, Qiu Richard L J, Wynne Jacob F, Chang Chih-Wei, Pan Shaoyan, Wang Tonghe, Roper Justin, Liu Tian, Patel Pretesh R, Yu David S, Yang Xiaofeng
Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA.
Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA.
Med Phys. 2024 Mar;51(3):1847-1859. doi: 10.1002/mp.16704. Epub 2023 Aug 30.
Daily or weekly cone-beam computed tomography (CBCT) scans are commonly used for accurate patient positioning during image-guided radiotherapy (IGRT), making CBCT an ideal candidate for adaptive radiotherapy (ART) replanning. However, severe artifacts and inaccurate Hounsfield unit (HU) values prevent its use for quantitative applications such as organ segmentation and dose calculation. To enable online ART in clinical practice, it is crucial to obtain CBCT images with a quality comparable to that of CT.
This work aims to develop a conditional diffusion model that translates images from the CBCT distribution to the CT distribution in order to improve CBCT image quality.
The proposed method is a conditional denoising diffusion probabilistic model (DDPM) that uses a time-embedded U-Net architecture with residual and attention blocks to gradually transform a white Gaussian noise sample into the target CT distribution, conditioned on the CBCT. The model was trained on deformed planning CT (dpCT) and CBCT image pairs, and its feasibility was verified in a brain patient study and a head-and-neck (H&N) patient study. The performance of the proposed algorithm was evaluated on the generated synthetic CT (sCT) samples using mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC). The proposed method was also compared to four other diffusion model-based sCT generation methods.
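As a rough illustration of the sampling procedure described above, the sketch below shows how a conditional DDPM might draw an sCT sample by iteratively denoising Gaussian noise with a U-Net conditioned on the CBCT. The denoising network `unet`, the linear beta schedule, and the channel-concatenation conditioning are illustrative assumptions, not details taken from the paper.

```python
import torch

@torch.no_grad()
def sample_sct(unet, cbct, T=1000, device="cuda"):
    """Illustrative conditional DDPM sampler: start from white Gaussian noise
    and iteratively denoise, conditioning the U-Net on the CBCT at each step."""
    # Standard DDPM linear beta schedule and derived quantities (assumed values).
    betas = torch.linspace(1e-4, 0.02, T, device=device)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x_t = torch.randn_like(cbct)  # start from a white Gaussian noise sample
    for t in reversed(range(T)):
        t_batch = torch.full((cbct.shape[0],), t, device=device, dtype=torch.long)
        # Condition the noise prediction by concatenating the CBCT with x_t.
        eps_hat = unet(torch.cat([x_t, cbct], dim=1), t_batch)
        # DDPM posterior mean: (x_t - beta_t / sqrt(1 - alpha_bar_t) * eps) / sqrt(alpha_t)
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x_t - coef * eps_hat) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
        x_t = mean + torch.sqrt(betas[t]) * noise
    return x_t  # synthetic CT sample conditioned on the input CBCT
```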
In the brain patient study, the MAE, PSNR, and NCC of the generated sCT were 25.99 HU, 30.49 dB, and 0.99, respectively, compared to 40.63 HU, 27.87 dB, and 0.98 for the CBCT images. In the H&N patient study, the corresponding values were 32.56 HU, 27.65 dB, and 0.98 for the sCT and 38.99 HU, 27.00 dB, and 0.98 for the CBCT. Compared to the other four diffusion models and a cycle-consistent generative adversarial network (Cycle GAN), the proposed method showed superior results in both visual quality and quantitative analysis.
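The metrics reported above follow their standard definitions; a minimal sketch of how they might be computed from an sCT volume and its dpCT reference (both assumed to be NumPy arrays in HU, with the PSNR data range an assumed HU window rather than a value from the paper) is:

```python
import numpy as np

def mae_hu(sct, ref):
    # Mean absolute error in Hounsfield units.
    return np.mean(np.abs(sct - ref))

def psnr_db(sct, ref, data_range=2000.0):
    # Peak signal-to-noise ratio in dB; data_range is an assumed HU window.
    mse = np.mean((sct - ref) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ncc(sct, ref):
    # Normalized cross-correlation between the two volumes.
    a = (sct - sct.mean()) / sct.std()
    b = (ref - ref.mean()) / ref.std()
    return np.mean(a * b)
```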
The proposed conditional DDPM method can generate sCT from CBCT with accurate HU numbers and reduced artifacts, enabling accurate CBCT-based organ segmentation and dose calculation for online ART.