Zhu Jiarui, Sun Hongfei, Chen Weixing, Zhi Shaohua, Liu Chenyang, Zhao Mayang, Zhang Yuanpeng, Zhou Ta, Lam Yu Lap, Peng Tao, Qin Jing, Zhao Lina, Cai Jing, Ren Ge
Department of Health Technology and Informatics, The Hong Kong Polytechnic University, 999077, Hong Kong SAR.
Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an 710032, China.
Comput Med Imaging Graph. 2025 Apr;121:102487. doi: 10.1016/j.compmedimag.2024.102487. Epub 2025 Jan 26.
Thoracic cone-beam computed tomography (CBCT) is routinely collected during image-guided radiation therapy (IGRT) to provide updated patient anatomy information for lung cancer treatments. However, CBCT images often suffer from streaking artifacts and noise caused by under-sampled projections and low-dose exposure, resulting in the loss of lung anatomical details that contain crucial tumor and functional information. While recent deep learning-based CBCT enhancement methods have shown promising results in suppressing artifacts, their ability to preserve anatomical details containing crucial tumor information is limited due to a lack of targeted guidance. To address this issue, we propose a novel feature-targeted deep learning framework that generates ultra-quality pulmonary imaging from CBCT of lung cancer patients via a multi-task customized feature-to-feature perceptual loss function and a feature-guided CycleGAN. The framework comprises two main components: a multi-task learning feature-selection network (MTFS-Net) for building a customized feature-to-feature perceptual loss function (CFP-loss), and a feature-guided CycleGAN network. Our experiments showed that the proposed framework can generate synthesized CT (sCT) images of the lung with high similarity to CT images, achieving an average SSIM of 0.9747 and an average PSNR of 38.5995 globally, and an average Pearson's coefficient of 0.8929 within the tumor region on multi-institutional datasets. The sCT images also achieved visually pleasing results, with effective artifact suppression, noise reduction, and preservation of distinctive anatomical details. Functional imaging tests further demonstrated the pulmonary texture correction performance of the sCT images: the similarity between functional imaging generated from sCT and CT images reached an average DSC of 0.9147, an SCC of 0.9615, and an R value of 0.9661. Comparison experiments with pixel-to-pixel loss also showed that the proposed perceptual loss significantly enhances the performance of the involved generative models. Our experimental results indicate that the proposed framework outperforms state-of-the-art models for pulmonary CBCT enhancement. This framework holds great promise for generating high-quality pulmonary imaging from CBCT that is suitable for supporting further analysis of lung cancer treatment.
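The abstract does not give implementation details for the CFP-loss or the feature-guided CycleGAN. As a rough illustration of the general idea only, the sketch below shows a generic feature-to-feature perceptual loss in PyTorch, where a frozen feature extractor (standing in for MTFS-Net) supplies intermediate feature maps that are compared between the synthesized CT and a reference image, alongside a CycleGAN-style generator objective. The extractor interface, layer names, loss weights, and choice of reference image are illustrative assumptions, not the authors' implementation.

```python
# Minimal, assumption-based sketch of a feature-to-feature perceptual loss.
# "feature_net" stands in for a frozen multi-task feature-selection network
# such as MTFS-Net; its interface (returning a dict of named feature maps)
# and the layer weights are illustrative, not taken from the paper.
import torch
import torch.nn as nn


class FeatureToFeaturePerceptualLoss(nn.Module):
    def __init__(self, feature_net: nn.Module, layer_weights: dict):
        super().__init__()
        self.feature_net = feature_net.eval()
        for p in self.feature_net.parameters():
            p.requires_grad_(False)  # keep the feature extractor fixed
        self.layer_weights = layer_weights
        self.l1 = nn.L1Loss()

    def forward(self, synthesized: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
        # Both inputs are image batches, e.g. (N, 1, H, W) CT slices.
        feats_syn = self.feature_net(synthesized)  # assumed: {layer_name: tensor}
        feats_ref = self.feature_net(reference)
        loss = synthesized.new_zeros(())
        for name, weight in self.layer_weights.items():
            loss = loss + weight * self.l1(feats_syn[name], feats_ref[name])
        return loss


def generator_objective(d_pred_fake, rec_cbct, cbct, sct, ref_ct,
                        perceptual, lambda_cyc=10.0, lambda_perc=1.0):
    """One direction of a CycleGAN-style generator objective with an added
    perceptual term; the weights and the reference image choice are
    assumptions, not the published configuration."""
    adv = torch.mean((d_pred_fake - 1.0) ** 2)    # least-squares adversarial loss
    cyc = nn.functional.l1_loss(rec_cbct, cbct)   # cycle-consistency loss
    perc = perceptual(sct, ref_ct)                # feature-to-feature term
    return adv + lambda_cyc * cyc + lambda_perc * perc
```

In this sketch the perceptual term replaces a plain pixel-to-pixel L1/L2 comparison with distances computed in the feature space of the frozen extractor, which is the general mechanism the abstract contrasts with pixel-to-pixel losses.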