Hu Ying, Cheng Mengjie, Wei Hui, Liang Zhiwen
School of Mathematics and Statistics, Hubei University of Education, Wuhan, Hubei, China.
Bigdata Modeling and Intelligent Computing Research Institute, Hubei University of Education, Wuhan, Hubei, China.
Front Oncol. 2024 Aug 8;14:1440944. doi: 10.3389/fonc.2024.1440944. eCollection 2024.
Cone-beam computed tomography (CBCT) is a convenient imaging method for adaptive radiation therapy (ART), but its application is often hindered by poor image quality. We aim to develop a unified deep learning model that can consistently enhance the quality of CBCT images across various anatomical sites by generating synthetic CT (sCT) images.
A dataset of paired CBCT and planning CT images was collected from 135 cancer patients with head and neck, chest, or abdominal tumors. The dataset was selected for its rich anatomical diversity and range of scanning parameters to ensure comprehensive model training. Because registration between paired images is imperfect, local structural misalignment is an inherent challenge that can lead to suboptimal model performance. To address this limitation, we propose SynREG, a supervised learning framework. SynREG integrates a hybrid CNN-transformer architecture for generating high-fidelity sCT images with a registration network that dynamically corrects local structural misalignment during training (see the sketch below). An independent test set of 23 additional patients was used to evaluate image quality, and the results were compared with those of several benchmark models (pix2pix, cycleGAN and SwinIR). The performance of an autosegmentation application was also assessed.
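A minimal PyTorch-style sketch of this training scheme is shown below. It is an illustration under assumptions, not the authors' implementation: the generator and registration-network internals, the L1 intensity loss, the displacement-field smoothness penalty, and all names (`synreg_step`, `spatial_transform`, `smooth_weight`) are hypothetical.

```python
# Hypothetical sketch of one SynREG-style training step (details assumed).
# generator: hybrid CNN-transformer mapping CBCT -> sCT (architecture not shown).
# reg_net: registration network predicting a dense displacement field that warps
# the planning CT onto the sCT, so the intensity loss is computed on anatomically
# aligned pairs instead of the imperfectly registered originals.
import torch
import torch.nn.functional as F

def spatial_transform(image, flow):
    """Warp `image` (N,1,H,W) with a displacement field `flow` (N,2,H,W)."""
    n, _, h, w = image.shape
    # Identity grid in normalized [-1, 1] coordinates, as grid_sample expects.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=image.device),
        torch.linspace(-1, 1, w, device=image.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    # Channel 0 holds x-offsets, channel 1 y-offsets, in normalized coordinates.
    offset = flow.permute(0, 2, 3, 1)
    return F.grid_sample(image, grid + offset, align_corners=True)

def synreg_step(generator, reg_net, cbct, plan_ct, optimizer, smooth_weight=0.01):
    # `optimizer` is assumed to cover both networks' parameters.
    optimizer.zero_grad()
    sct = generator(cbct)                         # synthetic CT from CBCT
    flow = reg_net(torch.cat((sct, plan_ct), 1))  # predicted deformation field
    warped_ct = spatial_transform(plan_ct, flow)  # CT aligned to sCT anatomy
    loss_img = F.l1_loss(sct, warped_ct)          # fidelity on the aligned pair
    # Smoothness penalty keeps the field from "explaining away" intensity errors.
    loss_smooth = (flow[..., 1:, :] - flow[..., :-1, :]).abs().mean() + \
                  (flow[..., 1:] - flow[..., :-1]).abs().mean()
    loss = loss_img + smooth_weight * loss_smooth
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key design point is that the intensity loss is computed between the sCT and a dynamically warped planning CT, so residual registration error is absorbed by the deformation field rather than penalizing the generator.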
The proposed model disentangled sCT generation from anatomical correction, leading to a more rational optimization process. As a result, the model effectively suppressed noise and artifacts in multisite applications, significantly enhancing CBCT image quality. Specifically, SynREG reduced the mean absolute error (MAE) to 16.81 ± 8.42 HU and increased the structural similarity index (SSIM) to 94.34 ± 2.85%, improvements over the raw CBCT data, which had an MAE of 26.74 ± 10.11 HU and an SSIM of 89.73 ± 3.46%. The enhanced image quality was particularly beneficial for organs with low contrast resolution, significantly increasing the accuracy of automatic segmentation in these regions. Notably, for the brainstem, the mean Dice similarity coefficient (DSC) increased from 0.61 to 0.89, and the mean distance to agreement (MDA) decreased from 3.72 mm to 0.98 mm, indicating a substantial improvement in segmentation accuracy and precision.
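For concreteness, the reported metrics can be computed roughly as follows. This is a hedged sketch, not the paper's evaluation code: the function names, the SSIM data range, and the symmetric surface-distance variant of MDA are assumptions.

```python
# Rough sketch of the reported evaluation metrics (implementation assumed).
# Images are HU arrays; masks are boolean arrays of matching shape.
import numpy as np
from skimage.metrics import structural_similarity
from scipy.ndimage import binary_erosion, distance_transform_edt

def mae_hu(sct, ct):
    return np.mean(np.abs(sct - ct))  # mean absolute error in HU

def ssim_percent(sct, ct):
    # Data range taken from the reference CT; the paper's choice is assumed.
    return 100 * structural_similarity(sct, ct, data_range=ct.max() - ct.min())

def dice(a, b):
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def mean_distance_to_agreement(a, b, spacing=(1.0, 1.0)):
    # Symmetric mean surface distance in mm (one common MDA definition;
    # pass a 3-tuple spacing for volumetric masks).
    def surface(mask):
        return mask & ~binary_erosion(mask)
    sa, sb = surface(a), surface(b)
    da = distance_transform_edt(~sb, sampling=spacing)[sa]  # a-surface -> b
    db = distance_transform_edt(~sa, sampling=spacing)[sb]  # b-surface -> a
    return np.concatenate((da, db)).mean()
```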
SynREG can effectively alleviate residual anatomical differences between paired datasets and enhance the quality of CBCT images.