Liu Xi, Yang Ruijie, Xiong Tianyu, Yang Xueying, Li Wen, Song Liming, Zhu Jiarui, Wang Mingqing, Cai Jing, Geng Lisheng
School of Physics, Beihang University, Beijing 102206, China.
Department of Radiation Oncology, Cancer Center, Peking University Third Hospital, Beijing 100191, China.
Cancers (Basel). 2023 Nov 20;15(22):5479. doi: 10.3390/cancers15225479.
To develop a deep learning framework based on a hybrid dataset to enhance the quality of CBCT images and obtain accurate HU values.
A total of 228 cervical cancer patients treated on different LINACs were enrolled. We developed an encoder-decoder architecture with residual learning and skip connections. The model was hierarchically trained and validated on 5279 paired CBCT/planning CT images and tested on 1302 paired images. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) were used to assess the quality of the synthetic CT images generated by our model.
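The residual-learning idea described above can be sketched in a toy form: instead of regressing the planning-CT intensities directly, the network predicts an HU correction that is added back to the CBCT input, while a skip connection carries encoder features to the decoder. The functions and shapes below are illustrative stand-ins, not the authors' actual architecture.

```python
import numpy as np

def avg_pool2x(x):
    """Encoder step: 2x2 average pooling (downsampling)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2x(x):
    """Decoder step: nearest-neighbour upsampling back to input size."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def toy_encoder_decoder(cbct):
    """One encoder/decoder level with a skip connection and residual output.

    Hypothetical sketch: a real network would use learned convolutions here.
    """
    bottleneck = avg_pool2x(cbct)       # encoder path
    decoded = upsample2x(bottleneck)    # decoder path
    skip = 0.5 * (decoded + cbct)       # skip connection merges input features
    residual = skip - cbct              # predicted HU correction (toy stand-in)
    return cbct + residual              # residual learning: output = input + residual

cbct = np.arange(16.0).reshape(4, 4)   # dummy 4x4 "CBCT" patch
sct = toy_encoder_decoder(cbct)        # synthetic-CT patch, same shape as input
```

The point of the residual formulation is that CBCT and planning CT are already similar, so the network only has to learn the (smaller, easier) correction term.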
The MAE between the synthetic CT images generated by our model and the planning CT was 10.93 HU, compared with 50.02 HU for the CBCT images. The PSNR increased from 27.79 dB to 33.91 dB, and the SSIM increased from 0.76 to 0.90. Compared with synthetic CT images generated by a convolutional neural network with residual blocks, our model performed better both qualitatively and quantitatively.
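The three reported metrics can be computed as follows. This is a hedged sketch: the `data_range` value and SSIM constants are conventional choices (and the SSIM here is a single-window, global variant rather than the usual locally windowed one), not parameters taken from the paper.

```python
import numpy as np

def mae(sct, pct):
    """Mean absolute error in HU between synthetic CT and planning CT."""
    return np.mean(np.abs(sct - pct))

def psnr(sct, pct, data_range=4096.0):
    """Peak signal-to-noise ratio in dB; data_range ~ CT HU span (assumed)."""
    mse = np.mean((sct - pct) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range=4096.0):
    """Simplified global SSIM; real evaluations average over local windows."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

pct = np.zeros((8, 8))          # dummy planning-CT patch
sct = pct + 10.0                # dummy synthetic CT with a constant 10 HU offset
err = mae(sct, pct)             # -> 10.0 HU
quality = psnr(sct, pct)        # higher is better
```

Higher PSNR and SSIM (closer to 1) indicate the synthetic CT is closer to the planning CT, while a lower MAE indicates more accurate HU values.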
Our model could synthesize CT images with enhanced image quality and accurate HU values. The synthetic CT images preserved the edges of tissues well, which is important for downstream tasks in adaptive radiotherapy.