Ramachandran Prabhakar, Anderson Darcie, Colbert Zachery, Arrington Daniel, Huo Michael, Pinkham Mark B, Foote Matthew, Fielding Andrew
Department of Radiation Oncology, Cancer Services, Princess Alexandra Hospital, Queensland, Australia.
School of Chemistry and Physics, Queensland University of Technology, Brisbane, Queensland, Australia.
J Med Phys. 2025 Jan-Mar;50(1):30-37. doi: 10.4103/jmp.jmp_140_24. Epub 2025 Mar 24.
The study aims to develop a modified Pix2Pix convolutional neural network framework to enhance the quality of cone-beam computed tomography (CBCT) images. It also seeks to reduce Hounsfield unit (HU) variations so that CBCT images more closely resemble the internal anatomy depicted in computed tomography (CT) images.
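The abstract does not specify the modifications made to the Pix2Pix framework; the following is a minimal sketch of a standard Pix2Pix-style CBCT-to-sCT translator in PyTorch. The architecture details (channel widths, depth, normalization, loss weight) are illustrative assumptions, not the authors' exact model.

```python
# Minimal sketch of a Pix2Pix-style CBCT-to-sCT translator (PyTorch).
# Channel widths, depth, and the L1 loss weight are illustrative assumptions.
import torch
import torch.nn as nn

def down(in_ch, out_ch):
    # Encoder block: stride-2 convolution halves the spatial resolution.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

def up(in_ch, out_ch):
    # Decoder block: transposed convolution doubles the spatial resolution.
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class UNetGenerator(nn.Module):
    """Small U-Net: CBCT slice in, synthetic CT slice out."""
    def __init__(self):
        super().__init__()
        self.d1, self.d2, self.d3 = down(1, 64), down(64, 128), down(128, 256)
        self.u1 = up(256, 128)
        self.u2 = up(256, 64)   # 256 = 128 (upsampled) + 128 (skip connection)
        self.u3 = up(128, 64)   # 128 = 64 (upsampled) + 64 (skip connection)
        self.out = nn.Conv2d(64, 1, 3, padding=1)

    def forward(self, x):
        e1 = self.d1(x)
        e2 = self.d2(e1)
        e3 = self.d3(e2)
        y = self.u1(e3)
        y = self.u2(torch.cat([y, e2], dim=1))
        y = self.u3(torch.cat([y, e1], dim=1))
        return torch.tanh(self.out(y))

class PatchDiscriminator(nn.Module):
    """PatchGAN: scores overlapping patches of the (CBCT, CT) pair as real/fake."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            down(2, 64), down(64, 128), down(128, 256),
            nn.Conv2d(256, 1, 4, padding=1),   # per-patch real/fake logit
        )

    def forward(self, cbct, ct):
        return self.net(torch.cat([cbct, ct], dim=1))

# One generator step: adversarial loss plus L1 loss between sCT and ground-truth CT.
G, D = UNetGenerator(), PatchDiscriminator()
adv, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
cbct = torch.randn(2, 1, 512, 512)   # dummy batch of CBCT slices
ct = torch.randn(2, 1, 512, 512)     # paired ground-truth CT slices
sct = G(cbct)
pred_fake = D(cbct, sct)
loss_G = adv(pred_fake, torch.ones_like(pred_fake)) + 100.0 * l1(sct, ct)
```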
We used datasets from 50 patients who underwent Gamma Knife treatment to develop a deep learning model that translates CBCT images into high-quality synthetic CT (sCT) images. Paired CBCT and ground-truth CT images from 40 patients were used for training the Pix2Pix model and those from 10 patients for testing, comprising 7484 slices of 512 × 512 pixels. The sCT images were evaluated against ground-truth CT scans using image quality assessment metrics, including the structural similarity index (SSIM), mean absolute error (MAE), root mean square error (RMSE), peak signal-to-noise ratio (PSNR), normalized cross-correlation, and Dice similarity coefficient.
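A sketch of how the reported per-slice metrics could be computed with NumPy and scikit-image is shown below. The HU data range and the bone threshold used for the Dice mask are assumptions for illustration, not necessarily the study's choices.

```python
# Sketch of the image-quality comparison between sCT and ground-truth CT slices.
# hu_range and bone_hu are illustrative assumptions.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate_slice(sct, ct, hu_range=(-1000.0, 3000.0), bone_hu=300.0):
    """Return the quality metrics for one paired sCT/CT slice (HU arrays)."""
    data_range = hu_range[1] - hu_range[0]
    ssim = structural_similarity(ct, sct, data_range=data_range)
    psnr = peak_signal_noise_ratio(ct, sct, data_range=data_range)
    mae = np.mean(np.abs(ct - sct))
    rmse = np.sqrt(np.mean((ct - sct) ** 2))
    # Normalized cross-correlation between the two slices.
    ncc = np.corrcoef(ct.ravel(), sct.ravel())[0, 1]
    # Dice similarity coefficient on a bone-like mask (HU above a threshold).
    m1, m2 = ct > bone_hu, sct > bone_hu
    dice = 2.0 * np.logical_and(m1, m2).sum() / (m1.sum() + m2.sum() + 1e-8)
    return {"SSIM": ssim, "MAE": mae, "RMSE": rmse,
            "PSNR": psnr, "NCC": ncc, "Dice": dice}

# Example with dummy HU slices at the study's 512 x 512 size.
ct = np.random.uniform(-1000, 1500, (512, 512))
sct = ct + np.random.normal(0, 20, ct.shape)
print(evaluate_slice(sct, ct))
```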
The results demonstrate significant improvements in image quality when comparing sCT images to CBCT, with SSIM increasing from 0.85 ± 0.05 to 0.95 ± 0.03 and MAE dropping from 77.37 ± 20.05 to 18.81 ± 7.22 (P < 0.0001 for both). PSNR and RMSE also improved, from 26.50 ± 1.72 to 30.76 ± 2.23 and from 228.52 ± 53.76 to 82.30 ± 23.81, respectively (P < 0.0001).
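The abstract does not name the statistical test behind the reported P values; a minimal sketch of one plausible paired comparison (a Wilcoxon signed-rank test on per-slice metric values) is shown below. The data here are randomly generated stand-ins, not the study's measurements.

```python
# Hypothetical paired comparison of per-slice SSIM for CBCT vs. sCT against CT,
# using a Wilcoxon signed-rank test. Values are simulated for illustration only.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n_slices = 200                                   # dummy number of test slices
ssim_cbct = rng.normal(0.85, 0.05, size=n_slices)
ssim_sct = rng.normal(0.95, 0.03, size=n_slices)

stat, p_value = wilcoxon(ssim_cbct, ssim_sct)
print(f"SSIM (CBCT vs. CT): {ssim_cbct.mean():.2f} ± {ssim_cbct.std():.2f}")
print(f"SSIM (sCT  vs. CT): {ssim_sct.mean():.2f} ± {ssim_sct.std():.2f}")
print(f"Wilcoxon signed-rank p-value: {p_value:.4g}")
```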
The sCT images show reduced noise and artifacts and closely match the CT images in HU values, demonstrating a high degree of similarity to CT and highlighting the potential of deep learning to significantly improve CBCT image quality for radiosurgery applications.