Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, NJ, USA.
Department of Radiological Sciences, University of California, Irvine, CA, USA.
Med Phys. 2021 Jun;48(6):2816-2826. doi: 10.1002/mp.14624. Epub 2021 May 14.
To improve image quality and computed tomography (CT) number accuracy of daily cone beam CT (CBCT) through a deep learning methodology with generative adversarial network.
One hundred fifty paired pelvic CT and CBCT scans were used for model training and validation. An unsupervised deep learning method, a 2.5D pixel-to-pixel (pix2pix) generative adversarial network (GAN) model with feature mapping, was proposed. A total of 12,000 CT-CBCT slice pairs were used for model training, and ten-fold cross validation was applied to verify model robustness. Paired CT-CBCT scans from an additional 15 pelvic patients and 10 head-and-neck (HN) patients, with the CBCT images acquired on a different machine, were used for independent testing. Besides the proposed method, other network architectures were also tested: 2D vs 2.5D; GAN models with vs without feature mapping; GAN models with vs without an additional perceptual loss; and previously reported models such as U-net and cycleGAN with or without identity loss. Image quality of the deep-learning-generated synthetic CT (sCT) images was quantitatively compared against the reference CT (rCT) images using the mean absolute error (MAE) in Hounsfield units (HU) and the peak signal-to-noise ratio (PSNR). Dosimetric calculation accuracy was further evaluated with both photon and proton beams.
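The two image-quality metrics named above have standard definitions; a minimal sketch of both, assuming NumPy arrays of HU values and a conventional 12-bit CT dynamic range for PSNR (the paper's exact data range and function names are not given, so `mae_hu`, `psnr`, and `data_range=4096.0` are illustrative assumptions):

```python
import numpy as np

def mae_hu(sct, rct):
    """Mean absolute error in Hounsfield units between synthetic and reference CT."""
    return np.mean(np.abs(sct.astype(np.float64) - rct.astype(np.float64)))

def psnr(sct, rct, data_range=4096.0):
    """Peak signal-to-noise ratio in dB; data_range assumes a 12-bit HU window."""
    mse = np.mean((sct.astype(np.float64) - rct.astype(np.float64)) ** 2)
    if mse == 0:
        return np.inf  # identical images: PSNR is unbounded
    return 10.0 * np.log10(data_range ** 2 / mse)
```

A higher PSNR and a lower MAE both indicate closer agreement between sCT and rCT, which is how the competing architectures are ranked in the Results.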
The deep-learning-generated sCTs showed improved image quality, with reduced artifact distortion and improved soft-tissue contrast. The proposed 2.5D pix2pix GAN with feature matching (FM) was the best model among all tested methods, producing the highest PSNR and the lowest MAE relative to rCT. The dose distributions demonstrated high accuracy for photon-based planning, yet more work is needed for proton-based treatment. Once trained, the model took 11-12 ms to process one slice and could generate a 3D dCBCT volume (80 slices) in less than a second on an NVIDIA GeForce GTX Titan X GPU (12 GB, Maxwell architecture).
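The feature-matching term credited with the best result penalizes differences between discriminator feature maps for real and generated images, rather than only the final real/fake score. The paper's exact formulation is not reproduced here; a generic pix2pix-style sketch, assuming per-layer feature maps are already extracted as NumPy arrays (the function name and layer averaging are illustrative):

```python
import numpy as np

def feature_matching_loss(real_feats, fake_feats):
    """L1 distance between discriminator feature maps computed on real vs
    generated images, averaged over layers (a generic feature-matching sketch)."""
    per_layer = [np.mean(np.abs(r - f)) for r, f in zip(real_feats, fake_feats)]
    return sum(per_layer) / len(per_layer)
```

Matching intermediate features gives the generator a denser training signal than the adversarial loss alone, which is one common rationale for the FM variant outperforming the plain GAN.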
The proposed deep learning algorithm is promising for improving CBCT image quality efficiently and thus has the potential to support online CBCT-based adaptive radiotherapy.