Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America.
Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States of America.
Phys Med Biol. 2023 Apr 13;68(9):095003. doi: 10.1088/1361-6560/acc721.
CBCTs in image-guided radiotherapy provide crucial anatomical information for patient setup and plan evaluation. Longitudinal CBCT image registration can quantify inter-fractional anatomic changes, e.g. tumor shrinkage and daily OAR variation, throughout the course of treatment. The purpose of this study is to propose an unsupervised deep learning-based CBCT-CBCT deformable image registration method that enables quantitative analysis of anatomic variation.

The proposed deformable registration workflow consists of training and inference stages that share the same feed-forward path through a spatial transformation-based network (STN). The STN consists of a global generative adversarial network (GlobalGAN) and a local GAN (LocalGAN) that predict the coarse- and fine-scale motions, respectively. The network was trained by minimizing an image similarity loss and a deformation vector field (DVF) regularization loss, without supervision from ground-truth DVFs. During the inference stage, patches of the local DVF were predicted by the trained LocalGAN and fused to form a whole-image DVF. This local whole-image DVF was then combined with the GlobalGAN-generated DVF to obtain the final DVF. The proposed method was evaluated using 100 fractional CBCTs from 20 abdominal cancer patients in the experiments and 105 fractional CBCTs from a holdout cohort of 21 different abdominal cancer patients.

Qualitatively, the registration results show good alignment between the deformed CBCT images and the target CBCT image. Quantitatively, the average target registration error calculated on fiducial markers and manually identified landmarks was 1.91 ± 1.18 mm. The average mean absolute error and normalized cross-correlation between the deformed CBCT and the target CBCT were 33.42 ± 7.48 HU and 0.94 ± 0.04, respectively.
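The unsupervised training objective described above — an image similarity term plus a DVF smoothness regularizer, with no ground-truth DVFs — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the warp uses nearest-neighbor sampling as a stand-in for a differentiable spatial transformer, MAE stands in for the similarity metric, and the function names are hypothetical.

```python
import numpy as np

def warp_image(image, dvf):
    """Warp a 2D image with a dense displacement field via nearest-neighbor
    sampling (a simplified stand-in for a differentiable spatial transformer)."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.round(ys + dvf[0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + dvf[1]).astype(int), 0, w - 1)
    return image[src_y, src_x]

def registration_loss(moving, target, dvf, lam=0.01):
    """Unsupervised loss: image similarity (here MAE) plus DVF smoothness
    regularization (mean squared spatial gradient); no ground-truth DVF needed."""
    warped = warp_image(moving, dvf)
    similarity = np.mean(np.abs(warped - target))
    # Finite-difference gradients of each displacement component.
    gy = np.diff(dvf, axis=1)
    gx = np.diff(dvf, axis=2)
    smoothness = np.mean(gy ** 2) + np.mean(gx ** 2)
    return similarity + lam * smoothness
```

With an identity (all-zero) DVF and identical moving/target images, both terms vanish, which is the sanity check one would run before training.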
In summary, an unsupervised deep learning-based CBCT-CBCT registration method is proposed and its feasibility and performance in fractionated image-guided radiotherapy is investigated. This promising registration method could provide fast and accurate longitudinal CBCT alignment to facilitate inter-fractional anatomic changes analysis and prediction.
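The inference stage above — per-patch local DVFs fused into a whole-image DVF, then combined with the global DVF — can be sketched as follows. The fusion rule (averaging in overlap regions) and the combination rule (simple addition) are assumptions for illustration; the abstract does not specify either, and all names here are hypothetical.

```python
import numpy as np

def fuse_patch_dvfs(patches, origins, shape, patch_size):
    """Fuse overlapping per-patch DVFs into a whole-image DVF by averaging
    predictions wherever patches overlap (assumed fusion rule)."""
    accum = np.zeros((2,) + shape)
    count = np.zeros(shape)
    for dvf, (oy, ox) in zip(patches, origins):
        accum[:, oy:oy + patch_size, ox:ox + patch_size] += dvf
        count[oy:oy + patch_size, ox:ox + patch_size] += 1
    # Avoid division by zero in uncovered regions (displacement stays zero).
    return accum / np.maximum(count, 1)

def combine_global_local(global_dvf, local_dvf):
    """Combine the coarse (GlobalGAN) and fine (LocalGAN) displacement fields.
    Simple addition is shown as a placeholder for the unspecified combination."""
    return global_dvf + local_dvf
```

For example, two 2x2 patches placed one column apart each contribute their displacement in the shared column, and the fused value there is their average.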