Wang Zhixing, Shi Chengyu, Wong Carson, Oderinde Seyi M, Watkins William T, Qing Kun, Liu Bo, Williams Terence M, Liu An, Han Chunhui
Department of Radiation Oncology, City of Hope, Duarte, CA, USA.
RefleXion Medical, Hayward, CA, USA.
Technol Cancer Res Treat. 2025 Jan-Dec;24:15330338251344198. doi: 10.1177/15330338251344198. Epub 2025 May 21.
Introduction
This study aims to evaluate auto-segmentation results using deep learning-based auto-segmentation models on different online CT imaging modalities in image-guided radiotherapy.
Methods
Phantom studies were first performed to benchmark image quality. Daily CT images for sixty patients were retrospectively retrieved from fan-beam kilovoltage CT (kVCT), kV cone-beam CT (kV-CBCT), and megavoltage CT (MVCT) scans. For each imaging modality, half of the patients received CT scans in the pelvic region, while the other half received scans in the thoracic region. Deep learning auto-segmentation models using a convolutional neural network algorithm were used to generate organs-at-risk contours. Quantitative metrics were calculated to compare auto-segmentation results with manual contours.
Results
The auto-segmentation contours on kVCT images showed statistically significant differences in Dice similarity coefficient (DSC), Jaccard similarity coefficient, sensitivity index, inclusiveness index, and the 95th-percentile Hausdorff distance, compared to those on kV-CBCT and MVCT images for most major organs. In the pelvic region, the largest difference in DSC was observed for the bowel volume, with an average DSC of 0.84 ± 0.05, 0.35 ± 0.23, and 0.48 ± 0.27 for kVCT, kV-CBCT, and MVCT images, respectively (P-value < 0.05); in the thoracic region, the largest difference in DSC was found for the esophagus, with an average DSC of 0.63 ± 0.16, 0.18 ± 0.13, and 0.22 ± 0.08 for kVCT, kV-CBCT, and MVCT images, respectively (P-value < 0.05).
Conclusion
Deep learning-based auto-segmentation models showed better agreement with manual contouring when using kVCT images compared to kV-CBCT or MVCT images. However, manual correction remains necessary after auto-segmentation with all imaging modalities, particularly for organs with limited contrast from surrounding tissues. These findings underscore the potential and limitations of applying deep learning-based auto-segmentation models for adaptive radiotherapy.
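The abstract reports overlap metrics (DSC, Jaccard, sensitivity, inclusiveness) without defining them; the paper's own implementation is not shown. As a minimal sketch, assuming binary voxel masks for the auto-segmented and manual contours (the function and variable names here are illustrative, not from the paper), these metrics follow directly from set overlap:

```python
import numpy as np

def overlap_metrics(auto_mask: np.ndarray, manual_mask: np.ndarray):
    """Overlap metrics between a binary auto-segmentation mask and a
    manual reference mask, as conventionally defined:
      DSC           = 2|A ∩ M| / (|A| + |M|)
      Jaccard       = |A ∩ M| / |A ∪ M|
      sensitivity   = |A ∩ M| / |M|   (fraction of manual contour recovered)
      inclusiveness = |A ∩ M| / |A|   (fraction of auto contour inside manual)
    """
    a = auto_mask.astype(bool)
    m = manual_mask.astype(bool)
    inter = np.logical_and(a, m).sum()
    union = np.logical_or(a, m).sum()
    return {
        "dsc": 2.0 * inter / (a.sum() + m.sum()),
        "jaccard": inter / union,
        "sensitivity": inter / m.sum(),
        "inclusiveness": inter / a.sum(),
    }

# Toy 2D example: two 4x4 squares overlapping in a 2x2 corner.
auto = np.zeros((10, 10), dtype=bool)
manual = np.zeros((10, 10), dtype=bool)
auto[2:6, 2:6] = True    # 16 voxels
manual[4:8, 4:8] = True  # 16 voxels, 4 shared with auto
metrics = overlap_metrics(auto, manual)
# → dsc = 0.25, jaccard ≈ 0.143, sensitivity = 0.25, inclusiveness = 0.25
```

In practice these would be computed slice-stacked over 3D contour masks resampled to a common grid; the 95th-percentile Hausdorff distance additionally requires surface-point distances and is not shown here.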