Ahmed Abdella M, Madden Levi, Stewart Maegan, Chow Brian V Y, Mylonas Adam, Brown Ryan, Metz Gabrielle, Shepherd Meegan, Coronel Carlito, Ambrose Leigh, Turk Alex, Crispin Maiko, Kneebone Andrew, Hruby George, Keall Paul, Booth Jeremy T
Northern Sydney Cancer Centre, Royal North Shore Hospital, Reserve Rd, St Leonards, NSW 2065, Australia.
School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Camperdown, NSW 2050, Australia.
Phys Imaging Radiat Oncol. 2025 Jun 6;35:100794. doi: 10.1016/j.phro.2025.100794. eCollection 2025 Jul.
In pancreatic stereotactic body radiotherapy (SBRT), accurate motion management is crucial for the safe delivery of high doses per fraction. Intra-fraction tracking with magnetic resonance imaging (MRI) guidance for gated SBRT has shown potential for improved local control. Visualisation of the pancreas (and surrounding organs) remains challenging in intra-fraction kilovoltage (kV) imaging, requiring implanted fiducials. In this study, we investigate patient-specific deep-learning approaches to track the gross tumour volume (GTV), pancreas head, and whole pancreas in intra-fraction kV images.
Conditional generative adversarial networks were trained and tested for contour prediction on intra-fraction 2D kV images using data from 25 patients enrolled in an ethics-approved pancreatic SBRT trial. Labelled digitally reconstructed radiographs (DRRs) were generated from contoured planning computed tomography scans (CT-DRRs) and cone-beam CTs (CBCT-DRRs). A population model was trained using CT-DRRs of 19 patients. Two patient-specific model types were created for six additional patients by fine-tuning the population model using CBCT-DRRs (CBCT-models) or CT-DRRs (CT-models) acquired in exhale breath-hold. Model predictions on unseen triggered kV images from the corresponding six patients were evaluated against projected contours using the Dice similarity coefficient (DSC), centroid error (CE), average Hausdorff distance (AHD), and 95th-percentile Hausdorff distance (HD95).
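For illustration only, the sketch below shows one common way the reported metrics (DSC, CE, AHD, HD95) can be computed on 2D binary contour masks with NumPy/SciPy; the function names, the isotropic pixel-spacing argument, and the specific AHD/HD95 conventions are assumptions, not the authors' implementation.

import numpy as np
from scipy.ndimage import binary_erosion, center_of_mass
from scipy.spatial.distance import cdist

def boundary_points_mm(mask, spacing_mm):
    """Coordinates (row, col) of mask boundary pixels, scaled to mm."""
    mask = mask.astype(bool)
    boundary = mask & ~binary_erosion(mask)
    return np.argwhere(boundary) * spacing_mm

def dice(pred, ref):
    """Dice similarity coefficient of two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

def centroid_error_mm(pred, ref, spacing_mm):
    """Euclidean distance between mask centroids, in mm."""
    cp = np.array(center_of_mass(pred.astype(float)))
    cr = np.array(center_of_mass(ref.astype(float)))
    return float(np.linalg.norm((cp - cr) * spacing_mm))

def hausdorff_metrics_mm(pred, ref, spacing_mm):
    """Average Hausdorff distance and HD95 between boundaries, in mm
    (symmetric definitions; one of several conventions in the literature)."""
    p = boundary_points_mm(pred, spacing_mm)
    r = boundary_points_mm(ref, spacing_mm)
    d = cdist(p, r)                            # pairwise boundary distances
    d_pr, d_rp = d.min(axis=1), d.min(axis=0)  # directed surface distances
    ahd = 0.5 * (d_pr.mean() + d_rp.mean())
    hd95 = max(np.percentile(d_pr, 95), np.percentile(d_rp, 95))
    return ahd, hd95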
The mean ± 1 standard deviation (SD) DSC was 0.86 ± 0.09 for the CBCT-models and 0.78 ± 0.12 for the CT-models. For AHD and CE, the CBCT-models predicted contours within 2.0 mm ≥90.3 % of the time, while HD95 was within 5.0 mm ≥90.0 % of the time; prediction time was 29.2 ± 3.7 ms per contour.
The patient-specific CBCT-models outperformed the CT-models and predicted the three contours with 90th-percentile error ≤2.0 mm, indicating potential for real-time clinical application.