Shojaei Mehdi, Eiben Björn, McClelland Jamie R, Nill Simeon, Dunlop Alex, Hunt Arabella, Ng-Cheng-Hin Brian, Oelfke Uwe
Joint Department of Physics, Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London, United Kingdom.
UCL Hawkes Institute and Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom.
Phys Med Biol. 2025 Jan 30;70(3):035015. doi: 10.1088/1361-6560/adabac.
This study aims to develop and evaluate a fast and robust deep learning-based auto-segmentation approach for organs at risk in MRI-guided radiotherapy of pancreatic cancer, to overcome the problem of time-intensive manual contouring in online adaptive workflows. The research focuses on implementing novel data augmentation techniques to address the challenges posed by limited datasets.

This study was conducted in two phases. In phase I, we selected and customized the best-performing segmentation model among ResU-Net, SegResNet, and nnU-Net, using 43 balanced 3DVane images from 10 patients with 5-fold cross-validation. Phase II focused on optimizing the chosen model through two advanced data augmentation approaches to improve performance and generalizability by increasing the effective input dataset: (1) a novel structure-guided deformation-based augmentation approach (sgDefAug) and (2) a generative adversarial network-based method using a cycleGAN (GANAug). These were compared with comprehensive conventional augmentations (ConvAug). The approaches were evaluated using geometric (Dice score, average surface distance (ASD)) and dosimetric (D2% and D50% from dose-volume histograms) criteria.

The nnU-Net framework demonstrated superior performance (mean Dice: 0.78 ± 0.10, mean ASD: 3.92 ± 1.94 mm) compared to the other models. The sgDefAug and GANAug approaches significantly improved model performance over ConvAug, with sgDefAug demonstrating slightly superior results (mean Dice: 0.84 ± 0.09, mean ASD: 3.14 ± 1.79 mm). The proposed methodology produced auto-contours in under 30 s, with 75% of organs showing less than 1% difference in the D2% and D50% dose criteria compared to ground truth.

The integration of the nnU-Net framework with our proposed novel augmentation technique effectively addresses the challenges of limited datasets and stringent time constraints in online adaptive radiotherapy for pancreatic cancer. Our approach offers a promising solution for streamlining online adaptive workflows and represents a substantial step forward in the practical application of auto-segmentation techniques in clinical radiotherapy settings.
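The abstract reports two geometric evaluation criteria, the Dice score and the average surface distance (ASD). The sketch below is an illustrative implementation of these standard metrics for binary 3D masks (not the authors' code); the array names, voxel spacing, and the toy volumes in the usage example are assumptions for demonstration only.

```python
# Illustrative sketch, assuming binary 3D numpy masks and voxel spacing in mm;
# this is not the evaluation code used in the study.
import numpy as np
from scipy import ndimage


def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice = 2|A and B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom


def average_surface_distance(pred, gt, spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric mean distance (mm) between the surfaces of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    # Surface voxels: mask minus its morphological erosion.
    surf_pred = pred & ~ndimage.binary_erosion(pred)
    surf_gt = gt & ~ndimage.binary_erosion(gt)
    # Distance (in mm) from every voxel to the nearest surface voxel.
    dist_to_gt = ndimage.distance_transform_edt(~surf_gt, sampling=spacing)
    dist_to_pred = ndimage.distance_transform_edt(~surf_pred, sampling=spacing)
    d_pred_to_gt = dist_to_gt[surf_pred]   # predicted surface -> ground-truth surface
    d_gt_to_pred = dist_to_pred[surf_gt]   # ground-truth surface -> predicted surface
    return float(np.concatenate([d_pred_to_gt, d_gt_to_pred]).mean())


if __name__ == "__main__":
    # Hypothetical toy example: a slightly under-segmented cube with 1.5 mm voxels.
    gt = np.zeros((40, 40, 40), dtype=bool)
    gt[10:30, 10:30, 10:30] = True
    pred = np.zeros_like(gt)
    pred[12:30, 10:30, 10:30] = True
    print("Dice:", round(dice_score(pred, gt), 3))
    print("ASD (mm):", round(average_surface_distance(pred, gt, (1.5, 1.5, 1.5)), 2))
```

The dosimetric criteria (D2% and D50%) would additionally require the planned dose grid and the organ contours to extract dose-volume histogram points, which is outside the scope of this geometric sketch.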