De Benetti Francesca, Delopoulos Nikolaos, Belka Claus, Corradini Stefanie, Navab Nassir, Wendler Thomas, Albarqouni Shadi, Landry Guillaume, Kurz Christopher
Chair for Computer-Aided Medical Procedures and Augmented Reality, Technical University of Munich, Garching, 85748, Germany.
Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, 81377, Germany.
Phys Imaging Radiat Oncol. 2025 Apr 17;34:100766. doi: 10.1016/j.phro.2025.100766. eCollection 2025 Apr.
Conventionally, the contours annotated during magnetic resonance-guided radiation therapy (MRgRT) planning are manually corrected during the RT fractions, which is a time-consuming task. Deep learning-based segmentation can be helpful, but the available patient-specific approaches require training at least one model per patient, which is computationally expensive. In this work, we introduced a novel framework that integrates fraction MR volumes and planning segmentation maps to generate robust fraction MR segmentations without the need for patient-specific retraining.
The dataset included 69 patients (222 fraction MRs in total) treated with MRgRT for abdominal cancers on a 0.35 T MR-Linac, with annotations for eight clinically relevant abdominal structures (aorta, bowel, duodenum, left kidney, right kidney, liver, spinal canal, and stomach). Within the framework, we implemented two alternative models capable of generating patient-specific segmentations using the planning segmentation as prior information. The first is a 3D UNet with a dual-channel input (i.e., fraction MR and planning segmentation map); the second is a modified 3D UNet with a double encoder for the same two inputs.
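To make the difference between the two input strategies concrete, the following is a minimal sketch (not the authors' code) assuming PyTorch, with simplified single-stage encoders standing in for full 3D UNets; class names such as DualChannelNet and DualEncoderNet, and the choice of nine output classes (eight structures plus background), are illustrative assumptions.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3D convolutions with ReLU: the basic building block of a UNet encoder stage.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class DualChannelNet(nn.Module):
    """Variant 1: fraction MR and planning segmentation map stacked as two input channels."""
    def __init__(self, n_classes=9):  # 8 structures + background (assumption)
        super().__init__()
        self.encoder = conv_block(2, 16)  # two-channel input
        self.head = nn.Conv3d(16, n_classes, kernel_size=1)

    def forward(self, mr, plan_seg):
        x = torch.cat([mr, plan_seg], dim=1)  # concatenate along the channel axis
        return self.head(self.encoder(x))

class DualEncoderNet(nn.Module):
    """Variant 2: separate encoders for the MR volume and the planning segmentation,
    fused before the segmentation head."""
    def __init__(self, n_classes=9):
        super().__init__()
        self.mr_encoder = conv_block(1, 16)
        self.seg_encoder = conv_block(1, 16)
        self.head = nn.Conv3d(32, n_classes, kernel_size=1)

    def forward(self, mr, plan_seg):
        fused = torch.cat([self.mr_encoder(mr), self.seg_encoder(plan_seg)], dim=1)
        return self.head(fused)

if __name__ == "__main__":
    mr = torch.randn(1, 1, 32, 64, 64)        # (batch, channel, depth, height, width)
    plan_seg = torch.randn(1, 1, 32, 64, 64)  # planning segmentation rendered as an image-like volume
    print(DualChannelNet()(mr, plan_seg).shape)
    print(DualEncoderNet()(mr, plan_seg).shape)
```

In both variants the planning segmentation acts as prior anatomical information; they differ only in whether it is fused at the input (channel concatenation) or after a separate encoding path.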
On average, the two models with prior anatomical information outperformed the conventional population-based 3D UNet, with an increase in Dice similarity coefficient. In particular, the dual-channel input 3D UNet outperformed the one with the double encoder, especially when the alignment between the two input channels was satisfactory.
The proposed workflow was able to generate accurate patient-specific segmentations while avoiding the training of one model per patient and allowing for seamless integration into clinical practice.