Yin Ping, Chen Weidao, Fan Qianrui, Yu Ruize, Liu Xia, Liu Tao, Wang Dawei, Hong Nan
Department of Radiology, Peking University People's Hospital, 11 Xizhimen Nandajie, Xicheng District, Beijing, 100044, P. R. China.
Institute of Research, InferVision, Ocean International Center, Chaoyang District, Beijing, 100025, China.
Cancer Imaging. 2025 Mar 13;25(1):34. doi: 10.1186/s40644-025-00850-8.
Accurate segmentation of pelvic and sacral tumors (PSTs) in multi-sequence magnetic resonance imaging (MRI) is essential for effective treatment and surgical planning.
To develop a deep learning (DL) framework for efficient segmentation of PSTs from multi-sequence MRI.
This study included a total of 616 patients with pathologically confirmed PSTs between April 2011 and May 2022. We proposed a practical DL framework that integrates a 2.5D U-Net and MobileNetV2 for automatic PST segmentation, combined with a fast annotation strategy, across multiple MRI sequences, including T1-weighted (T1-w), T2-weighted (T2-w), diffusion-weighted imaging (DWI), and contrast-enhanced T1-weighted (CET1-w). Two distinct models, the All-sequence segmentation model and the T2-fusion segmentation model, were developed. During the implementation of our DL models, all regions of interest (ROIs) in the training set were coarsely labeled, while ROIs in the test set were finely labeled. Dice score and intersection over union (IoU) were used to evaluate model performance.
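The two evaluation metrics are standard overlap measures between a predicted and a reference mask. As a minimal sketch (not the authors' code), they can be computed from binary NumPy masks as follows:

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

def iou_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """IoU = |A ∩ B| / |A ∪ B| for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / (union + eps)
```

The small `eps` term avoids division by zero when both masks are empty; this convention (and whether empty-vs-empty counts as a perfect score) is an implementation choice the abstract does not specify.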
The 2.5D MobileNetV2 architecture demonstrated improved segmentation performance compared to 2D and 3D U-Net models, with a Dice score of 0.741 and an IoU of 0.615. The All-sequence model, which was trained using a fusion of four MRI sequences (T1-w, CET1-w, T2-w, and DWI), exhibited superior performance with Dice scores of 0.659 for T1-w, 0.763 for CET1-w, 0.819 for T2-w, and 0.723 for DWI as inputs. In contrast, the T2-fusion segmentation model, which used T2-w and CET1-w sequences as inputs, achieved a Dice score of 0.833 and an IoU value of 0.719.
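A 2.5D network processes one slice at a time but gives the model through-plane context by stacking neighboring slices as input channels. The abstract does not state how many neighbors were used; the sketch below assumes the common choice of one slice on each side (three channels), with edge slices clamped:

```python
import numpy as np

def make_2p5d_input(volume: np.ndarray, idx: int, context: int = 1) -> np.ndarray:
    """Build a 2.5D input for slice `idx` of a (slices, H, W) volume.

    Stacks the target slice with `context` neighbors on each side as
    channels, clamping indices at the volume boundaries. `context` is an
    assumed hyperparameter, not taken from the paper.
    """
    n = volume.shape[0]
    ids = [min(max(idx + off, 0), n - 1) for off in range(-context, context + 1)]
    return np.stack([volume[i] for i in ids], axis=0)  # (2*context+1, H, W)
```

The resulting multi-channel slice can then be fed to any 2D backbone (here, MobileNetV2 inside a U-Net), which keeps memory cost close to a 2D model while recovering some 3D context.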
In this study, we developed a practical DL framework for PST segmentation via multi-sequence MRI, which reduces the dependence on data annotation. These models offer solutions for various clinical scenarios and have significant potential for wide-ranging applications.