Department of Radiology, Seoul St. Mary's Hospital, The Catholic University of Korea, College of Medicine, Seoul, Republic of Korea.
Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Centre, 43 Olympic-Ro 88, Songpa-Gu, Seoul, 05505, Republic of Korea.
Eur Radiol. 2024 Aug;34(8):5389-5400. doi: 10.1007/s00330-024-10585-y. Epub 2024 Jan 20.
To evaluate deep learning-based segmentation models for oropharyngeal squamous cell carcinoma (OPSCC) on CT and MRI using the self-configuring nnU-Net framework.
This single-center retrospective study included 91 patients with OPSCC. The patients were grouped into the development (n = 56), test 1 (n = 13), and test 2 (n = 22) cohorts. In the development cohort, OPSCC was manually segmented on CT, MR, and co-registered CT-MR images, which served as the ground truth. Models were then trained on the multimodal and multichannel input images using a self-configuring nnU-Net. As evaluation metrics, the Dice similarity coefficient (DSC) and mean Hausdorff distance (HD) were calculated for the test cohorts. Pearson's correlation and Bland-Altman analyses were performed between ground-truth and predicted volumes. Intraclass correlation coefficients (ICCs) of radiomic features were calculated to assess reproducibility.
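The two evaluation metrics above can be sketched on toy binary masks. This is a minimal illustration only: the masks are hypothetical 2-D examples, and the mean-distance formula below is one common reading of "mean Hausdorff distance" (mean symmetric point-to-point distance), which may differ from the exact definition used in the study.

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice_coefficient(gt, pred):
    """Dice similarity coefficient (DSC) between two binary masks."""
    gt, pred = gt.astype(bool), pred.astype(bool)
    inter = np.logical_and(gt, pred).sum()
    denom = gt.sum() + pred.sum()
    return 2.0 * inter / denom if denom else 1.0

def mean_hausdorff(gt, pred, spacing=(1.0, 1.0)):
    """Mean symmetric distance between mask point sets, in mm.

    A simple stand-in for the paper's 'mean Hausdorff distance';
    the study's exact definition is an assumption here.
    """
    a = np.argwhere(gt) * np.asarray(spacing)
    b = np.argwhere(pred) * np.asarray(spacing)
    d = cdist(a, b)  # pairwise distances between the two point sets
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Hypothetical 10x10 masks: two overlapping 5x5 squares.
gt = np.zeros((10, 10), dtype=bool); gt[2:7, 2:7] = True
pred = np.zeros((10, 10), dtype=bool); pred[3:8, 3:8] = True
print(round(dice_coefficient(gt, pred), 2))  # → 0.64
```

A DSC of 1.0 indicates perfect overlap with the ground truth; lower mean HD indicates predicted boundaries closer to the manual segmentation.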
All models achieved robust segmentation performance, with DSCs of 0.64 ± 0.33 (CT), 0.67 ± 0.27 (MR), and 0.65 ± 0.29 (CT-MR) in test cohort 1 and 0.57 ± 0.31 (CT), 0.77 ± 0.08 (MR), and 0.73 ± 0.18 (CT-MR) in test cohort 2. No significant differences in DSC were found among the models. The HDs of the CT-MR (1.57 ± 1.06 mm) and MR (1.36 ± 0.61 mm) models were significantly lower than that of the CT model (3.48 ± 5.0 mm) (p = 0.037 and p = 0.014, respectively). The correlation coefficients between ground-truth and predicted volumes for the CT, MR, and CT-MR models were 0.88, 0.93, and 0.90, respectively. MR models demonstrated excellent mean ICCs of radiomic features (0.91-0.93).
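The volume-agreement analyses reported above can be sketched as follows. The per-patient volumes here are hypothetical placeholder values, not the study's data; the sketch shows only the form of the Pearson correlation and Bland-Altman computations.

```python
import numpy as np

# Hypothetical per-patient tumour volumes (mL); illustrative values only.
gt_vol = np.array([12.1, 8.4, 20.3, 5.6, 15.0])
pred_vol = np.array([11.5, 9.0, 19.1, 6.2, 14.2])

# Pearson correlation between ground-truth and predicted volumes.
r = np.corrcoef(gt_vol, pred_vol)[0, 1]

# Bland-Altman analysis: mean bias and 95% limits of agreement.
diff = pred_vol - gt_vol
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)

print(f"r = {r:.3f}, bias = {bias:.2f} mL, "
      f"LoA = ({loa[0]:.2f}, {loa[1]:.2f}) mL")
```

A high r with a near-zero bias and narrow limits of agreement would indicate that predicted volumes can substitute for manual ones, which is the question the Bland-Altman analysis addresses.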
The self-configuring nnU-Net demonstrated reliable and accurate segmentation of OPSCC on CT and MRI. The multimodal CT-MR model showed promising results for simultaneous segmentation on CT and MRI.
Deep learning-based automatic detection and segmentation of oropharyngeal squamous cell carcinoma on pre-treatment CT and MRI would facilitate radiologic response assessment and radiotherapy planning.
• The nnU-Net framework produced reliable and accurate segmentation of OPSCC on CT and MRI.
• MR and CT-MR models showed higher DSCs and lower Hausdorff distances than the CT model.
• Correlation coefficients between ground-truth and predicted segmentation volumes were high in all three models.