
Comparing different CT, PET and MRI multi-modality image combinations for deep learning-based head and neck tumor segmentation.

Affiliations

Department of Clinical Medicine, Aarhus University, Aarhus, Denmark.

Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark.

Publication information

Acta Oncol. 2021 Nov;60(11):1399-1406. doi: 10.1080/0284186X.2021.1949034. Epub 2021 Jul 15.

Abstract

BACKGROUND

Manual delineation of gross tumor volume (GTV) is essential for radiotherapy treatment planning, but it is time-consuming and suffers from inter-observer variability (IOV). In the clinic, CT, PET, and MRI are used to improve delineation accuracy owing to their complementary characteristics. This study aimed to investigate deep learning as an aid to GTV delineation in head and neck squamous cell carcinoma (HNSCC) by comparing various modality combinations.

MATERIALS AND METHODS

This retrospective study included 153 patients with HNSCC at multiple sites, together with their planning CT, PET, and MRI (T1-weighted and T2-weighted). Clinical delineations of the gross tumor volume (GTV-T) and involved lymph nodes (GTV-N) were collected as the ground truth. The dataset was randomly divided into 92 patients for training, 31 for validation, and 30 for testing. We applied a residual 3D UNet as the deep learning architecture and independently trained it with four different modality combinations (CT-PET-MRI, CT-MRI, CT-PET, and PET-MRI). Additionally, analogous to a post-processing step, an average fusion of the three bi-modality combinations (CT-PET, CT-MRI, and PET-MRI) was produced as an ensemble. Segmentation accuracy was evaluated on the test set using the Dice similarity coefficient (Dice), the 95th percentile Hausdorff Distance (HD95), and the Mean Surface Distance (MSD).
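The average-fusion ensemble and the Dice metric described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the ensemble averages the voxel-wise probability maps of the three bi-modality networks before thresholding, and all function names and toy volumes are hypothetical.

```python
import numpy as np

def ensemble_average(prob_maps, threshold=0.5):
    """Average per-model probability maps voxel-wise, then binarize."""
    fused = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return (fused >= threshold).astype(np.uint8)

def dice(pred, gt, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

# Toy 3-D volumes standing in for the three bi-modality model outputs
# and the clinical ground-truth mask.
gt = np.zeros((4, 4, 4), dtype=np.uint8)
gt[1:3, 1:3, 1:3] = 1          # a small cubic "tumor"
p1 = gt * 0.9                   # confident, correct model
p2 = gt * 0.6                   # weaker but correct model
p3 = np.full(gt.shape, 0.2)     # uninformative model
fused = ensemble_average([p1, p2, p3])
print(round(dice(fused, gt), 3))  # → 1.0 (fused mask matches gt exactly)
```

Averaging probabilities before thresholding lets confident models outvote an uninformative one, which is one plausible reason the ensemble in the study outperformed the individual bi-modality networks.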

RESULTS

All imaging combinations that included PET yielded similar average scores, in the range Dice: 0.72-0.74, HD95: 8.8-9.5 mm, MSD: 2.6-2.8 mm. Only CT-MRI scored lower, with Dice: 0.58, HD95: 12.9 mm, MSD: 3.7 mm. The average fusion of the three bi-modality combinations reached Dice: 0.74, HD95: 7.9 mm, MSD: 2.4 mm.

CONCLUSION

Multi-modality deep learning-based auto-segmentation of HNSCC GTV was demonstrated, and inclusion of the PET image was shown to be crucial. Training on combined CT, PET, and MRI data provided limited improvement over CT-PET and PET-MRI alone. However, combining the three bi-modality trained networks into an ensemble showed promising improvements.

