Weaving attention U-net: A novel hybrid CNN and attention-based method for organs-at-risk segmentation in head and neck CT images.

Affiliations

Department of Computer Science and Engineering, Washington University, St. Louis, Missouri, USA.

Department of Radiation Oncology, Washington University School of Medicine, St. Louis, Missouri, USA.

Publication

Med Phys. 2021 Nov;48(11):7052-7062. doi: 10.1002/mp.15287. Epub 2021 Oct 26.

Abstract

PURPOSE

In radiotherapy planning, manual contouring is labor-intensive and time-consuming. Accurate and robust automated segmentation models can improve both efficiency and treatment outcomes. We aim to develop a novel hybrid deep learning approach, combining convolutional neural networks (CNNs) and the self-attention mechanism, for rapid and accurate multi-organ segmentation on head and neck computed tomography (CT) images.

METHODS

Head and neck CT images with manual contours of 115 patients were retrospectively collected and used. We set the training/validation/testing ratio to 81/9/25 and used the 10-fold cross-validation strategy to select the best model parameters. The proposed hybrid model segmented 10 organs-at-risk (OARs) altogether for each case. The performance of the model was evaluated by three metrics, that is, the Dice Similarity Coefficient (DSC), Hausdorff distance 95% (HD95), and mean surface distance (MSD). We also tested the performance of the model on the head and neck 2015 challenge dataset and compared it against several state-of-the-art automated segmentation algorithms.
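The Dice Similarity Coefficient used above is the standard overlap measure DSC = 2|A ∩ B| / (|A| + |B|) between a predicted and a ground-truth mask. As a generic illustration (not the authors' evaluation code), it can be computed on binary masks as follows:

```python
import numpy as np

def dice_similarity_coefficient(pred, gt):
    """Dice Similarity Coefficient between two binary masks.

    DSC = 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical).
    """
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    if total == 0:  # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * intersection / total

# Toy example: two overlapping 2D masks
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True  # 4 voxels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True  # 6 voxels
print(dice_similarity_coefficient(a, b))  # 2*4 / (4+6) = 0.8
```

The distance-based metrics (HD95, MSD) are computed between the contour surfaces rather than on voxel overlap, so they penalize boundary errors that DSC can hide.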

RESULTS

The proposed method generated contours that closely resemble the ground truth for the 10 OARs. On the head and neck 2015 challenge dataset, the DSC scores were 0.91 ± 0.02, 0.73 ± 0.10, 0.95 ± 0.03, 0.76 ± 0.08, 0.79 ± 0.05, 0.87 ± 0.05, 0.86 ± 0.08, 0.87 ± 0.03, and 0.87 ± 0.07 for the brain stem, chiasm, mandible, left/right optic nerves, left/right submandibular glands, and left/right parotids, respectively. These results demonstrate that the proposed weaving attention U-net (WAU-net) achieves superior or comparable performance on head and neck CT image segmentation.

CONCLUSIONS

We developed a deep learning approach that integrates the merits of CNNs and the self-attention mechanism. The proposed WAU-net can efficiently capture local and global dependencies and achieves state-of-the-art performance on the head and neck multi-organ segmentation task.
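The abstract does not detail the WAU-net architecture itself, but the self-attention mechanism it builds on can be sketched generically. In the minimal example below (all names and dimensions are hypothetical, not from the paper), each flattened feature-map position attends to every other position, which is how attention captures the global dependencies that a fixed-size convolution kernel cannot:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feats, w_q, w_k, w_v):
    """Scaled dot-product self-attention over spatial positions.

    feats : (N, C) array — N flattened feature-map positions, C channels.
    Each output row is a weighted sum over *all* N positions.
    """
    q, k, v = feats @ w_q, feats @ w_k, feats @ w_v
    scores = q @ k.T / np.sqrt(k.shape[1])  # (N, N) pairwise affinities
    attn = softmax(scores, axis=-1)         # each row sums to 1
    return attn @ v                         # (N, D) attended features

rng = np.random.default_rng(0)
n, c, d = 16, 8, 8  # e.g. a 4x4 feature map flattened to 16 positions
feats = rng.standard_normal((n, c))
w_q, w_k, w_v = (rng.standard_normal((c, d)) for _ in range(3))
out = self_attention(feats, w_q, w_k, w_v)
print(out.shape)  # (16, 8)
```

A hybrid model in this spirit would interleave such attention layers with convolutional blocks, letting convolutions model local texture while attention relates distant organs in the same slice.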

