Prospectively-validated deep learning model for segmenting swallowing and chewing structures in CT.

Affiliations

Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States of America.

Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, United States of America.

Publication Information

Phys Med Biol. 2022 Jan 17;67(2). doi: 10.1088/1361-6560/ac4000.

Abstract

Delineating swallowing and chewing structures aids in radiotherapy (RT) treatment planning to limit dysphagia, trismus, and speech dysfunction. We aim to develop an accurate and efficient method to automate this process. CT scans of 242 head and neck (H&N) cancer patients acquired from 2004 to 2009 at our institution were used to develop auto-segmentation models for the masseters, medial pterygoids, larynx, and pharyngeal constrictor muscle using DeepLabV3+. A cascaded framework was used, wherein models were trained sequentially to spatially constrain each structure group based on prior segmentations. Additionally, an ensemble of models combining contextual information from axial, coronal, and sagittal views was used to improve segmentation accuracy. Prospective evaluation was conducted by measuring the amount of manual editing required in 91 H&N CT scans acquired February-May 2021. Medians and inter-quartile ranges of Dice similarity coefficients (DSC) computed on the retrospective testing set (n = 24) were 0.87 (0.85-0.89) for the masseters, 0.80 (0.79-0.81) for the medial pterygoids, 0.81 (0.79-0.84) for the larynx, and 0.69 (0.67-0.71) for the constrictor. Auto-segmentations, when compared to two sets of manual segmentations in 10 randomly selected scans, showed better agreement (DSC) with each observer than inter-observer DSC. Prospective analysis showed that most manual modifications needed for clinical use were minor, suggesting auto-contouring could increase clinical efficiency. Trained segmentation models are available for research use upon request via https://github.com/cerr/CERR/wiki/Auto-Segmentation-models. We developed deep learning-based auto-segmentation models for swallowing and chewing structures in CT and demonstrated their potential for use in treatment planning to limit complications post-RT. To the best of our knowledge, this is the only prospectively-validated deep learning-based model for segmenting chewing and swallowing structures in CT. Segmentation models have been made open-source to facilitate reproducibility and multi-institutional research.
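
The evaluation above hinges on two technical ingredients: the Dice similarity coefficient (DSC) used for all quantitative comparisons, and the ensembling of predictions from axial, coronal, and sagittal view models. The short Python sketch below illustrates both on synthetic arrays; the function names, the simple probability-averaging scheme, and the 0.5 threshold are illustrative assumptions, not the authors' released implementation (which is distributed through CERR at the link above).

```python
# Minimal sketch, assuming binary masks and per-voxel probability maps as NumPy
# arrays on a common grid; not the CERR/DeepLabV3+ pipeline described in the paper.
import numpy as np

def dice_similarity(pred, ref):
    """DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

def ensemble_three_views(prob_axial, prob_coronal, prob_sagittal, threshold=0.5):
    """Average per-voxel foreground probabilities from three view-specific
    models (already resampled onto the same volume grid) and threshold the mean."""
    mean_prob = (prob_axial + prob_coronal + prob_sagittal) / 3.0
    return mean_prob >= threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shape = (64, 96, 96)                 # toy CT sub-volume (slices, rows, cols)
    reference = rng.random(shape) > 0.7  # stand-in for a manual contour
    # Stand-ins for the axial/coronal/sagittal probability maps of one structure.
    view_probs = [np.clip(reference + 0.3 * rng.standard_normal(shape), 0.0, 1.0)
                  for _ in range(3)]
    auto_mask = ensemble_three_views(*view_probs)
    print(f"DSC vs. reference: {dice_similarity(auto_mask, reference):.3f}")
```

On real data, per-structure DSC values could then be aggregated across a test set with np.median and np.percentile to report medians and inter-quartile ranges in the style used above.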

Similar Articles

Evaluating Automatic Segmentation for Swallowing-Related Organs for Head and Neck Cancer.
Technol Cancer Res Treat. 2022 Jan-Dec;21:15330338221105724. doi: 10.1177/15330338221105724.

Clinical validation of atlas-based auto-segmentation of multiple target volumes and normal tissue (swallowing/mastication) structures in the head and neck.
Int J Radiat Oncol Biol Phys. 2011 Nov 15;81(4):950-7. doi: 10.1016/j.ijrobp.2010.07.009. Epub 2010 Oct 6.

Clinical acceptability of automatically generated lymph node levels and structures of deglutition and mastication for head and neck radiation therapy.
Phys Imaging Radiat Oncol. 2024 Feb 1;29:100540. doi: 10.1016/j.phro.2024.100540. eCollection 2024 Jan.

Cited By

Artificial intelligence in the diagnosis and management of dysphagia: a scoping review.
Codas. 2025 Aug 8;37(4):e20240305. doi: 10.1590/2317-1782/e20240305en. eCollection 2025.

Digital health technologies in swallowing care from screening to rehabilitation: A narrative review.
Auris Nasus Larynx. 2025 May 21;52(4):319-326. doi: 10.1016/j.anl.2025.05.002.

Dosimetric impact of adaptive proton therapy in head and neck cancer - A review.
Clin Transl Radiat Oncol. 2023 Feb 16;39:100598. doi: 10.1016/j.ctro.2023.100598. eCollection 2023 Mar.
