A slice classification model-facilitated 3D encoder-decoder network for segmenting organs at risk in head and neck cancer.

Author Information

Zhang Shuming, Wang Hao, Tian Suqing, Zhang Xuyang, Li Jiaqi, Lei Runhong, Gao Mingze, Liu Chunlei, Yang Li, Bi Xinfang, Zhu Linlin, Zhu Senhua, Xu Ting, Yang Ruijie

Affiliations

Department of Radiation Oncology, Peking University Third Hospital, Beijing, China.

Cancer Center, Beijing Luhe Hospital, Capital Medical University, Beijing, China.

Publication Information

J Radiat Res. 2021 Jan 1;62(1):94-103. doi: 10.1093/jrr/rraa094.

Abstract

For deep learning networks used to segment organs at risk (OARs) in head and neck (H&N) cancer, the class imbalance between small-volume OARs and the whole computed tomography (CT) volume leads to delineations with serious false positives on irrelevant slices and to unnecessary, time-consuming computation. To alleviate this problem, a slice classification model-facilitated 3D encoder-decoder network was developed and validated. In the developed two-step segmentation model, a slice classification model was first used to classify CT slices into six categories along the craniocaudal direction; the slices belonging to the target categories of each OAR were then passed to the corresponding 3D encoder-decoder segmentation network. All patients were divided into training (n = 120), validation (n = 30) and testing (n = 20) datasets. The average accuracy of the slice classification model was 95.99%. The Dice similarity coefficient and 95% Hausdorff distance, respectively, for each OAR were as follows: right eye (0.88 ± 0.03 and 1.57 ± 0.92 mm), left eye (0.89 ± 0.03 and 1.35 ± 0.43 mm), right optic nerve (0.72 ± 0.09 and 1.79 ± 1.01 mm), left optic nerve (0.73 ± 0.09 and 1.60 ± 0.71 mm), brainstem (0.87 ± 0.04 and 2.28 ± 0.99 mm), right temporal lobe (0.81 ± 0.12 and 3.28 ± 2.27 mm), left temporal lobe (0.82 ± 0.09 and 3.73 ± 2.08 mm), right temporomandibular joint (0.70 ± 0.13 and 1.79 ± 0.79 mm), left temporomandibular joint (0.70 ± 0.16 and 1.98 ± 1.48 mm), mandible (0.89 ± 0.02 and 1.66 ± 0.51 mm), right parotid (0.77 ± 0.07 and 7.30 ± 4.19 mm) and left parotid (0.71 ± 0.12 and 8.41 ± 4.84 mm). The total segmentation time was 40.13 s. The 3D encoder-decoder network facilitated by the slice classification model demonstrated superior accuracy and efficiency in segmenting OARs in H&N CT images, which may significantly reduce the workload of radiation oncologists.
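
The two-step design can be summarized in code. The following is a minimal sketch, not the authors' implementation: the `slice_classifier` and per-OAR `segmenter` callables, the six category indices (0-5) and the OAR-to-category mapping `OAR_CATEGORIES` are hypothetical placeholders for the networks and mapping described in the paper.

```python
# Minimal sketch of the two-step pipeline (NOT the authors' code).
# `slice_classifier`, the per-OAR `segmenter` callables, the category indices
# and the mapping below are hypothetical placeholders.
import numpy as np

# Hypothetical mapping: which craniocaudal slice categories each OAR may span.
OAR_CATEGORIES = {
    "brainstem": {1, 2, 3},
    "mandible": {3, 4},
    "left_parotid": {3, 4},
}

def classify_slices(ct_volume, slice_classifier):
    """Step 1: assign each axial slice (axis 0) to one of six categories."""
    return np.array([slice_classifier(ct_volume[z])
                     for z in range(ct_volume.shape[0])])

def segment_oars(ct_volume, slice_classifier, oar_segmenters):
    """Step 2: run each OAR's 3D encoder-decoder only on its relevant slab."""
    categories = classify_slices(ct_volume, slice_classifier)
    masks = {}
    for oar, segmenter in oar_segmenters.items():
        keep = np.isin(categories, list(OAR_CATEGORIES[oar]))
        mask = np.zeros(ct_volume.shape, dtype=bool)
        if keep.any():
            z_idx = np.where(keep)[0]
            z0, z1 = z_idx.min(), z_idx.max() + 1
            # Restricting the input to the selected slab is what suppresses
            # false positives on irrelevant slices and saves computation.
            mask[z0:z1] = segmenter(ct_volume[z0:z1])
        masks[oar] = mask
    return masks
```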
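
The reported metrics can be computed from binary masks roughly as follows. This is a sketch assuming 3D boolean masks with known voxel spacing in mm and non-empty segmentations; note that several conventions exist for the 95% Hausdorff distance (here the 95th percentile of the pooled symmetric surface distances is used), so the paper's exact definition may differ.

```python
# Sketch of the evaluation metrics, assuming 3D boolean masks, voxel spacing
# in mm, and non-empty masks; the HD95 convention shown here is one of several.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def hausdorff_95(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """95% Hausdorff distance (mm): 95th percentile of the pooled symmetric
    surface-to-surface distances."""
    pred, gt = pred.astype(bool), gt.astype(bool)

    def surface(mask):
        # Surface voxels = mask minus its erosion.
        return mask & ~binary_erosion(mask)

    def directed(a, b):
        # Distance from each surface voxel of `a` to the nearest surface voxel of `b`.
        return distance_transform_edt(~surface(b), sampling=spacing)[surface(a)]

    return np.percentile(np.concatenate([directed(pred, gt),
                                         directed(gt, pred)]), 95)
```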

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3189/7779351/0a9a3d1a14e9/rraa094f1.jpg
