

Deep convolutional neural network for segmentation of thoracic organs-at-risk using cropped 3D images.

Affiliations

Department of Biomedical Engineering, University of Virginia, Charlottesville, VA, 22903, USA.

Department of Radiology and Medical Imaging, University of Virginia, Charlottesville, VA, 22903, USA.

Publication information

Med Phys. 2019 May;46(5):2169-2180. doi: 10.1002/mp.13466. Epub 2019 Mar 21.

Abstract

PURPOSE

Automatic segmentation of organs-at-risk (OARs) is a key step in radiation treatment planning to reduce human effort and bias. Deep convolutional neural networks (DCNNs) have shown great success in many medical image segmentation applications, but challenges remain in handling large 3D images for optimal results. The purpose of this study is to develop a novel DCNN method for thoracic OAR segmentation using cropped 3D images.

METHODS

To segment five organs (left and right lungs, heart, esophagus, and spinal cord) from thoracic CT scans, preprocessing was first performed to unify voxel spacing and intensity. A 3D U-Net was then trained on the resampled thoracic images to localize each organ; the original images were next cropped to contain only one organ and served as the input to that organ's individual segmentation network. The resulting segmentation maps were merged to obtain the final result. The network structures, as well as the training and testing strategies, were optimized for each step. A novel testing augmentation with multiple iterations of image cropping was used. The networks were trained on 36 thoracic CT scans with expert annotations provided by the organizers of the 2017 AAPM Thoracic Auto-segmentation Challenge, and tested on the challenge testing dataset as well as a private dataset.
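The cascaded pipeline described above (resample to unified spacing → coarse localization → per-organ crop → fine segmentation → merge) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names and the fixed crop margin are assumptions, and the actual U-Net models are omitted.

```python
import numpy as np
from scipy.ndimage import zoom

def resample(volume, spacing, target_spacing=(1.0, 1.0, 1.0)):
    """Preprocessing step: resample a CT volume to a unified voxel spacing."""
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    return zoom(volume, factors, order=1)  # linear interpolation

def crop_to_organ(volume, coarse_mask, margin=8):
    """Crop the original volume to the bounding box of one organ's coarse
    localization mask, expanded by a safety margin (margin is an assumption)."""
    idx = np.argwhere(coarse_mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + 1 + margin, volume.shape)
    slices = tuple(slice(l, h) for l, h in zip(lo, hi))
    return volume[slices], slices

def merge_predictions(shape, organ_preds):
    """Paste each organ's cropped segmentation back into a full-size label map."""
    merged = np.zeros(shape, dtype=np.int16)
    for label, (pred, slices) in organ_preds.items():
        merged[slices][pred > 0] = label
    return merged
```

In a full system, the cropped sub-volume from `crop_to_organ` would be fed to that organ's dedicated segmentation network, and `merge_predictions` would combine the five outputs.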

RESULTS

The proposed method earned second place in the live phase of the challenge and first place in the subsequent ongoing phase using the newly developed testing augmentation approach. On average it showed better-than-human performance in terms of Dice scores (spinal cord: 0.893 ± 0.044, right lung: 0.972 ± 0.021, left lung: 0.979 ± 0.008, heart: 0.925 ± 0.015, esophagus: 0.726 ± 0.094), mean surface distance (spinal cord: 0.662 ± 0.248 mm, right lung: 0.933 ± 0.574 mm, left lung: 0.586 ± 0.285 mm, heart: 2.297 ± 0.492 mm, esophagus: 2.341 ± 2.380 mm), and 95% Hausdorff distance (spinal cord: 1.893 ± 0.627 mm, right lung: 3.958 ± 2.845 mm, left lung: 2.103 ± 0.938 mm, heart: 6.570 ± 1.501 mm, esophagus: 8.714 ± 10.588 mm). It also achieved good performance on the private dataset and reduced the editing time to 7.5 min per patient following automatic segmentation.
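For reference, the Dice score reported above measures volume overlap between a predicted mask A and the expert mask B as 2|A∩B| / (|A| + |B|). A minimal sketch (the function name and edge-case convention are illustrative, not from the paper):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|).
    Returns 1.0 when both masks are empty (a common convention)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

Mean surface distance and 95% Hausdorff distance are instead computed from distances between the boundary surfaces of the two masks.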

CONCLUSIONS

The proposed DCNN method demonstrated good performance in automatic OAR segmentation from thoracic CT scans. With improved accuracy and reduced cost for OAR segmentation, it has the potential to support eventual clinical adoption of deep learning in radiation treatment planning.

