
Deep-learning-based detection and segmentation of organs at risk in nasopharyngeal carcinoma computed tomographic images for radiotherapy planning.

Affiliations

Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, No. 1838 Guangzhou Northern Avenue, Baiyun District, Guangzhou, 510515, Guangdong, China.

Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, Guangdong, China.

Publication Information

Eur Radiol. 2019 Apr;29(4):1961-1967. doi: 10.1007/s00330-018-5748-9. Epub 2018 Oct 9.

Abstract

OBJECTIVE

Accurate detection and segmentation of organs at risk (OARs) in CT images is a key step in efficient radiation therapy planning for nasopharyngeal carcinoma (NPC). We developed a fully automated deep-learning-based method for CT images, termed the organs-at-risk detection and segmentation network (ODS net), and investigated its performance in automated detection and segmentation of OARs.

METHODS

The ODS net consists of two convolutional neural networks (CNNs). The first CNN proposes organ bounding boxes along with their scores, and the second CNN uses the proposed bounding boxes to predict a segmentation mask for each organ. A total of 185 subjects were included in this study for statistical comparison. Sensitivity and specificity were used to assess detection performance, and the Dice coefficient was used to quantify the overlap between automated and manual segmentations. Paired-samples t tests and analysis of variance were employed for statistical analysis.
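For reference, the sketch below shows how the Dice coefficient and the detection sensitivity/specificity are conventionally computed on binary masks. This is a minimal NumPy illustration, not the authors' code; the function names, per-image boolean vectors, and toy mask shapes are illustrative assumptions.

import numpy as np

def dice_coefficient(pred, truth):
    # Dice overlap between an automated mask and a manual mask:
    # 2 * |A intersect B| / (|A| + |B|)
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def detection_rates(detected, present):
    # detected / present: hypothetical boolean vectors, one entry per image,
    # marking whether an organ was flagged by the network and whether it is
    # annotated in the manual ground truth
    detected, present = np.asarray(detected, bool), np.asarray(present, bool)
    tp = np.logical_and(detected, present).sum()
    tn = np.logical_and(~detected, ~present).sum()
    fn = np.logical_and(~detected, present).sum()
    fp = np.logical_and(detected, ~present).sum()
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# toy example with random masks and a small detection vector
rng = np.random.default_rng(0)
pred = rng.random((64, 64)) > 0.5
truth = rng.random((64, 64)) > 0.5
print("Dice:", dice_coefficient(pred, truth))
print("Sensitivity, specificity:",
      detection_rates([True, True, False, True], [True, True, True, False]))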

RESULTS

ODS net provided accurate detection, with a sensitivity of 0.997 to 1 for most organs and a specificity of 0.983 to 0.999. Furthermore, its segmentation results agreed closely with manual segmentation, with a Dice coefficient above 0.85 for most organs. Across all organs together, ODS net achieved a significantly higher Dice coefficient than a fully convolutional neural network (FCN) (0.861 ± 0.07 vs 0.8 ± 0.07, p = 0.0003). The Dice coefficients of each OAR did not differ significantly between patients at different T stages.

CONCLUSION

The ODS net yielded accurate automated detection and segmentation of OARs in CT images and thereby may improve and facilitate radiotherapy planning for NPC.

KEY POINTS

• A fully automated deep-learning method (ODS net) was developed to detect and segment OARs in clinical CT images.

• This deep-learning-based framework produces reliable detection and segmentation results and can therefore be useful for delineating OARs in NPC radiotherapy planning.

• The framework requires approximately 30 s to delineate the OARs in a single image, which is suitable for clinical workflows.

