

Comparative clinical evaluation of atlas and deep-learning-based auto-segmentation of organ structures in liver cancer.

Affiliations

Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea.

Peter MacCallum Cancer Centre, Melbourne, VIC, Australia.

Publication information

Radiat Oncol. 2019 Nov 27;14(1):213. doi: 10.1186/s13014-019-1392-z.

Abstract

BACKGROUND

Accurate and standardized descriptions of organs at risk (OARs) are essential in radiation therapy for treatment planning and evaluation. Traditionally, physicians have contoured patient images manually, which is time-consuming and subject to inter-observer variability. This study aims to a) investigate whether customized, deep-learning-based auto-segmentation can overcome the limitations of manual contouring and b) compare its performance against a typical atlas-based auto-segmentation method for organ structures in liver cancer.

METHODS

Contrast-enhanced computed tomography (CT) image sets of 70 liver cancer patients were used, and four OARs (heart, liver, kidney, and stomach) were manually delineated by three experienced physicians as reference structures. Atlas-based and deep-learning auto-segmentations were performed with MIM Maestro 6.5 (MIM Software Inc., Cleveland, OH) and with a deep convolutional neural network (DCNN), respectively. The Hausdorff distance (HD), Dice similarity coefficient (DSC), volume overlap error (VOE), and relative volume difference (RVD) were used to quantitatively evaluate the two auto-segmentation methods against the reference set of the four OAR structures.
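For reference, the sketch below illustrates how these four metrics are commonly computed from a pair of binary 3D masks (auto-segmented contour vs. manually delineated reference). It is a minimal NumPy/SciPy sketch using standard definitions; the paper's exact conventions (percentage scaling, the sign and reference volume used for RVD, whether HD is the maximum or a percentile distance, voxel-spacing handling) are not stated in the abstract, so this should not be read as the authors' implementation.

import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dsc(auto, ref):
    # Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)
    inter = np.logical_and(auto, ref).sum()
    return 2.0 * inter / (auto.sum() + ref.sum())

def voe(auto, ref):
    # Volume overlap error: 1 - Jaccard index, reported here in percent
    inter = np.logical_and(auto, ref).sum()
    union = np.logical_or(auto, ref).sum()
    return 100.0 * (1.0 - inter / union)

def rvd(auto, ref):
    # Relative volume difference of the auto contour vs. the reference, in percent
    # (sign convention assumed: positive means the auto contour is larger)
    return 100.0 * (auto.sum() - ref.sum()) / ref.sum()

def hausdorff(auto, ref, spacing=(1.0, 1.0, 1.0)):
    # Symmetric Hausdorff distance between the voxel point sets of the two masks,
    # scaled by the (assumed) voxel spacing so the result is in physical units (e.g. mm)
    pts_a = np.argwhere(auto) * np.asarray(spacing)
    pts_r = np.argwhere(ref) * np.asarray(spacing)
    return max(directed_hausdorff(pts_a, pts_r)[0],
               directed_hausdorff(pts_r, pts_a)[0])

Applied per patient and per OAR across the 70 image sets, functions of this kind would yield the per-structure statistics (mean ± SD) of the type summarized in the Results.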

RESULTS

The atlas-based method yielded the following mean DSC ± standard deviation (SD) values for the heart, liver, right kidney, left kidney, and stomach: 0.92 ± 0.04, 0.93 ± 0.02, 0.86 ± 0.07, 0.85 ± 0.11, and 0.60 ± 0.13, respectively. The deep-learning-based method yielded corresponding values of 0.94 ± 0.01, 0.93 ± 0.01, 0.88 ± 0.03, 0.86 ± 0.03, and 0.73 ± 0.09. These results show that the deep learning framework is superior to the atlas-based framework for every OAR except the liver. The advantage was largest for the stomach, where the DSC, VOE, and RVD differed by up to 21.67%, 25.11%, and 28.80%, respectively.
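For context, the quoted 21.67% DSC gap for the stomach is consistent with expressing the deep-learning result relative to the atlas-based one: (0.73 − 0.60) / 0.60 ≈ 0.2167, i.e. a 21.67% relative improvement; the VOE and RVD differences are presumably reported on the same relative scale.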

CONCLUSIONS

In this study, we demonstrated that a deep learning framework can be used more effectively and efficiently than atlas-based auto-segmentation for most OARs in liver cancer. Extended use of the deep-learning-based framework is anticipated for auto-segmentation of other body sites.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4298/6880380/318fa7f643d3/13014_2019_1392_Fig1_HTML.jpg
