Fast interactive medical image segmentation with weakly supervised deep learning method.

Affiliations

ImViA Laboratory, University of Burgundy, Dijon, France.

Radiation Oncology Department, CGFL, Batiment I3M, 64b rue sully, 21000, Dijon, France.

Publication Information

Int J Comput Assist Radiol Surg. 2020 Sep;15(9):1437-1444. doi: 10.1007/s11548-020-02223-x. Epub 2020 Jul 11.

Abstract

PURPOSE

Accurate image segmentation is the first critical step in medical image analysis and interventions, and deep neural networks are a promising approach to it, provided that sufficiently large and diverse expert-annotated datasets are available. However, annotated datasets are often limited: image acquisition is prone to variations in parameters, annotation requires high-level expert knowledge, and manually labeling targets by tracing their contours is laborious. Developing fast, interactive, and weakly supervised deep learning methods is thus highly desirable.

METHODS

We propose a new, efficient deep learning method that accurately segments targets from images while simultaneously generating an annotated dataset for deep learning methods. A generative neural network first predicts prior knowledge (i.e., a contour proposal) from pseudo-contour landmarks. This proposal is then refined by a convolutional neural network that leverages both the predicted prior knowledge and the raw input image. Our method was evaluated on a clinical database of 145 intraoperative ultrasound and 78 postoperative CT images from image-guided prostate brachytherapy. It was also evaluated on cardiac multi-structure segmentation from 450 2D echocardiographic images.
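As an illustration only (the abstract does not specify the architectures), here is a minimal PyTorch-style sketch of the two-stage idea: a prior network maps user-provided pseudo-contour landmarks to a contour proposal, and a refinement CNN consumes the raw image together with that proposal. The module names, the channel-concatenation fusion, and all shapes are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two-stage pipeline described in METHODS.
# `prior_net` and `refine_net` stand in for the paper's generative prior
# network and refinement CNN; their real architectures are not given here.
import torch
import torch.nn as nn

class TwoStageSegmenter(nn.Module):
    def __init__(self, prior_net: nn.Module, refine_net: nn.Module):
        super().__init__()
        self.prior_net = prior_net      # landmark map -> contour proposal
        self.refine_net = refine_net    # (image, proposal) -> final mask

    def forward(self, image: torch.Tensor, landmarks: torch.Tensor) -> torch.Tensor:
        # Stage 1: predict prior knowledge (a contour proposal) from the
        # interactive pseudo-contour landmarks.
        proposal = self.prior_net(landmarks)
        # Stage 2: refine the proposal with a CNN that sees both the raw
        # image and the predicted prior, fused here by channel
        # concatenation (one plausible choice).
        return self.refine_net(torch.cat([image, proposal], dim=1))
```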

RESULTS

Experimental results show that our model can segment the prostate clinical target volume in 0.499 s (i.e., 7.79 ms per image), with an average Dice coefficient of 96.9 ± 0.9% and 95.4 ± 0.9%, a 3D Hausdorff distance of 4.25 ± 4.58 mm and 5.17 ± 1.41 mm, and a volumetric overlap ratio of 93.9 ± 1.80% and 91.3 ± 1.70% for TRUS and CT images, respectively. It also yielded an average Dice coefficient of 96.3 ± 1.3% on echocardiographic images.
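For reference, the overlap metrics reported above can be computed from binary masks as in the following sketch. The Dice definition is standard; "volumetric overlap ratio" is taken here to mean the Jaccard index, which is one common convention and an assumption about the paper's exact definition.

```python
# Standard overlap metrics for binary segmentation masks (NumPy arrays).
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum())

def volumetric_overlap_ratio(pred: np.ndarray, gt: np.ndarray) -> float:
    """Assumed Jaccard-style definition: |A ∩ B| / |A ∪ B|."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return intersection / union
```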

CONCLUSIONS

We proposed and evaluated a fast, interactive deep learning method for accurate medical image segmentation. Moreover, our approach has the potential to address the bottleneck that deep learning methods face in adapting to inter-clinical variations, and to speed up the annotation process.
