

Automated 3D U-net based segmentation of neonatal cerebral ventricles from 3D ultrasound images.

Affiliations

School of Engineering, University of Guelph, Guelph, Ontario, Canada.

Department of Clinical Neurological Sciences, Schulich School of Medicine and Dentistry, University of Western Ontario, London, Ontario, Canada.

Publication information

Med Phys. 2022 Feb;49(2):1034-1046. doi: 10.1002/mp.15432. Epub 2022 Jan 12.

Abstract

BACKGROUND

Intraventricular hemorrhage (IVH) within the cerebral lateral ventricles affects 20-30% of very low birth weight infants (<1500 g). As the ventricles increase in size, the intracranial pressure increases, leading to post-hemorrhagic ventricular dilatation (PHVD), an abnormal enlargement of the head. The most widely used imaging tool for measuring IVH and PHVD is cranial two-dimensional (2D) ultrasound (US). Estimating volumetric changes over time with 2D US is unreliable because of high user variability in locating the same anatomical position across scanning sessions. Compared to 2D US, three-dimensional (3D) US is more sensitive to volumetric changes in the ventricles and does not suffer from variability in slice acquisition. However, 3D US images require segmentation of the ventricular surface, which is tedious and time-consuming when done manually.

PURPOSE

A fast, automated ventricle segmentation method for 3D US would provide quantitative information in a timely manner when monitoring IVH and PHVD in pre-term neonates. To this end, we developed a fast and fully automated segmentation method to segment neonatal cerebral lateral ventricles from 3D US images using deep learning.

METHODS

Our method is a 3D U-Net ensemble composed of three U-Net variants, each highlighting a different aspect of the segmentation task, such as the shape and boundary of the ventricles. The ensemble combines a U-Net++, an attention U-Net, and a U-Net with a deep learning-based shape prior using a mean voting strategy. We used a dataset of 190 3D US images separated into two subsets: one of 87 images containing both ventricles, and one of 103 images containing only one ventricle (caused by a limited field of view during acquisition). We conducted fivefold cross-validation to evaluate the models on a larger amount of test data: 165 test images, of which 75 have two ventricles (two-ventricle images) and 90 have one ventricle (one-ventricle images). We compared these results to each stand-alone model and to previous works, including 2D multiplane U-Net and 2D SegNet models.
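The mean voting strategy described above can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the probability-map representation, and the 0.5 threshold are assumptions.

```python
import numpy as np

def ensemble_mean_vote(prob_maps, threshold=0.5):
    """Combine per-model voxel probability maps by averaging (mean voting),
    then threshold the mean to a binary ventricle mask.

    prob_maps: list of arrays of shape (D, H, W) with values in [0, 1],
    one per model (e.g. U-Net++, attention U-Net, shape-prior U-Net).
    """
    mean_prob = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return (mean_prob >= threshold).astype(np.uint8)

# Toy example: three 2x2x2 "probability maps" from three models.
a = np.full((2, 2, 2), 0.9)
b = np.full((2, 2, 2), 0.6)
c = np.full((2, 2, 2), 0.2)
mask = ensemble_mean_vote([a, b, c])
# mean probability is (0.9 + 0.6 + 0.2) / 3 ≈ 0.567, above the threshold,
# so every voxel is labeled ventricle here
```

Averaging soft probabilities before thresholding lets a confident model outvote an uncertain one, which a hard majority vote on binary masks would not.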

RESULTS

Using fivefold cross-validation, the ensemble method reported a Dice similarity coefficient (DSC) of 0.720 ± 0.074, an absolute volumetric difference (VD) of 3.7 ± 4.1 cm³, and a mean absolute surface distance (MAD) of 1.14 ± 0.41 mm on 75 two-ventricle test images. On 90 one-ventricle test images, the model after cross-validation reported DSC, VD, and MAD values of 0.806 ± 0.111, 3.5 ± 2.9 cm³, and 1.37 ± 1.70 mm, respectively. Compared to alternatives, the proposed ensemble yielded higher segmentation accuracy on both test data sets. Our method required approximately 5 s to segment one image and was substantially faster than state-of-the-art conventional methods.
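For reference, the DSC and VD metrics reported above can be computed from binary masks roughly as sketched below. The function names and the voxel-volume parameter are illustrative assumptions; MAD additionally requires extracting surface points and is omitted here.

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary masks:
    2 * |intersection| / (|pred| + |gt|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom else 1.0

def absolute_volume_difference(pred, gt, voxel_volume_cm3):
    """Absolute volumetric difference in cm³, given the volume of one voxel."""
    return abs(int(pred.sum()) - int(gt.sum())) * voxel_volume_cm3

# Toy masks: pred covers slices 0-1, gt covers slices 1-2 of a 4x4x4 volume,
# so each mask has 32 voxels and they overlap on 16 voxels (slice 1).
pred = np.zeros((4, 4, 4), dtype=np.uint8); pred[:2] = 1
gt = np.zeros((4, 4, 4), dtype=np.uint8); gt[1:3] = 1
# DSC = 2 * 16 / (32 + 32) = 0.5; equal volumes give VD = 0
```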

CONCLUSIONS

Compared to state-of-the-art non-deep learning methods, our deep learning-based method was more efficient in segmenting neonatal cerebral lateral ventricles from 3D US images, with comparable or better DSC, VD, and MAD performance. Our dataset was the largest to date (190 images) for this segmentation problem, and ours is the first method to segment images that show only one lateral cerebral ventricle.

