
Asymmetric Ensemble of Asymmetric U-Net Models for Brain Tumor Segmentation With Uncertainty Estimation.

Author Information

Rosas-Gonzalez Sarahi, Birgui-Sekou Taibou, Hidane Moncef, Zemmoura Ilyess, Tauber Clovis

Affiliations

UMR Inserm U1253, iBrain, Université de Tours, Inserm, Tours, France.

LIFAT EA 6300, INSA Centre Val de Loire, Université de Tours, Tours, France.

Publication Information

Front Neurol. 2021 Sep 30;12:609646. doi: 10.3389/fneur.2021.609646. eCollection 2021.

Abstract

Accurate brain tumor segmentation is crucial for clinical assessment, follow-up, and subsequent treatment of gliomas. While convolutional neural networks (CNN) have become the state of the art in this task, most proposed models either use 2D architectures that ignore 3D contextual information or 3D models that require large memory capacity and extensive training databases. In this study, an ensemble of two kinds of U-Net-like models based on 3D and 2.5D convolutions is proposed to segment multimodal magnetic resonance images (MRI). The 3D model uses concatenated data in a modified U-Net architecture. In contrast, the 2.5D model is based on a multi-input strategy to extract low-level features from each modality independently, and on a new 2.5D Multi-View Inception block that merges features from different views of a 3D image while aggregating multi-scale features. The Asymmetric Ensemble of Asymmetric U-Net (AE AU-Net) built on both is designed to balance increased multi-scale and 3D contextual information extraction against low memory consumption. Experiments on the BraTS 2019 dataset show that our model improves segmentation of the enhancing tumor sub-region. Overall, performance is comparable with state-of-the-art results, although with lower training data and memory requirements. In addition, we provide voxel-wise and structure-wise uncertainties of the segmentation results, and we have established qualitative and quantitative relationships between uncertainty and prediction errors. Dice similarity coefficients for the whole tumor, tumor core, and enhancing tumor regions on the BraTS 2019 validation dataset were 0.902, 0.815, and 0.773. We also applied our method to BraTS 2018, with corresponding Dice scores of 0.908, 0.838, and 0.800.
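The reported results use the Dice similarity coefficient, defined for two binary masks A and B as 2|A∩B| / (|A| + |B|). As a minimal illustration of the metric only (not the authors' evaluation code), a pure-Python sketch over flattened 0/1 masks:

```python
def dice(pred, target):
    """Dice similarity coefficient between two binary masks.

    pred, target: equal-length sequences of 0/1 labels (flattened volumes).
    Returns 1.0 when both masks are empty, by convention.
    """
    inter = sum(p & t for p, t in zip(pred, target))  # |A ∩ B|
    total = sum(pred) + sum(target)                   # |A| + |B|
    return 2.0 * inter / total if total else 1.0

# Example: one of two predicted voxels overlaps a single-voxel target
# → 2*1 / (2+1) = 2/3
score = dice([1, 1, 0, 0], [1, 0, 0, 0])
```

In practice the metric is computed per tumor sub-region (whole tumor, tumor core, enhancing tumor) by binarizing the multi-class prediction against each region definition.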

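The abstract mentions voxel-wise uncertainty derived from the segmentation results. One common way to score such uncertainty, sketched here under the simplifying assumption of a binary (tumor vs. background) ensemble — the paper's exact formulation may differ — is the entropy of the mean predicted probability across ensemble members:

```python
import math

def ensemble_uncertainty(member_probs):
    """Voxel-wise uncertainty from an ensemble of binary predictors.

    member_probs: list of per-member lists, each holding one foreground
    probability per voxel. Returns (mean probabilities, entropies in bits);
    entropy peaks at 1.0 when members maximally disagree (mean = 0.5).
    """
    n_members = len(member_probs)
    n_voxels = len(member_probs[0])
    mean = [sum(m[v] for m in member_probs) / n_members for v in range(n_voxels)]

    def binary_entropy(p):
        if p in (0.0, 1.0):
            return 0.0  # fully certain voxel
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    return mean, [binary_entropy(p) for p in mean]

# Two members disagreeing completely on voxel 0, agreeing on voxel 1:
means, unc = ensemble_uncertainty([[1.0, 0.2], [0.0, 0.2]])
```

Structure-wise uncertainty can then be obtained by aggregating these voxel scores over each predicted tumor sub-region.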

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8406/8515181/c0716c396a23/fneur-12-609646-g0001.jpg
