Convolutional neural networks for skull-stripping in brain MR imaging using silver standard masks.

Affiliations

School of Biomedical Engineering and Imaging Sciences, King's College London, UK; Medical Image Computing Laboratory (MICLab), Department of Computer Engineering and Industrial Automation, School of Electrical and Computer Engineering, University of Campinas, Campinas, São Paulo, Brazil.

Departments of Radiology and Clinical Neurosciences, Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada; Seaman Family Magnetic Resonance Research Centre, Foothills Medical Centre, Alberta Health Services, Calgary, Alberta, Canada.

Publication Information

Artif Intell Med. 2019 Jul;98:48-58. doi: 10.1016/j.artmed.2019.06.008. Epub 2019 Jul 23.

Abstract

Manual annotation is considered the "gold standard" in medical image analysis. However, medical imaging datasets that include expert manual segmentation are scarce, as this step is time-consuming and therefore expensive. Moreover, data-driven approaches most often rely on single-rater manual annotation, which biases the trained network toward that single expert. In this work, we propose a convolutional neural network (CNN) for brain extraction in magnetic resonance (MR) imaging that is fully trained with what we refer to as "silver standard" masks, thereby eliminating the cost associated with manual annotation. Silver standard masks are generated by forming a consensus from a set of eight public, non-deep-learning-based brain extraction methods using the Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm. Our method consists of (1) developing a dataset with silver standard masks as input, and (2) implementing a tri-planar method using parallel 2D U-Net-based CNNs (referred to as CONSNet). This term refers to our integrated approach, i.e., training with silver standard masks and using a 2D U-Net-based architecture. We conducted our analysis using three public datasets: the Calgary-Campinas-359 (CC-359), the LONI Probabilistic Brain Atlas (LPBA40), and the Open Access Series of Imaging Studies (OASIS). Five performance metrics were used in our experiments: Dice coefficient, sensitivity, specificity, Hausdorff distance, and symmetric surface-to-surface mean distance. Our results show that we outperform (i.e., achieve larger Dice coefficients than) the current state-of-the-art skull-stripping methods without using gold standard annotations in the CNN training stage. CONSNet is the first deep learning approach fully trained on silver standard data and is, thus, more generalizable.
Using these masks, we eliminate the cost of manual annotation, decrease inter-/intra-rater variability, and avoid the overfitting of CNN segmentation toward one specific manual annotation guideline that can occur when gold standard masks are used. Moreover, once trained, our method takes a few seconds to process a typical brain image volume on a modern high-end GPU, whereas many of the competing methods have processing times on the order of minutes.
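A minimal sketch of two ingredients described in the abstract: fusing several automated brain masks into a single silver-standard consensus, and scoring a mask against that consensus with the Dice coefficient, sensitivity, and specificity. Note the hedges: STAPLE itself is an iterative expectation-maximization algorithm that weights each input method by its estimated performance, so the simple majority vote below is only a simplified stand-in, and the 4x4 masks are toy values, not outputs from the paper.

```python
import numpy as np

def consensus_mask(masks, threshold=0.5):
    """Fuse binary masks from several methods by majority vote.

    A simplified stand-in for STAPLE: a voxel is labeled brain when
    more than `threshold` of the input methods agree.
    """
    stacked = np.stack(masks).astype(float)       # shape: (n_methods, ...)
    return (stacked.mean(axis=0) > threshold).astype(np.uint8)

def dice(pred, truth):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def sensitivity(pred, truth):
    """True-positive rate over the brain voxels."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return np.logical_and(pred, truth).sum() / truth.sum()

def specificity(pred, truth):
    """True-negative rate over the background voxels."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return np.logical_and(~pred, ~truth).sum() / (~truth).sum()

# Toy example: three hypothetical "methods" disagree on a 4x4 slice.
m1 = np.array([[0,1,1,0],[1,1,1,1],[1,1,1,1],[0,1,1,0]])
m2 = np.array([[0,1,1,0],[0,1,1,1],[1,1,1,0],[0,1,1,0]])
m3 = np.array([[0,0,1,0],[1,1,1,1],[1,1,1,1],[0,1,0,0]])

silver = consensus_mask([m1, m2, m3])
print(dice(m2, silver), sensitivity(m2, silver), specificity(m2, silver))
```

In the paper, the inputs to the consensus step are the masks produced by eight non-deep-learning skull-stripping tools, and the resulting silver-standard masks serve as the CNN's training targets in place of expert annotation.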

