MetricUNet: Synergistic image- and voxel-level learning for precise prostate segmentation via online sampling.

Affiliations

Medical School of Nanjing University, Nanjing, China; National Institute of Healthcare Data Science at Nanjing University, Nanjing, China.

School of Mathematics and Statistics, Xi'an Jiaotong University, Shaanxi, China.

Publication

Med Image Anal. 2021 Jul;71:102039. doi: 10.1016/j.media.2021.102039. Epub 2021 Mar 23.

Abstract

Fully convolutional networks (FCNs), including UNet and VNet, are widely used network architectures for semantic segmentation in recent studies. However, conventional FCNs are typically trained with the cross-entropy or Dice loss, which only calculates the error between predictions and ground-truth labels for each pixel individually. This often results in non-smooth neighborhoods in the predicted segmentation. The problem becomes more serious in CT prostate segmentation, as CT images are usually of low tissue contrast. To address this problem, we propose a two-stage framework, with the first stage quickly localizing the prostate region and the second stage precisely segmenting the prostate with a multi-task UNet architecture. We introduce a novel online metric learning module through voxel-wise sampling in the multi-task network. The proposed network therefore has a dual-branch architecture that tackles two tasks: (1) a segmentation sub-network that generates the prostate segmentation, and (2) a voxel-metric learning sub-network that improves the quality of the learned feature space under the supervision of a metric loss. Specifically, the voxel-metric learning sub-network samples tuples (including triplets and pairs) at the voxel level from the intermediate feature maps. Unlike conventional deep metric learning methods that generate triplets or pairs at the image level before the training phase, our proposed voxel-wise tuples are sampled in an online manner and optimized in an end-to-end fashion via multi-task learning. To evaluate the proposed method, we conduct extensive experiments on a real CT image dataset consisting of 339 patients. The ablation studies show that our method can effectively learn more representative voxel-level features compared with conventional learning methods using cross-entropy or Dice loss, and the comparisons show that the proposed method outperforms the state-of-the-art methods by a reasonable margin.
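The core idea of the voxel-metric learning sub-network can be sketched in a few lines: sample anchor and positive voxels from the foreground of the label map, negative voxels from the background, gather their feature vectors from an intermediate feature map, and apply a margin-based triplet loss. The sketch below is an illustrative NumPy toy, not the authors' implementation; the function names, the 2D (rather than 3D) feature map, and the uniform random sampling strategy are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_voxel_triplets(features, labels, n_triplets=4):
    """Online voxel-wise triplet sampling (illustrative sketch).

    features: (H, W, C) intermediate feature map
    labels:   (H, W) binary ground-truth mask (1 = prostate, 0 = background)
    Returns anchor, positive, negative feature arrays, each (n_triplets, C).
    """
    fg = np.argwhere(labels == 1)          # candidate anchor/positive voxels
    bg = np.argwhere(labels == 0)          # candidate negative voxels
    a_idx = fg[rng.integers(len(fg), size=n_triplets)]
    p_idx = fg[rng.integers(len(fg), size=n_triplets)]
    n_idx = bg[rng.integers(len(bg), size=n_triplets)]
    gather = lambda idx: features[idx[:, 0], idx[:, 1]]
    return gather(a_idx), gather(p_idx), gather(n_idx)

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard margin-based triplet loss over the sampled voxels."""
    d_ap = np.linalg.norm(anchor - positive, axis=1)
    d_an = np.linalg.norm(anchor - negative, axis=1)
    return np.maximum(d_ap - d_an + margin, 0.0).mean()

# Toy example: an 8x8 feature map with 2 channels; left half is "prostate".
feats = rng.normal(size=(8, 8, 2))
feats[:, :4] += 5.0                        # shift foreground features apart
mask = np.zeros((8, 8), dtype=int)
mask[:, :4] = 1
a, p, n = sample_voxel_triplets(feats, mask)
loss = triplet_loss(a, p, n)
```

Because the triplets are drawn from the current feature maps at every step rather than being fixed before training, the sampling automatically tracks the evolving feature space; in a real pipeline this loss would be combined with the segmentation loss and backpropagated jointly.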

