

Brain tumor segmentation in multimodal MRI via pixel-level and feature-level image fusion.

Author information

Liu Yu, Mu Fuhao, Shi Yu, Cheng Juan, Li Chang, Chen Xun

Affiliations

Department of Biomedical Engineering, Hefei University of Technology, Hefei, China.

Anhui Province Key Laboratory of Measuring Theory and Precision Instrument, Hefei University of Technology, Hefei, China.

Publication information

Front Neurosci. 2022 Sep 14;16:1000587. doi: 10.3389/fnins.2022.1000587. eCollection 2022.

Abstract

Brain tumor segmentation in multimodal MRI volumes is of great significance to disease diagnosis, treatment planning, survival prediction and other relevant tasks. However, most existing brain tumor segmentation methods fail to make sufficient use of multimodal information. The most common way is to simply stack the original multimodal images or their low-level features as the model input, and many methods treat each modality data with equal importance to a given segmentation target. In this paper, we introduce multimodal image fusion technique including both pixel-level fusion and feature-level fusion for brain tumor segmentation, aiming to achieve more sufficient and finer utilization of multimodal information. At the pixel level, we present a convolutional network named PIF-Net for 3D MR image fusion to enrich the input modalities of the segmentation model. The fused modalities can strengthen the association among different types of pathological information captured by multiple source modalities, leading to a modality enhancement effect. At the feature level, we design an attention-based modality selection feature fusion (MSFF) module for multimodal feature refinement to address the difference among multiple modalities for a given segmentation target. A two-stage brain tumor segmentation framework is accordingly proposed based on the above components and the popular V-Net model. Experiments are conducted on the BraTS 2019 and BraTS 2020 benchmarks. The results demonstrate that the proposed components on both pixel-level and feature-level fusion can effectively improve the segmentation accuracy of brain tumors.
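The MSFF idea described above (attention weights that select how much each modality's features contribute to a given segmentation target) can be illustrated with a minimal sketch. This is a hypothetical, simplified version for intuition only, not the authors' implementation: the function name `msff_fuse`, the toy scalar-per-modality gate, and the use of plain NumPy in place of a learned 3D convolutional network are all assumptions.

```python
import numpy as np

def msff_fuse(features, w, b):
    """Toy attention-based modality-selection fusion.

    features: (M, C, D, H, W) array of per-modality 3D feature maps.
    w, b:     parameters of a toy gating layer mapping a C-dim
              descriptor to one score per modality.
    Returns the fused (C, D, H, W) map and the modality weights.
    """
    # Global average pooling -> one C-dim descriptor per modality: (M, C)
    desc = features.mean(axis=(2, 3, 4))
    # Toy gate: a scalar relevance score per modality: (M,)
    scores = desc @ w + b
    # Softmax across modalities -> selection weights summing to 1
    e = np.exp(scores - scores.max())
    alpha = e / e.sum()
    # Attention-weighted sum of the modality feature maps: (C, D, H, W)
    fused = np.tensordot(alpha, features, axes=(0, 0))
    return fused, alpha

# Example: 4 MR modalities (e.g., T1, T1ce, T2, FLAIR), 8 feature channels
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8, 2, 4, 4))
w = rng.standard_normal(8)
fused, alpha = msff_fuse(feats, w, 0.0)
```

In the paper's setting the gate would be learned end-to-end so that, for instance, the weights for the enhancing-tumor target could emphasize the T1ce-derived features; here the random parameters merely demonstrate the data flow.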


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c9bb/9515796/3e52d79d3157/fnins-16-1000587-g0001.jpg
