Brain Tumor Segmentation via Multi-Modalities Interactive Feature Learning.

Author Information

Wang Bo, Yang Jingyi, Peng Hong, Ai Jingyang, An Lihua, Yang Bo, You Zheng, Ma Lin

Affiliations

The State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing, China.

Beijing Jingzhen Medical Technology Ltd., Beijing, China.

Publication Information

Front Med (Lausanne). 2021 May 13;8:653925. doi: 10.3389/fmed.2021.653925. eCollection 2021.

Abstract

Automatic segmentation of brain tumors from multi-modality magnetic resonance imaging (MRI) data has the potential to enable preoperative planning and intraoperative volume measurement. Recent advances in deep convolutional neural networks have opened up an opportunity to achieve end-to-end segmentation of brain tumor regions. However, the medical image data available for brain tumor segmentation are relatively scarce, and the appearance of brain tumors varies widely, so it is difficult to find a learnable pattern that directly describes tumor regions. In this paper, we propose a novel cross-modality interactive feature learning framework to segment brain tumors from multi-modality data. The core idea is that multi-modality MR data contain rich patterns of normal brain regions, which can be captured easily and then used to detect abnormal regions, i.e., brain tumor regions. The proposed framework consists of two modules: a cross-modality feature extracting module and an attention-guided feature fusing module, which explore rich patterns across the modalities and guide the interaction and fusion of the features extracted from the different modalities. Comprehensive experiments on the BraTS 2018 benchmark show that the proposed cross-modality feature learning framework effectively improves brain tumor segmentation performance compared with baseline and state-of-the-art methods.
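The abstract does not give implementation details, but the two-module structure it describes (per-modality feature extraction followed by attention-guided fusion) can be illustrated with a minimal PyTorch-style sketch. All module names, channel sizes, and the specific attention formulation below are assumptions for illustration only, not the authors' actual architecture.

```python
# A minimal sketch (not the authors' code) of the two-module idea in the
# abstract: one encoder per MR modality, then attention-guided fusion.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Small 3D conv encoder for a single MR modality (hypothetical)."""
    def __init__(self, out_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class AttentionFusion(nn.Module):
    """Weights each modality's feature map with learned attention scores;
    deriving per-modality scores from the concatenated features is one
    plausible reading of 'attention guided feature fusing'."""
    def __init__(self, n_modalities=4, ch=32):
        super().__init__()
        self.score = nn.Conv3d(n_modalities * ch, n_modalities, kernel_size=1)

    def forward(self, feats):               # feats: list of (B, ch, D, H, W)
        stacked = torch.cat(feats, dim=1)   # (B, n*ch, D, H, W)
        attn = torch.softmax(self.score(stacked), dim=1)  # (B, n, D, H, W)
        # Broadcast each modality's attention map over its feature channels.
        fused = sum(attn[:, i:i + 1] * f for i, f in enumerate(feats))
        return fused                        # (B, ch, D, H, W)

class InteractiveSegNet(nn.Module):
    """Toy end-to-end model: 4 encoders -> attention fusion -> seg head."""
    def __init__(self, n_modalities=4, ch=32, n_classes=4):
        super().__init__()
        self.encoders = nn.ModuleList(
            ModalityEncoder(ch) for _ in range(n_modalities))
        self.fuse = AttentionFusion(n_modalities, ch)
        self.head = nn.Conv3d(ch, n_classes, kernel_size=1)

    def forward(self, x):                   # x: (B, n_modalities, D, H, W)
        feats = [enc(x[:, i:i + 1]) for i, enc in enumerate(self.encoders)]
        return self.head(self.fuse(feats))

# Example with a BraTS-style input of 4 modalities (T1, T1ce, T2, FLAIR).
logits = InteractiveSegNet()(torch.randn(1, 4, 16, 32, 32))
print(logits.shape)  # torch.Size([1, 4, 16, 32, 32])
```

The design choice worth noting is that the attention scores are computed jointly from all modalities, so each modality's contribution at a voxel depends on what the other modalities show there, which is the kind of cross-modality interaction the abstract emphasizes.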

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4a22/8158657/d9ef81f8552b/fmed-08-653925-g0001.jpg
