

Multi-modality self-attention aware deep network for 3D biomedical segmentation.

Affiliations

Faculty of Information Technology, Beijing University of Technology, Beijing, China.

Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China.

Publication information

BMC Med Inform Decis Mak. 2020 Jul 9;20(Suppl 3):119. doi: 10.1186/s12911-020-1109-0.

Abstract

BACKGROUND

Deep-learning-based segmentation models have gradually been applied to biomedical images and have achieved state-of-the-art performance in 3D biomedical segmentation. However, most existing biomedical segmentation studies address application cases that use a single type of medical image from the corresponding examination method. In practical clinical radiology, multiple imaging examinations are normally required for a final diagnosis, especially for severe diseases such as cancers. We therefore study how to make full use of the complementary information in multi-modal images, by employing multi-modal images and exploring effective multi-modality fusion based on deep networks, drawing on radiologists' clinical experience in image analysis.

METHODS

Drawing on the diagnostic experience of human radiologists, we propose a new self-attention aware mechanism that improves segmentation performance by paying different amounts of attention to different modal images and different symptoms. First, we propose a multi-path encoder-decoder deep network for 3D biomedical segmentation. Second, to leverage the complementary information among different modalities, we introduce an attention structure called the Multi-Modality Self-Attention Aware (MMSA) convolution. The multi-modal images used in this paper are MR scans of different modalities, which are fed into the network separately. Self-attention weighted fusion of the multi-modal features is then performed with the proposed MMSA, which adaptively adjusts the fusion weights according to the contribution, learned from the labeled data, of different modalities and of the different features that reveal different symptoms.
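To make the adaptive fusion idea concrete: the abstract does not include code, so the following is a minimal PyTorch sketch assuming an SE-style squeeze-and-excitation gating over concatenated per-modality 3D features (the abstract's baseline comparison is a U-Net with SE blocks). The class name MMSAFusion, the layer layout, and the reduction parameter are illustrative assumptions, not the authors' published architecture.

import torch
import torch.nn as nn

class MMSAFusion(nn.Module):
    """Hypothetical sketch of a multi-modality self-attention aware
    (MMSA) fusion block. The paper describes adaptive, learned fusion
    weights over per-modality features; the SE-style gating used here
    is our assumption, not the authors' exact design."""

    def __init__(self, num_modalities: int, channels: int, reduction: int = 4):
        super().__init__()
        total = num_modalities * channels
        # Squeeze: global average pool over the 3D volume, then learn
        # one fusion weight per (modality, channel) pair.
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Sequential(
            nn.Linear(total, total // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(total // reduction, total),
            nn.Sigmoid(),  # fusion weights in (0, 1)
        )

    def forward(self, feats):
        # feats: list of per-modality tensors, each (B, C, D, H, W)
        x = torch.cat(feats, dim=1)               # (B, M*C, D, H, W)
        b, mc = x.shape[0], x.shape[1]
        w = self.fc(self.pool(x).view(b, mc))     # learned per-channel weights
        return x * w.view(b, mc, 1, 1, 1)         # reweighted, fused features

# Example: fuse encoder features from 4 MR modalities (e.g. T1, T1c, T2, FLAIR)
fusion = MMSAFusion(num_modalities=4, channels=16)
feats = [torch.randn(1, 16, 8, 32, 32) for _ in range(4)]
print(fusion(feats).shape)  # torch.Size([1, 64, 8, 32, 32])

The design point this illustrates is that the fusion weights are produced by a learned, input-dependent gate rather than fixed averaging, so each modality's contribution can adapt per volume according to what the labeled data indicate.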

RESULTS

Experiments were conducted on the public competition dataset BRATS-2015. The results show that our proposed method achieves Dice scores of 0.8726, 0.6563, and 0.8313 for the whole tumor, the tumor core, and the enhancing tumor core, respectively. Compared with a U-Net with SE blocks, these scores are higher by 0.0212, 0.031, and 0.0304.
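For reference, the Dice score reported above is the standard overlap measure between a predicted mask P and the ground truth T, 2|P ∩ T| / (|P| + |T|). The paper's exact evaluation code is not given; the following is a minimal NumPy implementation of the standard definition.

import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks:
    2 * |pred & truth| / (|pred| + |truth|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy 3D example: prediction covers 2 of the 3 labeled slices
pred = np.zeros((4, 4, 4)); pred[:2] = 1    # 32 voxels
truth = np.zeros((4, 4, 4)); truth[:3] = 1  # 48 voxels
print(round(dice_score(pred, truth), 4))    # 0.8 = 2*32 / (32 + 48)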

CONCLUSIONS

We present a multi-modality self-attention aware convolution that achieves better segmentation results through an adaptive weighted fusion mechanism exploiting multiple medical image modalities. Experimental results demonstrate the effectiveness of our method and its applicability to multi-modality fusion based medical image analysis.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cfcc/7346322/e3f8332b2de1/12911_2020_1109_Fig1_HTML.jpg
