
DMC-Fusion: Deep Multi-Cascade Fusion With Classifier-Based Feature Synthesis for Medical Multi-Modal Images.

Publication Information

IEEE J Biomed Health Inform. 2021 Sep;25(9):3438-3449. doi: 10.1109/JBHI.2021.3083752. Epub 2021 Sep 3.

Abstract

Multi-modal medical image fusion is a challenging yet important task for precision diagnosis and surgical planning in clinical practice. Although single-feature fusion strategies such as DenseFuse have achieved inspiring performance, they tend not to fully preserve the source image features. In this paper, a deep multi-cascade fusion framework with classifier-based feature synthesis is proposed to automatically fuse multi-modal medical images. It consists of a pre-trained autoencoder based on dense connections, a feature classifier, and a multi-cascade fusion decoder that fuses high-frequency and low-frequency features separately. The encoder and decoder are transferred from the MS-COCO dataset and pre-trained simultaneously on public multi-modal medical image datasets to extract features. Feature classification is conducted through Gaussian high-pass filtering and peak signal-to-noise ratio (PSNR) thresholding, after which the feature maps in each layer of the pre-trained Dense-Block and decoder are divided into high-frequency and low-frequency sequences. Specifically, in the proposed feature fusion block, a parameter-adaptive pulse-coupled neural network and an l-weighted strategy are employed to fuse the high-frequency and low-frequency features, respectively. Finally, we design a novel multi-cascade fusion decoder over the entire decoding feature stage to selectively fuse useful information from different modalities. We also validate our approach on brain disease classification using the fused images, and a statistical significance test is performed to show that the improvement in classification performance is attributable to the fusion. Experimental results demonstrate that the proposed method achieves state-of-the-art performance in both qualitative and quantitative evaluations.
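
The abstract only names the operations, so the sketch below illustrates one plausible way the classifier-based split and the low-frequency fusion could be realized with NumPy/SciPy: each feature map is compared against its Gaussian low-pass version, the PSNR between the two decides whether the map goes to the high-frequency or the low-frequency sequence, and low-frequency maps are fused with an l1-activity-weighted average (one reading of the abstract's "l-weighted" strategy). The function names, the sigma value, the PSNR threshold, and the l1 interpretation are illustrative assumptions rather than the paper's implementation, and the parameter-adaptive PCNN used for the high-frequency branch is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def gaussian_highpass(feature_map, sigma=2.0):
    """High-pass response: the feature map minus its Gaussian low-pass version."""
    return feature_map - gaussian_filter(feature_map, sigma=sigma)


def psnr(reference, test):
    """Peak signal-to-noise ratio between a feature map and a filtered version of it."""
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return np.inf
    data_range = reference.max() - reference.min()
    return 10.0 * np.log10((data_range ** 2) / mse)


def classify_feature_maps(feature_maps, sigma=2.0, psnr_threshold=30.0):
    """Split per-channel feature maps into high- and low-frequency sequences.

    A high PSNR between a map and its low-pass version means little energy was
    removed by the Gaussian high-pass filter, i.e. the map is mostly
    low-frequency; a low PSNR marks a high-frequency map. The threshold is an
    assumed value, not taken from the paper.
    """
    high_freq, low_freq = [], []
    for fmap in feature_maps:
        lowpassed = fmap - gaussian_highpass(fmap, sigma=sigma)  # Gaussian low-pass version
        if psnr(fmap, lowpassed) >= psnr_threshold:
            low_freq.append(fmap)        # filtering changed little: low-frequency map
        else:
            high_freq.append(fmap)       # strong high-frequency content
    return high_freq, low_freq


def l1_weighted_fusion(fmap_a, fmap_b, smooth_sigma=1.0):
    """Fuse two low-frequency maps with weights from smoothed local l1 activity."""
    act_a = gaussian_filter(np.abs(fmap_a), sigma=smooth_sigma)
    act_b = gaussian_filter(np.abs(fmap_b), sigma=smooth_sigma)
    w_a = act_a / (act_a + act_b + 1e-8)
    return w_a * fmap_a + (1.0 - w_a) * fmap_b


if __name__ == "__main__":
    # Toy usage on random "feature maps" from two modalities.
    rng = np.random.default_rng(0)
    maps_a = [rng.standard_normal((32, 32)) for _ in range(4)]
    maps_b = [rng.standard_normal((32, 32)) for _ in range(4)]
    high_a, low_a = classify_feature_maps(maps_a)
    print(f"modality A: {len(high_a)} high-frequency, {len(low_a)} low-frequency maps")
    fused_low = l1_weighted_fusion(maps_a[0], maps_b[0])
    print("fused low-frequency map shape:", fused_low.shape)
```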

