Liu Shangwang, Yang Lihan
College of Computer and Information Engineering, Henan Normal University, Xinxiang 453007, China.
Engineering Lab of Intelligence Business & Internet of Things, Xinxiang 453007, China.
Entropy (Basel). 2022 Dec 14;24(12):1823. doi: 10.3390/e24121823.
Single-modality medical images often do not contain enough information to meet the demands of clinical diagnosis, and diagnostic efficiency suffers when clinicians must inspect multiple images at the same time. Image fusion is a technique that combines functional modalities such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT) with anatomical modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) to consolidate their complementary information. Fusing two anatomical modalities (e.g., CT and MRI) is likewise often preferable to a single MRI scan, and the fused images can improve the efficiency and accuracy of clinical diagnosis. To achieve high-quality, high-resolution, detail-rich fusion without hand-crafted priors, this paper proposes an unsupervised deep learning image fusion framework, named the back project dense generative adversarial network (BPDGAN). In particular, we construct a novel network based on the back project dense block (BPDB) and the convolutional block attention module (CBAM). The BPDB effectively mitigates the impact of black backgrounds on image content, while the CBAM improves the preservation of texture and edge information. Qualitative and quantitative experiments demonstrate the superiority of BPDGAN: it outperforms state-of-the-art methods by approximately 19.58%, 14.84%, 10.40% and 86.78% on the AG, EI, Q and Q metrics, respectively.
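The abstract credits the CBAM with improving texture and edge fidelity. For reference, below is a minimal PyTorch sketch of the standard CBAM (Woo et al., 2018), which applies channel attention followed by spatial attention; the channel count, reduction ratio, and kernel size are illustrative defaults, not values taken from the paper, and this is not the authors' own implementation.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention: shared MLP over global average- and max-pooled features."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale

class SpatialAttention(nn.Module):
    """Spatial attention: conv over channel-wise average and max maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)    # channel-wise average
        mx = x.amax(dim=1, keepdim=True)     # channel-wise max
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale

class CBAM(nn.Module):
    """CBAM = channel attention followed by spatial attention."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        return self.sa(self.ca(x))

# Example: refine a hypothetical 64-channel feature map from a fusion network.
feats = torch.randn(1, 64, 128, 128)
refined = CBAM(64)(feats)
print(refined.shape)  # torch.Size([1, 64, 128, 128])
```

Because CBAM only rescales features, it can be dropped after any convolutional block of a fusion generator without changing the feature-map shape, which is presumably how it is slotted into BPDGAN.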