School of Biomedical Engineering (Suzhou), Division of Life Science and Medicine, University of Science and Technology of China, Hefei 230026, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China.
College of Mechanical & Electrical Engineering, Hohai University, Changzhou, 213022, China.
Comput Methods Programs Biomed. 2023 Oct;240:107688. doi: 10.1016/j.cmpb.2023.107688. Epub 2023 Jun 28.
Due to the depth-of-focus (DOF) limitations of microscope optical systems, it is often difficult to obtain fully clear biomedical images under high-magnification microscopy. Multifocus microscopic biomedical image fusion (MFBIF) can effectively solve this problem. Considering both information richness and visual authenticity, this paper proposes a transformer-based network for MFBIF called TransFusion-Net.
TransFusion-Net consists of two modules. One is an interlayer cross-attention module, which obtains feature mappings that capture the long-range dependencies observed among multiple partially focused source images. The other is a spatial attention upsampling network (SAU-Net) module, which extracts global semantic information after spatial attention is further applied. TransFusion-Net can thus simultaneously receive multiple partially focused microscope images and fully exploit the strong correlations between the source images to output accurate fusion results in an end-to-end manner.
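The abstract does not specify the interlayer cross-attention module at implementation level. As a rough, hypothetical sketch of the general mechanism it names, the snippet below computes scaled dot-product cross-attention between the flattened feature maps of two source images, so each position of one image attends to every position of the other (the "long-range dependencies" across sources); all function and variable names here are illustrative, not from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(feat_a, feat_b):
    """Toy cross-attention between two source-image feature maps.

    feat_a, feat_b: (n_tokens, channels) flattened feature maps.
    Queries come from one source, keys/values from the other, so every
    token of image A can attend to every token of image B.
    """
    d_k = feat_a.shape[-1]
    scores = feat_a @ feat_b.T / np.sqrt(d_k)   # (n, n) cross-image affinity
    weights = softmax(scores, axis=-1)          # rows sum to 1 over B's tokens
    return weights @ feat_b                     # attended features, same shape as feat_a

rng = np.random.default_rng(0)
a = rng.standard_normal((16, 8))   # e.g. a 4x4 patch grid with 8 channels
b = rng.standard_normal((16, 8))
fused = cross_attention(a, b)
print(fused.shape)
```

A full transformer block would add learned query/key/value projections, multiple heads, and residual connections; this sketch keeps only the attention arithmetic itself.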
The fusion results were quantitatively and qualitatively compared with those of eight state-of-the-art algorithms. In the quantitative experiments, five evaluation metrics (four Q-family fusion metrics and the PSNR) were used to evaluate the performance of each method; the proposed method achieved values of 0.6574, 8.4572, 5.6305, 0.7341, and 89.5685, respectively, which are higher than those of the current state-of-the-art algorithms. In the qualitative experiments, difference images were used for further validation, and the near-zero residuals visually confirmed the adequacy of the proposed fusion method. Furthermore, we present additional fusion results on multifocus biomedical microscopy images to verify the reliability of the proposed method, which produces high-quality fused images.
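Of the metrics listed, the PSNR has a standard definition that can be stated concretely. The sketch below computes it for an 8-bit image pair with numpy; the reference/fused naming is illustrative, and this is not the authors' evaluation code.

```python
import numpy as np

def psnr(reference, fused, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - fused.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.full((4, 4), 100.0)
out = ref + 10.0              # uniform error of 10 gray levels -> MSE = 100
print(round(psnr(ref, out), 2))  # -> 28.13
```

Higher PSNR means a smaller residual between the fused output and the reference, which is also what the near-zero difference images in the qualitative experiments show.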
Multifocus biomedical microscopic image fusion can be accurately and effectively achieved by devising a deep convolutional neural network with joint cross-attention and spatial attention mechanisms.