Liu Yao, Chen Wujie, Huang Zhen-Li, Wang ZhengXia
School of Computer Science and Technology, Hainan University, Haikou 570228, China.
Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Sanya 570228, China.
Biomed Opt Express. 2025 Jul 25;16(8):3378-3394. doi: 10.1364/BOE.562137. eCollection 2025 Aug 1.
Fluorescence imaging and phase-contrast imaging are two important imaging techniques in molecular biology research. Green fluorescent protein (GFP) images localize high-intensity protein regions in Arabidopsis cells, while phase-contrast images reveal cellular structure. Fusing the two modalities facilitates studies of protein localization and interaction. However, traditional multimodal optical imaging systems rely on complex optical components and cumbersome operating procedures. Although deep learning has offered new solutions for multimodal image fusion, existing methods are usually built on convolution operations, which ignore long-range contextual information and lose fine detail. To address these limitations, we propose UCBFusion, an unsupervised cross-modal biomedical image fusion framework. First, we design a dual-branch feature extraction module that retains the local detail of each modality and prevents the loss of texture during convolution. Second, we introduce a context-aware attention fusion module that strengthens global feature extraction and establishes long-range relationships. Finally, the framework adopts an interactive parallel architecture for the interactive fusion of local and global information. Experiments on Arabidopsis thaliana datasets and other image fusion tasks show that UCBFusion outperforms state-of-the-art algorithms in both fusion quality and generalization across different types of datasets. This study provides a strong impetus for Arabidopsis thaliana research.
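The abstract does not include code; as a rough illustration of the architecture it describes, the minimal PyTorch sketch below implements its three ideas: per-modality convolutional branches for local detail, cross-modal multi-head attention for long-range context, and a parallel merge of both feature streams. All class names, channel widths, and layer choices here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LocalDetailBranch(nn.Module):
    """Convolutional branch preserving per-modality local texture
    (hypothetical stand-in for the dual-branch feature extraction module)."""
    def __init__(self, in_ch=1, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.1),
        )
    def forward(self, x):
        return self.net(x)

class GlobalAttentionFusion(nn.Module):
    """Cross-modal multi-head attention over flattened feature maps: a generic
    stand-in for the context-aware attention fusion module (long-range context)."""
    def __init__(self, ch=32, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)
        self.norm = nn.LayerNorm(ch)
    def forward(self, fa, fb):
        b, c, h, w = fa.shape
        qa = fa.flatten(2).transpose(1, 2)  # (B, H*W, C): queries from modality A
        kb = fb.flatten(2).transpose(1, 2)  # keys/values from modality B
        out, _ = self.attn(qa, kb, kb)
        out = self.norm(out + qa)           # residual connection + normalization
        return out.transpose(1, 2).reshape(b, c, h, w)

class UCBFusionSketch(nn.Module):
    """Interactive parallel layout: local conv branches and a global attention
    branch run side by side, then their features are merged into a fused image."""
    def __init__(self, ch=32):
        super().__init__()
        self.local_a = LocalDetailBranch(1, ch)   # GFP branch
        self.local_b = LocalDetailBranch(1, ch)   # phase-contrast branch
        self.global_fuse = GlobalAttentionFusion(ch)
        self.head = nn.Sequential(
            nn.Conv2d(3 * ch, ch, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(ch, 1, 1), nn.Sigmoid(),
        )
    def forward(self, gfp, phase):
        fa, fb = self.local_a(gfp), self.local_b(phase)
        fg = self.global_fuse(fa, fb)             # long-range cross-modal context
        return self.head(torch.cat([fa, fb, fg], dim=1))

if __name__ == "__main__":
    model = UCBFusionSketch()
    gfp = torch.rand(1, 1, 128, 128)    # green fluorescent protein channel
    phase = torch.rand(1, 1, 128, 128)  # phase-contrast channel
    print(model(gfp, phase).shape)      # torch.Size([1, 1, 128, 128])
```

In an unsupervised setting such as the one the abstract describes, a network like this would typically be trained with similarity losses computed against the two source images (e.g., structural-similarity and gradient terms) rather than against ground-truth fused images; the specific losses used by UCBFusion are not stated in the abstract.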