

Unsupervised cross-modal biomedical image fusion framework with dual-path detail enhancement and global context awareness.

Authors

Liu Yao, Chen Wujie, Huang Zhen-Li, Wang ZhengXia

Affiliations

School of Computer Science and Technology, Hainan University, Haikou 570228, China.

Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Sanya 570228, China.

Publication

Biomed Opt Express. 2025 Jul 25;16(8):3378-3394. doi: 10.1364/BOE.562137. eCollection 2025 Aug 1.

Abstract

Fluorescence imaging and phase-contrast imaging are two important imaging techniques in molecular biology research. Green fluorescent protein images can locate high-intensity protein regions in Arabidopsis cells, while phase-contrast images provide information on cellular structures. The fusion of these two types of images facilitates protein localization and interaction studies. However, traditional multimodal optical imaging systems have complex optical components and cumbersome operations. Although deep learning has provided new solutions for multimodal image fusion, existing methods are usually based on convolution operations, which have limitations such as ignoring long-range contextual information and losing detailed information. To address these limitations, we propose an unsupervised cross-modal biomedical image fusion framework, called UCBFusion. First, we design a dual-branch feature extraction module to retain the local detail information of each modality and prevent the loss of texture details during convolution operations. Second, we introduce a context-aware attention fusion module to enhance the ability to extract global features and establish long-range relationships. Lastly, our framework adopts an interactive parallel architecture to achieve the interactive fusion of local and global information. Experimental results on Arabidopsis thaliana datasets and other image fusion tasks indicate that UCBFusion achieves superior fusion results compared with state-of-the-art algorithms, in terms of performance and generalization ability across different types of datasets. This study provides a crucial driving force for the development of Arabidopsis thaliana research.


Graphical abstract: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/255c/12339351/3b94d4d92d34/boe-16-8-3378-g001.jpg

Similar Articles

1
Unsupervised cross-modal biomedical image fusion framework with dual-path detail enhancement and global context awareness.
Biomed Opt Express. 2025 Jul 25;16(8):3378-3394. doi: 10.1364/BOE.562137. eCollection 2025 Aug 1.
5
Image dehazing algorithm based on deep transfer learning and local mean adaptation.
Sci Rep. 2025 Jul 31;15(1):27956. doi: 10.1038/s41598-025-13613-z.
6
Influence of early through late fusion on pancreas segmentation from imperfectly registered multimodal magnetic resonance imaging.
J Med Imaging (Bellingham). 2025 Mar;12(2):024008. doi: 10.1117/1.JMI.12.2.024008. Epub 2025 Apr 26.
7
DGCFNet: Dual Global Context Fusion Network for remote sensing image semantic segmentation.
PeerJ Comput Sci. 2025 Mar 27;11:e2786. doi: 10.7717/peerj-cs.2786. eCollection 2025.
8
SG-Fusion: A swin-transformer and graph convolution-based multi-modal deep neural network for glioma prognosis.
Artif Intell Med. 2024 Nov;157:102972. doi: 10.1016/j.artmed.2024.102972. Epub 2024 Aug 31.
9
A medical image classification method based on self-regularized adversarial learning.
Med Phys. 2024 Nov;51(11):8232-8246. doi: 10.1002/mp.17320. Epub 2024 Jul 30.
10
Leveraging a foundation model zoo for cell similarity search in oncological microscopy across devices.
Front Oncol. 2025 Jun 18;15:1480384. doi: 10.3389/fonc.2025.1480384. eCollection 2025.

