Zhang Wenxiang, Wang Chunmeng, Zhu Jun
School of Computer Engineering, Jinling Institute of Technology, Nanjing 211169, China.
Sensors (Basel). 2025 Apr 16;25(8):2500. doi: 10.3390/s25082500.
Recently, deep learning-based multi-exposure image fusion methods have been widely explored due to their high efficiency and adaptability. However, most existing multi-exposure image fusion methods lack sufficient feature-extraction capacity to recover information and details in extremely exposed regions. To address this problem, we propose a multi-exposure image fusion method based on a low-resolution context aggregation attention network (MEF-CAAN). First, we feed low-resolution versions of the input images to the CAAN to predict their low-resolution weight maps. Then, high-resolution weight maps are generated by guided filtering for upsampling (GFU). Finally, the high-resolution fused image is produced by a weighted summation. The proposed network is unsupervised and adaptively adjusts channel weights to achieve better feature extraction. Experimental results show that our method outperforms existing state-of-the-art methods in both quantitative and qualitative evaluations.
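The fusion pipeline described above can be sketched as follows. This is a minimal illustration only: the learned CAAN is replaced by a simple hand-crafted well-exposedness score, and guided-filter upsampling (GFU) is replaced by nearest-neighbor upsampling, since the paper's trained network and filter parameters are not given here. All function names are hypothetical.

```python
import numpy as np

def predict_weight_maps_lowres(images_lr):
    # Hypothetical stand-in for the CAAN: score each low-res pixel by its
    # distance from mid-gray (a classic well-exposedness measure), then
    # normalize across exposures so the weights sum to 1 per pixel.
    weights = [np.exp(-((img.mean(axis=-1) - 0.5) ** 2) / 0.08)
               for img in images_lr]
    w = np.stack(weights)                                  # (N, h, w)
    return w / (w.sum(axis=0, keepdims=True) + 1e-8)

def upsample_nearest(w, factor):
    # Stand-in for GFU: plain nearest-neighbor upsampling of a low-res
    # weight map (the paper uses guided filtering to preserve edges).
    return np.repeat(np.repeat(w, factor, axis=0), factor, axis=1)

def fuse(images_hr, factor=4):
    # 1) downsample inputs, 2) predict low-res weight maps,
    # 3) upsample the maps, 4) fuse by weighted summation.
    images_lr = [img[::factor, ::factor] for img in images_hr]
    w_lr = predict_weight_maps_lowres(images_lr)
    fused = np.zeros_like(images_hr[0], dtype=np.float64)
    for img, w in zip(images_hr, w_lr):
        fused += upsample_nearest(w, factor)[..., None] * img
    return fused

# Two synthetic "exposures" of an 8x8 RGB scene with values in [0, 1]
rng = np.random.default_rng(0)
under = rng.uniform(0.0, 0.3, (8, 8, 3))   # under-exposed
over = rng.uniform(0.7, 1.0, (8, 8, 3))    # over-exposed
out = fuse([under, over], factor=4)
print(out.shape)
```

Because the per-pixel weights are normalized to sum to one, the fused output stays within the input range; the learned attention network in the actual method serves the same role as the hand-crafted score here, but adapts its weights per channel.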