Wan Hui, Tang Xianlun, Zhu Zhiqin, Li Weisheng
College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China.
Entropy (Basel). 2021 Oct 19;23(10):1362. doi: 10.3390/e23101362.
Multi-focus image fusion is an important method for combining the focused parts of multiple source images into a single all-in-focus image. The key to this problem lies in accurately detecting the focus regions, especially when the source images captured by cameras exhibit anisotropic blur and misregistration. This paper proposes a new multi-focus image fusion method based on the multi-scale decomposition of complementary information. First, the method applies two structurally complementary decomposition schemes, one large-scale and one small-scale, performing a two-scale, double-layer singular value decomposition of each image to obtain low-frequency and high-frequency components. The low-frequency components are then fused by a rule that integrates local image energy with edge energy. The high-frequency components are fused by a parameter-adaptive pulse-coupled neural network (PA-PCNN) model; according to the feature information contained in each decomposition layer of the high-frequency components, different detail features are selected as the external stimulus input of the PA-PCNN. Finally, from the two structurally complementary decompositions of the source images and the fusion of the high- and low-frequency components, two initial decision maps with complementary information are obtained. Refining these initial decision maps yields the final fusion decision map, which completes the image fusion. The proposed method is compared with 10 state-of-the-art approaches to verify its effectiveness. The experimental results show that it distinguishes focused from non-focused areas more accurately for both registered and misregistered source images, and its subjective and objective evaluation indicators are slightly better than those of the existing methods.
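The overall pipeline (two-scale decomposition, per-band fusion, decision map) can be illustrated with a deliberately simplified sketch. Note the substitutions: a box-filter low-pass stands in for the paper's two-scale singular value decomposition, and a plain high-frequency local-energy comparison stands in for its combined local/edge-energy rule and PA-PCNN; all function names here are illustrative, not from the paper.

```python
import numpy as np

def two_scale_decompose(img, k=5):
    # Low-frequency part: local mean over a k x k window (box filter);
    # high-frequency part: the residual. A simple stand-in for the
    # paper's SVD-based two-scale decomposition.
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    low = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            low[i, j] = padded[i:i + k, j:j + k].mean()
    return low, img - low

def local_energy(x, k=5):
    # Sum of squared values over a k x k neighbourhood of each pixel.
    pad = k // 2
    padded = np.pad(x ** 2, pad, mode="reflect")
    e = np.zeros_like(x, dtype=float)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            e[i, j] = padded[i:i + k, j:j + k].sum()
    return e

def fuse(img_a, img_b):
    la, ha = two_scale_decompose(img_a)
    lb, hb = two_scale_decompose(img_b)
    # Decision map from high-frequency local energy: at each pixel,
    # pick the source that is locally sharper.
    mask = local_energy(ha) >= local_energy(hb)
    low = np.where(mask, la, lb)
    high = np.where(mask, ha, hb)
    return low + high
```

On a synthetic pair where each source is sharp in one half and flat in the other, the decision map selects the sharp half of each source away from the seam, reconstructing the underlying pattern.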
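The high-frequency fusion step relies on a pulse-coupled neural network. The following is a minimal simplified PCNN, not the paper's parameter-adaptive variant: the decay, linking, and threshold parameters are fixed illustrative values rather than the adaptively estimated ones of a PA-PCNN. It shows the core mechanism, in which accumulated firing counts serve as a salience map.

```python
import numpy as np

def neighbour_fires(y):
    # Sum of firing outputs over the 8-neighbourhood (zero padding at borders).
    p = np.pad(y, 1)
    return (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:]
            + p[1:-1, :-2] + p[1:-1, 2:]
            + p[2:, :-2] + p[2:, 1:-1] + p[2:, 2:])

def pcnn_fire_map(stim, iterations=30, alpha_f=0.1, alpha_e=1.0,
                  beta=0.2, v_e=20.0):
    # Simplified PCNN: each neuron integrates its external stimulus
    # (feeding input), is modulated by its neighbours' firing (linking),
    # and fires when internal activity exceeds a decaying dynamic
    # threshold; firing resets the threshold upward. Strong stimuli
    # fire more often, so the accumulated count measures salience.
    stim = stim / (stim.max() + 1e-12)
    f = np.zeros_like(stim, dtype=float)           # feeding input
    e = np.ones_like(stim, dtype=float)            # dynamic threshold
    y = np.zeros_like(stim, dtype=float)           # firing output
    fire_count = np.zeros_like(stim, dtype=float)
    for _ in range(iterations):
        f = np.exp(-alpha_f) * f + stim
        u = f * (1.0 + beta * neighbour_fires(y))  # internal activity
        y = (u > e).astype(float)
        e = np.exp(-alpha_e) * e + v_e * y
        fire_count += y
    return fire_count
```

In a fusion rule of this kind, each pixel of the fused high-frequency band would be taken from the source image whose fire count is larger at that pixel, with the detail features fed in as the external stimulus `stim`.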