College of Information Science and Technology, Gansu Agricultural University, Lanzhou 730070, China.
School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China.
Sensors (Basel). 2023 Feb 23;23(5):2488. doi: 10.3390/s23052488.
In heterogeneous image fusion problems, the time-of-flight and visible-light images collected by binocular acquisition systems in orchard environments are produced by different imaging mechanisms, and determining how to enhance fusion quality is key to the solution. A shortcoming of the pulse coupled neural network (PCNN) model is that its parameters are limited by manual, experience-based settings and its iteration cannot terminate adaptively. These limitations are obvious during the ignition process: the impact of image changes and fluctuations on the results is ignored, and pixel artifacts, area blurring, and unclear edges occur. Aiming at these problems, an image fusion method in the pulse coupled neural network transform domain, guided by a saliency mechanism, is proposed. A non-subsampled shearlet transform is used to decompose the accurately registered images; the time-of-flight low-frequency component, after multiple ignition-based segmentations by the PCNN, is simplified to a first-order Markov situation, and the saliency function is defined as first-order Markov mutual information to measure the termination condition. A new momentum-driven multi-objective artificial bee colony algorithm is used to optimize the parameters of the linking-channel feedback term, the linking strength, and the dynamic-threshold attenuation factor. The low-frequency components of the time-of-flight and color images, after multiple ignition-based segmentations by the PCNN, are fused using a weighted-average rule, and the high-frequency components are fused using improved bilateral filters. The results show that, according to nine objective image-evaluation indicators, the proposed algorithm achieves the best fusion effect on the time-of-flight confidence image and the corresponding visible-light image collected in natural scenes, and it is suitable for heterogeneous image fusion in complex orchard environments.
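To make the PCNN stage concrete, the following is a minimal sketch of a simplified pulse coupled neural network iteration of the kind the abstract describes: each neuron's feeding input is its pixel intensity, the linking input sums neighbouring firings, and a dynamic threshold decays by an attenuation factor and jumps after a neuron fires. The parameter values (`beta`, `alpha`, `V`) and the linking kernel are illustrative defaults, not the paper's optimized settings (which the method obtains via its artificial bee colony algorithm).

```python
import numpy as np

def pcnn_fire_maps(img, beta=0.2, alpha=0.7, V=20.0, iters=8):
    """Simplified PCNN: return the binary firing map of each iteration.

    beta  -- linking strength (illustrative; the paper optimizes it)
    alpha -- dynamic-threshold attenuation factor
    V     -- threshold amplitude added after a neuron fires
    """
    img = img.astype(float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)  # normalize to [0, 1]
    h, w = img.shape
    Y = np.zeros((h, w))      # firing output of the previous iteration
    theta = np.ones((h, w))   # dynamic threshold
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])  # linking weights to 8-neighbours
    maps = []
    for _ in range(iters):
        # linking input: weighted sum of neighbouring firings (zero-padded)
        P = np.pad(Y, 1)
        L = sum(kernel[i, j] * P[i:i + h, j:j + w]
                for i in range(3) for j in range(3))
        U = img * (1.0 + beta * L)              # modulated internal activity
        Y = (U > theta).astype(float)           # fire where activity exceeds threshold
        theta = np.exp(-alpha) * theta + V * Y  # decay, then raise fired thresholds
        maps.append(Y.copy())
    return maps
```

Successive firing maps group pixels of similar intensity, which is what makes the multiple ignition-based segmentations usable for the low-frequency fusion step; the adaptive termination of this loop is what the paper's first-order Markov mutual information criterion decides.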
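The two fusion rules can likewise be sketched. This is an illustrative stand-in, not the paper's method: the low-frequency weight `w_a` is a placeholder for the saliency-derived weighting, and a plain bilateral filter replaces the paper's improved variant, whose exact form the abstract does not give.

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=1.5, sigma_r=0.1):
    """Plain bilateral filter: spatial Gaussian times range Gaussian.
    A generic stand-in for the paper's improved bilateral filter."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros((h, w))
    norm = np.zeros((h, w))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy:radius + dy + h,
                          radius + dx:radius + dx + w]
            ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))     # spatial weight
            wr = np.exp(-((shifted - img) ** 2) / (2 * sigma_r ** 2))  # range weight
            out += ws * wr * shifted
            norm += ws * wr
    return out / norm

def fuse_low(low_a, low_b, w_a=0.5):
    """Weighted-average rule for the low-frequency sub-bands (w_a illustrative)."""
    return w_a * low_a + (1.0 - w_a) * low_b

def fuse_high(high_a, high_b):
    """Choose-max rule on bilateral-smoothed activity maps of the
    high-frequency sub-bands."""
    act_a = bilateral(np.abs(high_a))
    act_b = bilateral(np.abs(high_b))
    return np.where(act_a >= act_b, high_a, high_b)
```

Smoothing the absolute high-frequency coefficients before the choose-max decision suppresses isolated noisy coefficients, which is the usual motivation for filtering the activity map rather than the coefficients themselves.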