Zou Li-hui, Zhang Dezheng, Wulamu Aziguli
School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China; Beijing Key Laboratory of Knowledge Engineering for Materials Science, Beijing 100083, China.
ScientificWorldJournal. 2014 Feb 3;2014:981724. doi: 10.1155/2014/981724. eCollection 2014.
Dynamic scene stitching remains challenging: when multiple motion interferences exist in the image acquisition system, it is difficult to preserve the global key information without loss or deformation. Object clipping, motion blur, and other synthetic defects easily appear in the final stitched image. In this work, we proceed from the human visual cognitive mechanism and construct a hybrid-saliency-based cognitive model to automatically guide video volume stitching. The model combines three kinds of visual stimuli: intensity, edge-contour, and scene-depth saliencies. Together with the manifold-based mosaicing framework, dynamic scene stitching is formulated as a cut-path optimization problem in a constructed space-time graph. The cutting energy function for column-width selection is defined according to the proposed visual cognition model, and the optimal cut path minimizes the cognitive saliency difference throughout the video volume. Experimental results show that the method effectively avoids synthetic defects caused by different motion interferences and summarizes the key contents of the scene without loss. The proposed method fully exploits the human visual cognitive mechanism for stitching and is of high practical value for environmental surveillance and related applications.
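The two core ideas of the abstract — a hybrid saliency map fusing intensity, edge, and depth cues, and a temporally smooth minimum-cost cut path through a space-time energy volume — can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the function names, the equal-ish cue weights, and the dynamic-programming seam formulation (columns may shift by at most one between consecutive frames) are all assumptions introduced here for clarity.

```python
import numpy as np

def hybrid_saliency(intensity, edges, depth, w=(0.4, 0.3, 0.3)):
    """Fuse three saliency cues into one map (weights are illustrative assumptions)."""
    def norm(x):
        # normalize each cue to [0, 1] so the weights are comparable
        x = x.astype(float)
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x, dtype=float)
    return w[0] * norm(intensity) + w[1] * norm(edges) + w[2] * norm(depth)

def min_cost_cut_path(energy):
    """Dynamic-programming cut path through a (frames x columns) energy map.

    The cut column may shift by at most one between consecutive frames,
    which enforces temporal smoothness; the returned path minimizes the
    accumulated saliency-difference energy over the whole volume.
    """
    T, C = energy.shape
    cost = energy.astype(float).copy()   # cost[t, c] = cheapest path ending at (t, c)
    back = np.zeros((T, C), dtype=int)   # backpointers for path recovery
    for t in range(1, T):
        for c in range(C):
            lo, hi = max(0, c - 1), min(C, c + 2)
            prev = cost[t - 1, lo:hi]
            k = int(np.argmin(prev))
            back[t, c] = lo + k
            cost[t, c] = energy[t, c] + prev[k]
    # backtrack from the cheapest endpoint in the last frame
    path = [int(np.argmin(cost[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

On a toy 3-frame energy map where the low-energy column drifts one step per frame, the recovered path follows that drift, illustrating how the optimization trades off per-frame energy against temporal continuity.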