Fang Yuming, Zhang Chi, Huang Hanqin, Lei Jianjun
IEEE Trans Image Process. 2019 Nov;28(11):5253-5265. doi: 10.1109/TIP.2019.2916766. Epub 2019 May 20.
Visual attention is an important mechanism in the human visual system (HVS), and numerous saliency detection algorithms have recently been designed for 2D images and video. However, research on fixation detection for stereoscopic video remains limited and challenging because of the complicated depth and motion information involved. In this paper, we design a novel multi-module fully convolutional network (MM-FCN) for fixation detection in stereoscopic video. Specifically, we design a fully convolutional network for spatial saliency prediction (S-FCN), in which the initial spatial saliency map of the stereoscopic video is learned from an object-detection image database. Furthermore, a fully convolutional network for temporal saliency prediction (T-FCN) is constructed by combining the saliency results from the S-FCN with motion information from the video frames. Finally, a fully convolutional network for depth fixation prediction (D-FCN) is designed to compute the final fixation map of the stereoscopic video by learning depth features together with the spatiotemporal features from the T-FCN. Experimental results show that the proposed MM-FCN predicts fixations for stereoscopic video more effectively and efficiently than other related fixation prediction methods.
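The abstract describes a staged pipeline in which each module consumes the previous module's saliency output plus one additional cue (motion, then depth). The following is a minimal sketch of that data flow only; the stage functions below are hypothetical stand-ins (simple per-pixel fusions), not the paper's actual convolutional networks, and the 50/50 fusion weights are illustrative assumptions.

```python
# Hypothetical data-flow sketch of the MM-FCN pipeline. Saliency maps are
# modeled as flat lists of per-pixel scores in [0, 1]; the real S-FCN,
# T-FCN, and D-FCN are learned fully convolutional networks.

def s_fcn(frame):
    # Spatial stage (stand-in): clamp per-pixel scores to [0, 1].
    return [min(1.0, max(0.0, p)) for p in frame]

def t_fcn(spatial_map, motion):
    # Temporal stage (stand-in): fuse spatial saliency with motion
    # magnitude; equal weights are an illustrative assumption.
    return [0.5 * s + 0.5 * m for s, m in zip(spatial_map, motion)]

def d_fcn(spatiotemporal_map, depth):
    # Depth stage (stand-in): fuse spatiotemporal saliency with depth
    # features to produce the final fixation map.
    return [0.5 * st + 0.5 * d for st, d in zip(spatiotemporal_map, depth)]

def mm_fcn(frame, motion, depth):
    # Chain the three modules as the abstract describes:
    # S-FCN -> T-FCN (adds motion) -> D-FCN (adds depth).
    spatial = s_fcn(frame)
    spatiotemporal = t_fcn(spatial, motion)
    return d_fcn(spatiotemporal, depth)

fixation = mm_fcn([0.2, 0.9, 0.4], [0.1, 0.8, 0.0], [0.5, 0.7, 0.3])
```

The point of the sketch is the composition order: depth is incorporated last, on top of features that already combine spatial appearance and motion.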