
A Spatial-Temporal Recurrent Neural Network for Video Saliency Prediction.

Authors

Zhang Kao, Chen Zhenzhong, Liu Shan

Publication

IEEE Trans Image Process. 2021;30:572-587. doi: 10.1109/TIP.2020.3036749. Epub 2020 Nov 24.

Abstract

In this paper, a recurrent neural network is designed for video saliency prediction considering spatial-temporal features. In our work, video frames are routed through a static network for spatial features and a dynamic network for temporal features. For spatial-temporal feature integration, a novel select-and-reweight fusion model is proposed, which automatically learns and adjusts the fusion weights according to the spatial and temporal features of different scenes. Finally, an attention-aware convolutional long short-term memory (ConvLSTM) network is developed to predict salient regions from the features extracted across consecutive frames and to generate the final saliency map for each video frame. The proposed method is compared with state-of-the-art saliency models on five public video saliency benchmark datasets. The experimental results demonstrate that our model achieves competitive performance on video saliency prediction.
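The select-and-reweight fusion described in the abstract can be thought of as a learned per-pixel gate that blends the spatial and temporal feature maps. Below is a minimal NumPy sketch of that idea; the scalar gate parameters `w_s`, `w_t`, and `b` are hypothetical stand-ins for the convolutional weights the actual model learns end-to-end, so this is an illustration of the fusion principle, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def select_reweight_fuse(spatial, temporal, w_s, w_t, b):
    """Blend spatial and temporal feature maps with a learned gate.

    The gate lies in (0, 1) at every pixel: values near 1 favor the
    spatial (static-network) features, values near 0 favor the
    temporal (dynamic-network) features. In the real model the gate
    would come from learned convolutions; here w_s, w_t, b are
    hypothetical scalars standing in for those weights.
    """
    gate = sigmoid(w_s * spatial + w_t * temporal + b)
    # Per-pixel convex combination of the two feature streams.
    return gate * spatial + (1.0 - gate) * temporal

rng = np.random.default_rng(0)
H, W = 4, 4
spatial = rng.random((H, W))    # stand-in for static-network features
temporal = rng.random((H, W))   # stand-in for dynamic-network features
fused = select_reweight_fuse(spatial, temporal, w_s=0.5, w_t=0.5, b=0.0)
```

Because the gate produces a convex combination at each pixel, the fused map always stays within the elementwise range of the two input streams, which keeps the fusion numerically stable regardless of scene content.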

