Chen Haoran, Lin Ke, Maye Alexander, Li Jianmin, Hu Xiaolin
The State Key Laboratory of Intelligent Technology and Systems, Department of Computer Science and Technology, Beijing National Research Center for Information Science and Technology, Institute for Artificial Intelligence, Tsinghua University, Beijing, China.
Samsung Research China, Beijing, China.
Front Robot AI. 2020 Sep 30;7:475767. doi: 10.3389/frobt.2020.475767. eCollection 2020.
Given the features of a video, recurrent neural networks can be used to automatically generate a caption for it. Existing video captioning methods have at least three limitations. First, semantic information has been widely used to boost the performance of video captioning models, but existing networks often fail to provide meaningful semantic features. Second, the Teacher Forcing algorithm is often used to optimize video captioning models, but different strategies guide word generation during training and inference, leading to poor performance. Third, current video captioning models tend to generate relatively short captions that describe video content inadequately. To resolve these three problems, we propose three corresponding improvements. First, we propose a metric for comparing the quality of semantic features, and use appropriate features as input to a semantic detection network (SDN) of adequate complexity in order to generate meaningful semantic features for videos. Second, we apply a scheduled sampling strategy that gradually shifts training from a teacher-guided manner toward a more self-teaching manner. Finally, the ordinary log-probability loss function is normalized by sentence length, alleviating the tendency to generate short sentences. Our model achieves better results than previous models on the YouTube2Text dataset and is competitive with the previous best model on the MSR-VTT dataset.
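The last two improvements can be sketched as follows. This is a minimal illustration, not the authors' implementation: the inverse-sigmoid decay schedule and the hyperparameter `k` are assumptions chosen to show the idea of gradually replacing ground-truth tokens with model samples, and of dividing the sentence-level negative log-probability by caption length.

```python
import math
import random

def teacher_prob(step, k=10.0):
    """Scheduled sampling: probability of feeding the ground-truth token
    at a given training step. Uses an inverse-sigmoid decay (assumed
    schedule; k controls how quickly the model becomes self-teaching)."""
    return k / (k + math.exp(step / k))

def next_input(ground_truth_token, sampled_token, step):
    """Pick the decoder's next input: teacher-guided early in training,
    increasingly the model's own sample later on."""
    if random.random() < teacher_prob(step):
        return ground_truth_token
    return sampled_token

def length_normalized_loss(token_log_probs):
    """Negative log-probability of the caption divided by its length,
    so longer captions are not penalized merely for having more terms."""
    return -sum(token_log_probs) / len(token_log_probs)
```

Under this schedule, `teacher_prob` starts near 1 and decays toward 0, so early training matches Teacher Forcing while later training approximates inference-time self-sampling; the length normalization removes the bias of plain summed log-probability toward short captions.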