IEEE Trans Image Process. 2022;31:5559-5569. doi: 10.1109/TIP.2022.3195643. Epub 2022 Aug 26.
Generating multi-sentence descriptions for video is considered one of the most complex tasks in computer vision and natural language understanding due to the intricate nature of video-text data. With recent advances in deep learning, multi-sentence video description has achieved impressive progress. However, learning rich temporal context representations of visual sequences and modelling long-term dependencies in natural language descriptions remain challenging problems. Towards this goal, we propose an Attentive Atrous Pyramid network and Memory Incorporated Transformer (AAP-MIT) for multi-sentence video description. The proposed AAP-MIT builds an effective representation of the visual scene by distilling the most informative and discriminative spatio-temporal features of the video at multiple granularities, and then generates highly summarized descriptions. Specifically, we construct AAP-MIT from three major components: i) a temporal pyramid network, which builds a temporal feature hierarchy at multiple scales by convolving local features along the temporal dimension, ii) a temporal correlation attention that learns the relations among different temporal video segments, and iii) a memory incorporated transformer, which augments the language transformer with a new memory block to generate highly descriptive natural language sentences. Finally, extensive experiments on the ActivityNet Captions and YouCookII datasets demonstrate the substantial superiority of AAP-MIT over existing approaches.
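To make the multi-scale temporal idea concrete, the sketch below shows a generic atrous (dilated) temporal pyramid over pre-extracted frame features, with a simple attention that fuses the scales. This is a minimal illustration under assumed settings, not the paper's implementation: the class name `TemporalAtrousPyramid`, the dilation rates, and the scale-fusion attention are hypothetical stand-ins for the abstract's temporal pyramid network and temporal correlation attention.

```python
import torch
import torch.nn as nn


class TemporalAtrousPyramid(nn.Module):
    """Minimal sketch of a multi-scale dilated temporal pyramid.

    Hypothetical module names and dilation rates; the exact AAP-MIT
    configuration is not specified in the abstract.
    """

    def __init__(self, feat_dim: int, dilations=(1, 2, 4)):
        super().__init__()
        # One temporal (1D) convolution per dilation rate, i.e. per scale.
        self.branches = nn.ModuleList(
            nn.Conv1d(feat_dim, feat_dim, kernel_size=3,
                      padding=d, dilation=d)
            for d in dilations
        )
        # Simple attention over the stacked scales, standing in for the
        # paper's temporal correlation attention.
        self.attn = nn.Linear(feat_dim, 1)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, feat_dim) pre-extracted frame features
        x = frames.transpose(1, 2)                  # (B, C, T) for Conv1d
        scales = [torch.relu(b(x)).transpose(1, 2)  # back to (B, T, C)
                  for b in self.branches]
        stacked = torch.stack(scales, dim=2)        # (B, T, S, C)
        weights = torch.softmax(self.attn(stacked), dim=2)  # weight scales
        return (weights * stacked).sum(dim=2)       # (B, T, C) fused features


if __name__ == "__main__":
    feats = torch.randn(2, 64, 512)    # 2 clips, 64 frames, 512-d features
    pyramid = TemporalAtrousPyramid(512)
    print(pyramid(feats).shape)        # torch.Size([2, 64, 512])
```

Because each branch uses `padding = dilation` with a kernel size of 3, every scale preserves the temporal length, so the outputs can be stacked and fused per time step; the fused sequence could then feed a transformer-style decoder such as the memory incorporated transformer described above.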