Structured Attention Composition for Temporal Action Localization

Authors

Yang Le, Han Junwei, Zhao Tao, Liu Nian, Zhang Dingwen

Publication

IEEE Trans Image Process. 2022 Jun 13;PP. doi: 10.1109/TIP.2022.3180925.

Abstract

Temporal action localization aims at localizing action instances in untrimmed videos. Existing works have designed various effective modules to precisely localize action instances based on appearance and motion features. However, by treating these two kinds of features with equal importance, previous works cannot take full advantage of each modality, leaving the learned model sub-optimal. To tackle this issue, we make an early effort to study temporal action localization from the perspective of multi-modality feature learning, based on the observation that different actions exhibit specific preferences for the appearance or motion modality. Specifically, we build a novel structured attention composition module. Unlike conventional attention, the proposed module does not infer frame attention and modality attention independently. Instead, by casting the relationship between the modality attention and the frame attention as an attention assignment process, the structured attention composition module learns to encode the frame-modality structure and uses it to regularize the inferred frame attention and modality attention, respectively, based on optimal transport theory. The final frame-modality attention is obtained by composing the two individual attentions. The proposed structured attention composition module can be deployed as a plug-and-play component in existing action localization frameworks. Extensive experiments on two widely used benchmarks show that structured attention composition consistently improves four state-of-the-art temporal action localization methods and establishes new state-of-the-art performance on THUMOS14.
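The abstract describes composing frame attention and modality attention by treating their relationship as an attention assignment problem solved with optimal transport. Below is a minimal sketch of that idea in PyTorch, using entropic optimal transport (Sinkhorn iterations) to couple frames and modalities. Everything here is an assumption for illustration: the tensor shapes, the random stand-ins for learned scores (frame_att, modality_marginal, cost), and the fusion step do not reflect the authors' implementation.

    import torch
    import torch.nn.functional as F

    def sinkhorn(cost, row_marginal, col_marginal, eps=0.5, n_iters=50):
        # Entropic optimal transport via Sinkhorn iterations. Returns a plan P
        # whose row sums approximate row_marginal and column sums approximate
        # col_marginal.
        K = torch.exp(-cost / eps)                       # (T, M) Gibbs kernel
        u = torch.ones_like(row_marginal)
        for _ in range(n_iters):
            v = col_marginal / (K.t() @ u)               # scale columns
            u = row_marginal / (K @ v)                   # scale rows
        return u.unsqueeze(1) * K * v.unsqueeze(0)       # diag(u) K diag(v)

    T, D = 100, 256                                      # frames, feature dim
    appearance = torch.randn(T, D)                       # RGB-stream features
    motion = torch.randn(T, D)                           # flow-stream features
    feats = torch.stack([appearance, motion], dim=1)     # (T, 2, D)

    # Frame attention: importance of each frame (sums to 1 over frames).
    frame_att = F.softmax(torch.randn(T), dim=0)         # stand-in for a learned head
    # Modality marginal: total attention mass each modality should receive.
    modality_marginal = torch.tensor([0.5, 0.5])         # stand-in for a learned prior

    # Assignment cost between frames and modalities; a stand-in for scores a
    # small network would predict per frame and modality.
    cost = -F.log_softmax(torch.randn(T, 2), dim=1)

    # The transport plan couples frames and modalities under both marginals,
    # giving a structured (T, 2) frame-modality attention map.
    frame_modality_att = sinkhorn(cost, frame_att, modality_marginal)

    # Reweight each modality stream per frame, then fuse across modalities.
    fused = (frame_modality_att.unsqueeze(-1) * feats).sum(dim=1)  # (T, D)

Because the plan's row and column sums are constrained to match the individual frame and modality attentions, composing them this way regularizes each attention against the other, which is the role the abstract ascribes to optimal transport.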
