
An Effective Video Transformer With Synchronized Spatiotemporal and Spatial Self-Attention for Action Recognition.

Authors

Alfasly Saghir, Chui Charles K, Jiang Qingtang, Lu Jian, Xu Chen

Publication

IEEE Trans Neural Netw Learn Syst. 2024 Feb;35(2):2496-2509. doi: 10.1109/TNNLS.2022.3190367. Epub 2024 Feb 5.

DOI: 10.1109/TNNLS.2022.3190367
PMID: 35857731
Abstract

Convolutional neural networks (CNNs) have come to dominate vision-based deep neural network structures in both image and video models over the past decade. However, convolution-free vision Transformers (ViTs) have recently outperformed CNN-based models in image recognition. Despite this progress, building and designing video Transformers have not yet obtained the same attention in research as image-based Transformers. While there have been attempts to build video Transformers by adapting image-based Transformers for video understanding, these Transformers still lack efficiency due to the large gap between CNN-based models and Transformers regarding the number of parameters and the training settings. In this work, we propose three techniques to improve video understanding with video Transformers. First, to derive better spatiotemporal feature representation, we propose a new spatiotemporal attention scheme, termed synchronized spatiotemporal and spatial attention (SSTSA), which derives the spatiotemporal features with temporal and spatial multiheaded self-attention (MSA) modules. It also preserves the best spatial attention by another spatial self-attention module in parallel, thereby resulting in an effective Transformer encoder. Second, a motion spotlighting module is proposed to embed the short-term motion of the consecutive input frames to the regular RGB input, which is then processed with a single-stream video Transformer. Third, a simple intraclass frame interlacing method of the input clips is proposed that serves as an effective video augmentation method. Finally, our proposed techniques have been evaluated and validated with a set of extensive experiments in this study. Our video Transformer outperforms its previous counterparts on two well-known datasets, Kinetics400 and Something-Something-v2.
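
The abstract describes SSTSA only at a high level. The PyTorch sketch below is one possible reading of it, not the authors' published implementation: a temporal multiheaded self-attention followed by a spatial one forms the spatiotemporal branch, a parallel spatial-only branch preserves pure spatial attention, and the two are fused before the MLP. The block structure, tensor layout, and additive fusion are all assumptions.

# Speculative sketch of a "synchronized spatiotemporal and spatial attention"
# (SSTSA) encoder block, based only on the abstract's description. Module
# names, shapes, and the fusion-by-addition step are assumptions.
import torch
import torch.nn as nn

class SSTSABlock(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        # Temporal MSA: tokens at the same spatial location attend across frames.
        self.temporal_msa = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Spatial MSA applied to the temporally mixed tokens (spatiotemporal branch).
        self.spatiotemporal_msa = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Parallel spatial-only MSA branch, preserving pure spatial attention.
        self.spatial_msa = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, N, C) -- batch, frames, patch tokens per frame, channels.
        B, T, N, C = x.shape
        h = self.norm1(x)

        # Temporal attention over T at each spatial token position.
        t_in = h.permute(0, 2, 1, 3).reshape(B * N, T, C)
        t_out, _ = self.temporal_msa(t_in, t_in, t_in)
        t_out = t_out.reshape(B, N, T, C).permute(0, 2, 1, 3)

        # Spatial attention over N within each frame, on the temporally mixed tokens.
        st_in = t_out.reshape(B * T, N, C)
        st_out, _ = self.spatiotemporal_msa(st_in, st_in, st_in)
        st_out = st_out.reshape(B, T, N, C)

        # Parallel spatial-only attention on the unmixed tokens.
        s_in = h.reshape(B * T, N, C)
        s_out, _ = self.spatial_msa(s_in, s_in, s_in)
        s_out = s_out.reshape(B, T, N, C)

        # Fuse the two branches (simple addition here) plus residual, then MLP.
        x = x + st_out + s_out
        x = x + self.mlp(self.norm2(x))
        return x

# Example: 2 clips, 8 frames, 14x14 patches, ViT-Base width.
x = torch.randn(2, 8, 196, 768)
y = SSTSABlock(768)(x)  # -> same shape (2, 8, 196, 768)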

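The abstract likewise names an intraclass frame interlacing augmentation without giving details. A minimal sketch of one plausible form follows: frames from two clips that share the same action class are interleaved into a new training clip. The alternating even/odd pattern is an assumption.

# Speculative sketch of "intraclass frame interlacing" as a video augmentation.
import torch

def interlace_clips(clip_a: torch.Tensor, clip_b: torch.Tensor) -> torch.Tensor:
    """clip_a, clip_b: (T, C, H, W) clips drawn from the same class."""
    assert clip_a.shape == clip_b.shape
    out = clip_a.clone()
    out[1::2] = clip_b[1::2]  # replace every other frame with the second clip's
    return out

# The mixed clip keeps the shared class label.
clip_a, clip_b = torch.rand(16, 3, 224, 224), torch.rand(16, 3, 224, 224)
mixed = interlace_clips(clip_a, clip_b)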

Similar articles

1. An Effective Video Transformer With Synchronized Spatiotemporal and Spatial Self-Attention for Action Recognition.
IEEE Trans Neural Netw Learn Syst. 2024 Feb;35(2):2496-2509. doi: 10.1109/TNNLS.2022.3190367. Epub 2024 Feb 5.
2. UAT: Universal Attention Transformer for Video Captioning.
Sensors (Basel). 2022 Jun 25;22(13):4817. doi: 10.3390/s22134817.
3. TGDAUNet: Transformer and GCNN based dual-branch attention UNet for medical image segmentation.
Comput Biol Med. 2023 Dec;167:107583. doi: 10.1016/j.compbiomed.2023.107583. Epub 2023 Oct 21.
4. Learning Cross-Attention Discriminators via Alternating Time-Space Transformers for Visual Tracking.
IEEE Trans Neural Netw Learn Syst. 2024 Nov;35(11):15156-15169. doi: 10.1109/TNNLS.2023.3282905. Epub 2024 Oct 29.
5. RT-ViT: Real-Time Monocular Depth Estimation Using Lightweight Vision Transformers.
Sensors (Basel). 2022 May 19;22(10):3849. doi: 10.3390/s22103849.
6. Transformer based on channel-spatial attention for accurate classification of scenes in remote sensing image.
Sci Rep. 2022 Sep 14;12(1):15473. doi: 10.1038/s41598-022-19831-z.
7. Two-Level Attention Module Based on Spurious-3D Residual Networks for Human Action Recognition.
Sensors (Basel). 2023 Feb 3;23(3):1707. doi: 10.3390/s23031707.
8. Video Summarization With Spatiotemporal Vision Transformer.
IEEE Trans Image Process. 2023;32:3013-3026. doi: 10.1109/TIP.2023.3275069. Epub 2023 May 26.
9. Deeply Coupled Convolution-Transformer With Spatial-Temporal Complementary Learning for Video-Based Person Re-Identification.
IEEE Trans Neural Netw Learn Syst. 2024 Oct;35(10):13753-13763. doi: 10.1109/TNNLS.2023.3271353. Epub 2024 Oct 7.
10. WLiT: Windows and Linear Transformer for Video Action Recognition.
Sensors (Basel). 2023 Feb 2;23(3):1616. doi: 10.3390/s23031616.

Cited by

1. LS-VIT: Vision Transformer for action recognition based on long and short-term temporal difference.
Front Neurorobot. 2024 Oct 31;18:1457843. doi: 10.3389/fnbot.2024.1457843. eCollection 2024.
2. A Dynamic Position Embedding-Based Model for Student Classroom Complete Meta-Action Recognition.
Sensors (Basel). 2024 Aug 20;24(16):5371. doi: 10.3390/s24165371.
3. WLiT: Windows and Linear Transformer for Video Action Recognition.
Sensors (Basel). 2023 Feb 2;23(3):1616. doi: 10.3390/s23031616.