


Progressive Instance-Aware Feature Learning for Compositional Action Recognition.

Authors

Yan Rui, Xie Lingxi, Shu Xiangbo, Zhang Liyan, Tang Jinhui

Publication

IEEE Trans Pattern Anal Mach Intell. 2023 Aug;45(8):10317-10330. doi: 10.1109/TPAMI.2023.3261659. Epub 2023 Jun 30.

DOI: 10.1109/TPAMI.2023.3261659
PMID: 37030795
Abstract

In order to enable the model to generalize to unseen "action-objects" (compositional action), previous methods encode multiple pieces of information (i.e., the appearance, position, and identity of visual instances) independently and concatenate them for classification. However, these methods ignore the potential supervisory role of instance information (i.e., position and identity) in the process of visual perception. To this end, we present a novel framework, namely Progressive Instance-aware Feature Learning (PIFL), to progressively extract, reason, and predict dynamic cues of moving instances from videos for compositional action recognition. Specifically, this framework extracts features from foreground instances that are likely to be relevant to human actions (Position-aware Appearance Feature Extraction in Section III-B1), performs identity-aware reasoning among instance-centric features with semantic-specific interactions (Identity-aware Feature Interaction in Section III-B2), and finally predicts instances' position from observed states to force the model into perceiving their movement (Semantic-aware Position Prediction in Section III-B3). We evaluate our approach on two compositional action recognition benchmarks, namely, Something-Else and IKEA-Assembly. Our approach achieves consistent accuracy gain beyond off-the-shelf action recognition algorithms in terms of both ground truth and detected position of instances.
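The three progressive stages described in the abstract can be sketched at a very high level. This is an illustrative assumption, not the paper's actual implementation: the feature dimensions, the dot-product attention used for "identity-aware interaction", and the squared-error motion loss below are placeholders standing in for the architecture detailed in Sections III-B1 through III-B3 of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(x, w, b):
    """Affine map applied to the last axis."""
    return x @ w + b

T, N, D = 8, 4, 16  # frames, instances per frame, feature dimension

# 1) Position-aware appearance feature extraction (Sec. III-B1, sketched):
#    fuse each foreground instance's appearance feature with an embedding
#    of its bounding-box position.
appearance = rng.standard_normal((T, N, D))
positions = rng.standard_normal((T, N, 4))          # (cx, cy, w, h) per box
W_pos, b_pos = rng.standard_normal((4, D)), np.zeros(D)
inst_feat = appearance + linear(positions, W_pos, b_pos)   # (T, N, D)

# 2) Identity-aware feature interaction (Sec. III-B2, sketched):
#    let instance-centric features attend to each other within a frame.
scores = inst_feat @ inst_feat.transpose(0, 2, 1) / np.sqrt(D)  # (T, N, N)
attn = np.exp(scores - scores.max(-1, keepdims=True))
attn /= attn.sum(-1, keepdims=True)                 # softmax over instances
interacted = attn @ inst_feat                       # (T, N, D)

# 3) Semantic-aware position prediction (Sec. III-B3, sketched):
#    regress each instance's next-frame box from its observed state,
#    forcing the features to encode instance movement.
W_pred, b_pred = rng.standard_normal((D, 4)), np.zeros(4)
pred_next = linear(interacted[:-1], W_pred, b_pred)  # (T-1, N, 4)
motion_loss = np.mean((pred_next - positions[1:]) ** 2)
```

In the actual framework, the position-prediction objective acts as auxiliary supervision alongside the action-classification loss, which is what distinguishes PIFL from methods that merely concatenate appearance, position, and identity features.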


Similar Articles

1. Progressive Instance-Aware Feature Learning for Compositional Action Recognition.
IEEE Trans Pattern Anal Mach Intell. 2023 Aug;45(8):10317-10330. doi: 10.1109/TPAMI.2023.3261659. Epub 2023 Jun 30.
2. Semantic-Disentangled Transformer With Noun-Verb Embedding for Compositional Action Recognition.
IEEE Trans Image Process. 2024;33:297-309. doi: 10.1109/TIP.2023.3341297. Epub 2023 Dec 21.
3. Compositional action recognition with multi-view feature fusion.
PLoS One. 2022 Apr 14;17(4):e0266259. doi: 10.1371/journal.pone.0266259. eCollection 2022.
4. Human-Centric Transformer for Domain Adaptive Action Recognition.
IEEE Trans Pattern Anal Mach Intell. 2025 Feb;47(2):679-696. doi: 10.1109/TPAMI.2024.3429387. Epub 2025 Jan 9.
5. Interaction-Aware Spatio-Temporal Pyramid Attention Networks for Action Classification.
IEEE Trans Pattern Anal Mach Intell. 2022 Oct;44(10):7010-7028. doi: 10.1109/TPAMI.2021.3100277. Epub 2022 Sep 14.
6. Learning Semantic-Aware Local Features for Long Term Visual Localization.
IEEE Trans Image Process. 2022;31:4842-4855. doi: 10.1109/TIP.2022.3187565. Epub 2022 Jul 20.
7. Learning Semantic-Aligned Action Representation.
IEEE Trans Neural Netw Learn Syst. 2018 Aug;29(8):3715-3725. doi: 10.1109/TNNLS.2017.2731775. Epub 2017 Aug 31.
8. Negative Deterministic Information-Based Multiple Instance Learning for Weakly Supervised Object Detection and Segmentation.
IEEE Trans Neural Netw Learn Syst. 2025 Apr;36(4):6188-6202. doi: 10.1109/TNNLS.2024.3395751. Epub 2025 Apr 4.
9. Object-Centric Representation Learning for Video Scene Understanding.
IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):8410-8423. doi: 10.1109/TPAMI.2024.3401409. Epub 2024 Nov 6.
10. Not All Instances Contribute Equally: Instance-Adaptive Class Representation Learning for Few-Shot Visual Recognition.
IEEE Trans Neural Netw Learn Syst. 2024 Apr;35(4):5447-5460. doi: 10.1109/TNNLS.2022.3204684. Epub 2024 Apr 4.