
Similar Articles

1. Discriminative latent models for recognizing contextual group activities.
   IEEE Trans Pattern Anal Mach Intell. 2012 Aug;34(8):1549-62. doi: 10.1109/TPAMI.2011.228.
2. Learning person-person interaction in collective activity recognition.
   IEEE Trans Image Process. 2015 Jun;24(6):1905-18. doi: 10.1109/TIP.2015.2409564. Epub 2015 Mar 6.
3. Optimizing nondecomposable loss functions in structured prediction.
   IEEE Trans Pattern Anal Mach Intell. 2013 Apr;35(4):911-24. doi: 10.1109/TPAMI.2012.168.
4. Understanding Collective Activities of People from Videos.
   IEEE Trans Pattern Anal Mach Intell. 2014 Jun;36(6):1242-57. doi: 10.1109/TPAMI.2013.220.
5. An interaction-embedded HMM framework for human behavior understanding: with nursing environments as examples.
   IEEE Trans Inf Technol Biomed. 2010 Sep;14(5):1236-46. doi: 10.1109/TITB.2010.2052061. Epub 2010 Jun 7.
6. Animated pose templates for modeling and detecting human actions.
   IEEE Trans Pattern Anal Mach Intell. 2014 Mar;36(3):436-52. doi: 10.1109/TPAMI.2013.144.
7. Learning sparse representations for human action recognition.
   IEEE Trans Pattern Anal Mach Intell. 2012 Aug;34(8):1576-88. doi: 10.1109/TPAMI.2011.253.
8. Close Human Interaction Recognition Using Patch-Aware Models.
   IEEE Trans Image Process. 2016 Jan;25(1):167-78. doi: 10.1109/TIP.2015.2498410. Epub 2015 Nov 5.
9. Human Interaction Understanding With Joint Graph Decomposition and Node Labeling.
   IEEE Trans Image Process. 2021;30:6240-6254. doi: 10.1109/TIP.2021.3093383. Epub 2021 Jul 12.
10. Explicit modeling of human-object interactions in realistic videos.
    IEEE Trans Pattern Anal Mach Intell. 2013 Apr;35(4):835-48. doi: 10.1109/TPAMI.2012.175.

Cited By

1. Vision Sensor for Automatic Recognition of Human Activities via Hybrid Features and Multi-Class Support Vector Machine.
   Sensors (Basel). 2025 Jan 1;25(1):200. doi: 10.3390/s25010200.
2. HAtt-Flow: Hierarchical Attention-Flow Mechanism for Group-Activity Scene Graph Generation in Videos.
   Sensors (Basel). 2024 May 24;24(11):3372. doi: 10.3390/s24113372.
3. A Novel Deep Neural Network Method for HAR-Based Team Training Using Body-Worn Inertial Sensors.
   Sensors (Basel). 2022 Nov 4;22(21):8507. doi: 10.3390/s22218507.
4. Multi-Perspective Representation to Part-Based Graph for Group Activity Recognition.
   Sensors (Basel). 2022 Jul 24;22(15):5521. doi: 10.3390/s22155521.
5. 3DMesh-GAR: 3D Human Body Mesh-Based Method for Group Activity Recognition.
   Sensors (Basel). 2022 Feb 14;22(4):1464. doi: 10.3390/s22041464.
6. A Novel Fiber Optic Based Surveillance System for Prevention of Pipeline Integrity Threats.
   Sensors (Basel). 2017 Feb 12;17(2):355. doi: 10.3390/s17020355.

References

1. Observing human-object interactions: using spatial and functional compatibility for recognition.
   IEEE Trans Pattern Anal Mach Intell. 2009 Oct;31(10):1775-89. doi: 10.1109/TPAMI.2009.83.
2. Actions as space-time shapes.
   IEEE Trans Pattern Anal Mach Intell. 2007 Dec;29(12):2247-53. doi: 10.1109/TPAMI.2007.70711.
3. Hidden conditional random fields.
   IEEE Trans Pattern Anal Mach Intell. 2007 Oct;29(10):1848-53. doi: 10.1109/TPAMI.2007.1124.
4. Scene perception: detecting and judging objects undergoing relational violations.
   Cogn Psychol. 1982 Apr;14(2):143-77. doi: 10.1016/0010-0285(82)90007-x.

Discriminative latent models for recognizing contextual group activities.

Affiliation

School of Computing Science, Simon Fraser University, Burnaby, BC, Canada.

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2012 Aug;34(8):1549-62. doi: 10.1109/TPAMI.2011.228.

DOI: 10.1109/TPAMI.2011.228
PMID: 22144516
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC3471989/
Abstract

In this paper, we go beyond recognizing the actions of individuals and focus on group activities. This is motivated from the observation that human actions are rarely performed in isolation; the contextual information of what other people in the scene are doing provides a useful cue for understanding high-level activities. We propose a novel framework for recognizing group activities which jointly captures the group activity, the individual person actions, and the interactions among them. Two types of contextual information, group-person interaction and person-person interaction, are explored in a latent variable framework. In particular, we propose three different approaches to model the person-person interaction. One approach is to explore the structures of person-person interaction. Differently from most of the previous latent structured models, which assume a predefined structure for the hidden layer, e.g., a tree structure, we treat the structure of the hidden layer as a latent variable and implicitly infer it during learning and inference. The second approach explores person-person interaction in the feature level. We introduce a new feature representation called the action context (AC) descriptor. The AC descriptor encodes information about not only the action of an individual person in the video, but also the behavior of other people nearby. The third approach combines the above two. Our experimental results demonstrate the benefit of using contextual information for disambiguating group activities.
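The action context (AC) descriptor described in the abstract lends itself to a compact sketch. The paper's actual formulation pools per-person action classifier scores over spatio-temporal context regions; the function below is a simplified, hypothetical version (the distance `radius` cutoff and max-pooling choice are illustrative assumptions, not the paper's exact design) that concatenates a person's own action scores with a pooled summary of nearby people's scores.

```python
import numpy as np

def action_context_descriptor(action_scores, positions, person_idx, radius=2.0):
    """Hypothetical AC descriptor sketch: a person's own per-action
    classifier scores concatenated with a max-pooled summary of the
    scores of people within `radius` of them."""
    own = action_scores[person_idx]
    # distance from this person to every other person in the scene
    dists = np.linalg.norm(positions - positions[person_idx], axis=1)
    # neighbours within `radius`, excluding the person themself
    mask = (dists > 0) & (dists <= radius)
    if mask.any():
        # pool neighbour scores per action class (max-pooling assumed here)
        context = action_scores[mask].max(axis=0)
    else:
        context = np.zeros_like(own)
    return np.concatenate([own, context])

# Toy example: 3 people, 2 action classes; person 2 is too far to count.
scores = np.array([[0.9, 0.1],   # person 0: likely "walking"
                   [0.2, 0.8],   # person 1: likely "talking"
                   [0.1, 0.7]])  # person 2: far away
pos = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
ac = action_context_descriptor(scores, pos, person_idx=0, radius=2.0)
# ac -> [0.9, 0.1, 0.2, 0.8]: own scores, then pooled neighbour scores
```

The doubled length of the descriptor is the point: a downstream classifier sees not only what one person is doing but what the surrounding people are doing, which is the contextual cue the paper argues disambiguates group activities.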
