Scaling Human-Object Interaction Recognition in the Video through Zero-Shot Learning.

Authors

Maraghi Vali Ollah, Faez Karim

Affiliations

Department of Electrical Engineering, Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran.

Publication

Comput Intell Neurosci. 2021 Jun 9;2021:9922697. doi: 10.1155/2021/9922697. eCollection 2021.

DOI: 10.1155/2021/9922697
PMID: 34211548
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8211498/
Abstract

Recognition of human activities is an essential field in computer vision, and most human activities involve interactions between humans and objects. In recent years, many works on human-object interaction (HOI) recognition have achieved acceptable results, but they are fully supervised and require labeled training data for every HOI. Because the space of possible human-object interactions is enormous, listing all categories and providing training data for each is costly and impractical. To address this problem, we propose an approach for scaling human-object interaction recognition in video data through zero-shot learning. Our method recognizes a verb and an object from the video and composes them into an HOI class. Recognizing verbs and objects instead of whole HOIs allows new combinations of verbs and objects to be identified, so an HOI class never seen by the recognizer can still be recognized. We introduce a neural network architecture that can understand and represent video data. The proposed system learns verbs and objects from the available training data during the training phase and identifies verb-object pairs in a video at test time, so it can recognize HOI classes formed from different combinations of verbs and objects. We also propose using lateral information, derived from word-embedding techniques, to combine verbs and objects into valid verb-object pairs; this helps prevent the detection of rare and probably incorrect HOIs. Furthermore, we propose a new feature aggregation method that aggregates high-level features extracted from video frames before feeding them to the classifier, and we show that this aggregation is more effective for actions composed of multiple subactions. We evaluated our system on the recently introduced and challenging Charades dataset, which contains many HOI categories in videos. We show that the proposed system detects unseen HOI classes in addition to achieving acceptable recognition of seen ones; therefore, the number of classes the system can identify is greater than the number of classes used for training.
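The core idea in the abstract — score verbs and objects separately, then compose them into an HOI while using word-embedding similarity as "lateral information" to suppress implausible pairs — can be illustrated with a minimal sketch. The verb/object vocabularies, toy embedding vectors, similarity threshold, and logits below are all hypothetical stand-ins (real systems would use classifier outputs and pretrained embeddings such as word2vec or GloVe), not the paper's actual implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

VERBS = ["hold", "drink_from", "open"]
OBJECTS = ["cup", "door", "book"]

# Toy 3-d word embeddings (hypothetical values) standing in for
# pretrained vectors; similar directions = semantically related words.
EMB = {
    "hold":       np.array([0.9, 0.1, 0.0]),
    "drink_from": np.array([0.8, 0.6, 0.0]),
    "open":       np.array([0.0, 0.2, 0.9]),
    "cup":        np.array([0.7, 0.7, 0.1]),
    "door":       np.array([0.1, 0.1, 0.9]),
    "book":       np.array([0.6, 0.0, 0.4]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def compose_hoi(verb_logits, obj_logits, sim_threshold=0.5):
    """Score every verb-object pair as p(verb) * p(object), but drop
    pairs whose word embeddings are too dissimilar to form a
    plausible interaction (the lateral-information filter)."""
    pv, po = softmax(verb_logits), softmax(obj_logits)
    scores = {}
    for i, v in enumerate(VERBS):
        for j, o in enumerate(OBJECTS):
            if cosine(EMB[v], EMB[o]) < sim_threshold:
                continue  # e.g. "drink_from door" is filtered out
            scores[(v, o)] = pv[i] * po[j]
    best = max(scores, key=scores.get)
    return best, scores

# Hypothetical classifier logits favoring "drink_from" and "cup":
best, scores = compose_hoi(np.array([0.2, 2.0, 0.1]),
                           np.array([2.5, 0.1, 0.3]))
```

Because verbs and objects are scored independently, any of the surviving verb-object pairs can be emitted, including combinations never seen together during training — which is what lets the number of recognizable HOI classes exceed the number of trained classes.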


Figure images (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cdae/8211498/6bcfddc672b6/CIN2021-9922697.001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cdae/8211498/4c6bc47f54e7/CIN2021-9922697.002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cdae/8211498/7664867bb331/CIN2021-9922697.003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cdae/8211498/2352e8fe2504/CIN2021-9922697.004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cdae/8211498/490b294069d4/CIN2021-9922697.005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cdae/8211498/e79541f435ba/CIN2021-9922697.006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cdae/8211498/684a26de2133/CIN2021-9922697.007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cdae/8211498/0fbe3c77c836/CIN2021-9922697.008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cdae/8211498/de497d332a75/CIN2021-9922697.009.jpg

Similar Articles

1
Scaling Human-Object Interaction Recognition in the Video through Zero-Shot Learning.
Comput Intell Neurosci. 2021 Jun 9;2021:9922697. doi: 10.1155/2021/9922697. eCollection 2021.
2
Effects of Motion-Relevant Knowledge From Unlabeled Video to Human-Object Interaction Detection.
IEEE Trans Neural Netw Learn Syst. 2023 Sep;34(9):5760-5773. doi: 10.1109/TNNLS.2021.3131154. Epub 2023 Sep 1.
3
Few-shot human-object interaction video recognition with transformers.
Neural Netw. 2023 Jun;163:1-9. doi: 10.1016/j.neunet.2023.01.019. Epub 2023 Feb 10.
4
Few-Shot Human-Object Interaction Recognition With Semantic-Guided Attentive Prototypes Network.
IEEE Trans Image Process. 2021;30:1648-1661. doi: 10.1109/TIP.2020.3046861. Epub 2021 Jan 11.
5
Multi-label zero-shot human action recognition via joint latent ranking embedding.
Neural Netw. 2020 Feb;122:1-23. doi: 10.1016/j.neunet.2019.09.029. Epub 2019 Oct 21.
6
Multi-label zero-shot learning with graph convolutional networks.
Neural Netw. 2020 Dec;132:333-341. doi: 10.1016/j.neunet.2020.09.010. Epub 2020 Sep 21.
7
Zero-Shot Human-Object Interaction Detection via Similarity Propagation.
IEEE Trans Neural Netw Learn Syst. 2024 Dec;35(12):17805-17816. doi: 10.1109/TNNLS.2023.3309104. Epub 2024 Dec 2.
8
Learning Human-Object Interaction via Interactive Semantic Reasoning.
IEEE Trans Image Process. 2021;30:9294-9305. doi: 10.1109/TIP.2021.3125258. Epub 2021 Nov 12.
9
Class-Incremental Learning on Video-Based Action Recognition by Distillation of Various Knowledge.
Comput Intell Neurosci. 2022 Mar 24;2022:4879942. doi: 10.1155/2022/4879942. eCollection 2022.
10
Semantic-Aware Dynamic Generation Networks for Few-Shot Human-Object Interaction Recognition.
IEEE Trans Neural Netw Learn Syst. 2024 Sep;35(9):12564-12575. doi: 10.1109/TNNLS.2023.3263660. Epub 2024 Sep 3.

Cited By

1
Class-Incremental Learning on Video-Based Action Recognition by Distillation of Various Knowledge.
Comput Intell Neurosci. 2022 Mar 24;2022:4879942. doi: 10.1155/2022/4879942. eCollection 2022.
2
Discriminative Codebook Hashing for Supervised Video Retrieval.
Comput Intell Neurosci. 2021 Aug 25;2021:5845094. doi: 10.1155/2021/5845094. eCollection 2021.
