

FGAHOI: Fine-Grained Anchors for Human-Object Interaction Detection.

Authors

Ma Shuailei, Wang Yuefeng, Wang Shanze, Wei Ying

Publication

IEEE Trans Pattern Anal Mach Intell. 2024 Apr;46(4):2415-2429. doi: 10.1109/TPAMI.2023.3331738. Epub 2024 Mar 6.

DOI: 10.1109/TPAMI.2023.3331738
PMID: 37948147
Abstract

Human-Object Interaction (HOI) detection, an important problem in computer vision, requires locating each human-object pair and identifying the interactive relationship between them. An HOI instance spans more in space, scale, and task than an individual object instance, making its detection more susceptible to noisy backgrounds. To alleviate the disturbance of noisy backgrounds on HOI detection, it is necessary to consider the input image information when generating fine-grained anchors, which are then leveraged to guide the detection of HOI instances. This raises two challenges: (i) how to extract pivotal features from images with complex background information remains an open question, and (ii) how to semantically align the extracted features with query embeddings is also a difficult issue. In this paper, a novel end-to-end transformer-based framework (FGAHOI) is proposed to alleviate the above problems. FGAHOI comprises three dedicated components: multi-scale sampling (MSS), hierarchical spatial-aware merging (HSAM), and a task-aware merging mechanism (TAM). MSS extracts features of humans, objects, and interaction areas from noisy backgrounds for HOI instances of various scales. HSAM and TAM then semantically align and merge the extracted features and query embeddings from the hierarchical spatial and task perspectives, in turn. Meanwhile, a novel stage-wise training strategy is designed to reduce the training pressure caused by the overly complex task performed by FGAHOI. In addition, we propose two ways to measure the difficulty of HOI detection, along with a novel dataset, HOI-SDC, targeting two challenges of HOI instance detection: unevenly distributed areas within human-object pairs and long-distance visual modeling of human-object pairs. Experiments are conducted on three benchmarks: HICO-DET, HOI-SDC, and V-COCO. Our model outperforms state-of-the-art HOI detection methods, and extensive ablations reveal the merits of the proposed contributions.
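The abstract describes a three-stage data flow: MSS samples features at multiple scales, HSAM merges them across spatial hierarchy levels, and TAM aligns the result with query embeddings. The following toy sketch illustrates only the shape of that pipeline; the pooling, merging, and scoring rules here are invented for illustration and are not the paper's actual modules, which operate on transformer feature maps and learned query embeddings.

```python
# Toy sketch of the MSS -> HSAM -> TAM data flow from the abstract.
# All shapes, sampling rules, and merge rules are illustrative assumptions.

def multi_scale_sampling(image, scales=(1, 2, 4)):
    """MSS (toy): average-pool a 1-D 'image' at several scales."""
    feats = []
    for s in scales:
        pooled = [sum(image[i:i + s]) / s
                  for i in range(0, len(image) - s + 1, s)]
        feats.append(pooled)
    return feats

def hierarchical_spatial_aware_merge(feats):
    """HSAM (toy): merge coarser levels into the finest by upsampling + add."""
    merged = feats[0]
    for level in feats[1:]:
        rep = len(merged) // len(level)          # repeat to match resolution
        up = [v for v in level for _ in range(rep)]
        merged = [a + b for a, b in zip(merged, up)]
    return merged

def task_aware_merge(merged, queries):
    """TAM (toy): score each query against the merged features (dot product)."""
    return [sum(f * q for f, q in zip(merged, query)) for query in queries]

# One fabricated 8-pixel "image" and two fabricated HOI queries.
image = [0.1, 0.4, 0.3, 0.8, 0.5, 0.2, 0.9, 0.6]
queries = [[1.0] * 8, [0.5] * 8]

scores = task_aware_merge(
    hierarchical_spatial_aware_merge(multi_scale_sampling(image)), queries)
# scores holds one value per HOI query
```

The point of the sketch is the ordering: spatial merging happens before the query-level (task) merging, matching the abstract's statement that HSAM and TAM align features and query embeddings "in turn".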


Similar Articles

1. FGAHOI: Fine-Grained Anchors for Human-Object Interaction Detection.
   IEEE Trans Pattern Anal Mach Intell. 2024 Apr;46(4):2415-2429. doi: 10.1109/TPAMI.2023.3331738. Epub 2024 Mar 6.
2. A Novel Part Refinement Tandem Transformer for Human-Object Interaction Detection.
   Sensors (Basel). 2024 Jul 1;24(13):4278. doi: 10.3390/s24134278.
3. Transferable Interactiveness Knowledge for Human-Object Interaction Detection.
   IEEE Trans Pattern Anal Mach Intell. 2022 Jul;44(7):3870-3882. doi: 10.1109/TPAMI.2021.3054048. Epub 2022 Jun 3.
4. Learning Human-Object Interaction via Interactive Semantic Reasoning.
   IEEE Trans Image Process. 2021;30:9294-9305. doi: 10.1109/TIP.2021.3125258. Epub 2021 Nov 12.
5. Cascaded Parsing of Human-Object Interaction Recognition.
   IEEE Trans Pattern Anal Mach Intell. 2022 Jun;44(6):2827-2840. doi: 10.1109/TPAMI.2021.3049156. Epub 2022 May 5.
6. Point-Based Learnable Query Generator for Human-Object Interaction Detection.
   IEEE Trans Image Process. 2023;32:6469-6484. doi: 10.1109/TIP.2023.3334100. Epub 2023 Dec 1.
7. Human-Object Interaction Detection via Global Context and Pairwise-Level Fusion Features Integration.
   Neural Netw. 2024 Feb;170:242-253. doi: 10.1016/j.neunet.2023.11.002. Epub 2023 Nov 13.
8. Zero-Shot Human-Object Interaction Detection via Similarity Propagation.
   IEEE Trans Neural Netw Learn Syst. 2024 Dec;35(12):17805-17816. doi: 10.1109/TNNLS.2023.3309104. Epub 2024 Dec 2.
9. ERNet: An Efficient and Reliable Human-Object Interaction Detection Network.
   IEEE Trans Image Process. 2023;32:964-979. doi: 10.1109/TIP.2022.3231528.
10. Toward a Unified Transformer-Based Framework for Scene Graph Generation and Human-Object Interaction Detection.
    IEEE Trans Image Process. 2023;32:6274-6288. doi: 10.1109/TIP.2023.3330304. Epub 2023 Nov 20.