Hu Qianshuo, Liu Haijun
School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China.
Biomimetics (Basel). 2024 Feb 20;9(3):123. doi: 10.3390/biomimetics9030123.
Skeleton-based human interaction recognition is a challenging task in the field of vision and image processing. Graph Convolutional Networks (GCNs) have achieved remarkable performance by modeling the human skeleton as a graph topology. However, existing GCN-based methods have two problems: (1) existing frameworks cannot effectively exploit the complementary features of different skeletal modalities, as there is no information-transfer channel between the individual modalities; and (2) limited by the structure of the skeleton topology, they struggle to capture and learn information about two-person interactions. To solve these problems, and inspired by the human visual neural network, we propose a multi-modal enhancement transformer (ME-Former) network for skeleton-based human interaction recognition. ME-Former comprises a multi-modal enhancement module (ME) and a context progressive fusion block (CPF). More specifically, each ME module consists of a multi-head cross-modal attention block (MH-CA) and a two-person hypergraph self-attention block (TH-SA), which are responsible, respectively, for enhancing the skeleton features of a specific modality with information from the other skeletal modalities and for modeling the spatial dependencies between joints within that modality. In addition, we propose a two-person skeleton topology and a two-person hypergraph representation; the TH-SA block embeds their structural information into the self-attention mechanism to better learn two-person interactions. The CPF block progressively transforms the features of the different skeletal modalities from low-level features into higher-order global contexts, making the enhancement process more efficient. Extensive experiments on the benchmark NTU-RGB+D 60 and NTU-RGB+D 120 datasets consistently verify the effectiveness of our proposed ME-Former, which outperforms state-of-the-art methods.
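To make the described ME module concrete, the following is a minimal PyTorch sketch of one plausible reading of the abstract: an MH-CA block whose queries come from the target modality and whose keys/values come from another modality, followed by a TH-SA block that adds a two-person hypergraph-derived bias to the self-attention logits. All class names, dimensions, and the form of the hypergraph bias (hyper_adj) are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of an ME module (MH-CA + TH-SA), not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiHeadCrossModalAttention(nn.Module):
    """MH-CA (assumed form): queries from the target modality, keys/values from another."""
    def __init__(self, d_model=64, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)

    def forward(self, x_target, x_other):
        # x_target, x_other: (batch, num_joints, d_model)
        enhanced, _ = self.attn(query=x_target, key=x_other, value=x_other)
        return x_target + enhanced  # residual cross-modal enhancement


class TwoPersonHypergraphSelfAttention(nn.Module):
    """TH-SA (assumed form): self-attention over both persons' joints, with a bias
    derived from a two-person hypergraph added to the attention logits."""
    def __init__(self, d_model=64, num_heads=4):
        super().__init__()
        self.num_heads = num_heads
        self.d_head = d_model // num_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.proj = nn.Linear(d_model, d_model)
        self.bias_scale = nn.Parameter(torch.tensor(1.0))  # learnable bias weight (assumption)

    def forward(self, x, hyper_adj):
        # x: (batch, num_joints, d_model); hyper_adj: (num_joints, num_joints)
        b, n, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(b, n, self.num_heads, self.d_head).transpose(1, 2)
        k = k.view(b, n, self.num_heads, self.d_head).transpose(1, 2)
        v = v.view(b, n, self.num_heads, self.d_head).transpose(1, 2)
        logits = (q @ k.transpose(-2, -1)) / self.d_head ** 0.5
        logits = logits + self.bias_scale * hyper_adj  # embed two-person structure
        out = (F.softmax(logits, dim=-1) @ v).transpose(1, 2).reshape(b, n, -1)
        return x + self.proj(out)


class MEModule(nn.Module):
    """One multi-modal enhancement (ME) module for a specific skeletal modality."""
    def __init__(self, d_model=64, num_heads=4):
        super().__init__()
        self.mh_ca = MultiHeadCrossModalAttention(d_model, num_heads)
        self.th_sa = TwoPersonHypergraphSelfAttention(d_model, num_heads)

    def forward(self, x_target, x_other, hyper_adj):
        x = self.mh_ca(x_target, x_other)   # enhance with the other modality
        return self.th_sa(x, hyper_adj)     # model spatial dependencies within the modality


if __name__ == "__main__":
    # Two persons x 25 NTU joints = 50 nodes; a real hyper_adj would be precomputed
    # from the hypergraph incidence matrix, here it is random for a shape check.
    joints, dim = 50, 64
    module = MEModule(d_model=dim)
    x_joint = torch.randn(2, joints, dim)   # e.g. joint-modality features
    x_bone = torch.randn(2, joints, dim)    # e.g. bone-modality features
    adj = torch.rand(joints, joints)
    print(module(x_joint, x_bone, adj).shape)  # torch.Size([2, 50, 64])
```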