Li Zongying, Wang Yong, Du Xin, Wang Can, Koch Reinhard, Liu Mengyuan
School of Artificial Intelligence, Chongqing University of Technology, Chongqing, China.
Multimedia Information Processing Laboratory at the Department of Computer Science, Kiel University, Kiel, Germany.
Cyborg Bionic Syst. 2024 Feb 6;5:0090. doi: 10.34133/cbsystems.0090. eCollection 2024.
Extensive research has explored human motion generation, but the generated sequences are influenced by different motion styles. For instance, walking with joy versus walking with sorrow produces visibly distinct character motion. Because capturing stylized motion is difficult, the data available for style research are also limited. To address these problems, we propose ASMNet, an action- and style-conditioned motion generative network. This network ensures that the generated human motion sequences not only comply with the provided action label but also exhibit distinctive stylistic features. To extract motion features from human motion sequences, we design a spatial-temporal extractor. Moreover, we use an adaptive instance normalization layer to inject style into the target motion. Our results are comparable to state-of-the-art approaches and show a substantial advantage in both quantitative and qualitative evaluations. The code is available at https://github.com/ZongYingLi/ASMNet.git.
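The abstract mentions injecting style via an adaptive instance normalization (AdaIN) layer. As a minimal illustration of that mechanism (not the authors' actual implementation), AdaIN normalizes the content features per channel and then re-scales and re-shifts them with the style features' channel statistics; the feature shapes below are assumptions chosen for the sketch.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization (Huang & Belongie, 2017).

    content, style: arrays of shape (channels, time) -- assumed
    per-channel motion feature maps over a temporal axis.
    Returns content features carrying the style's statistics.
    """
    # Per-channel statistics computed over the temporal axis.
    c_mean = content.mean(axis=1, keepdims=True)
    c_std = content.std(axis=1, keepdims=True) + eps
    s_mean = style.mean(axis=1, keepdims=True)
    s_std = style.std(axis=1, keepdims=True) + eps
    # Whiten the content, then inject the style statistics.
    return s_std * (content - c_mean) / c_std + s_mean
```

After this transform, each channel of the output matches the style features' mean and (approximately) standard deviation while retaining the content's temporal structure, which is why AdaIN is a common way to condition a generator on style.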