

Generative model-enhanced human motion prediction.

Authors

Anthony Bourached, Ryan-Rhys Griffiths, Robert Gray, Ashwani Jha, Parashkev Nachev

Affiliations

Department of Neurology, University College London, London, UK.

Department of Physics, University of Cambridge, Cambridge, UK.

Publication information

Appl AI Lett. 2022 Apr;3(2):e63. doi: 10.1002/ail2.63. Epub 2022 Mar 23.

Abstract

The task of predicting human motion is complicated by the natural heterogeneity and compositionality of actions, necessitating robustness to distributional shifts as far as out-of-distribution (OoD). Here, we formulate a new OoD benchmark based on the Human3.6M and Carnegie Mellon University (CMU) motion capture datasets, and introduce a hybrid framework for hardening discriminative architectures to OoD failure by augmenting them with a generative model. When applied to current state-of-the-art discriminative models, we show that the proposed approach improves OoD robustness without sacrificing in-distribution performance, and can theoretically facilitate model interpretability. We suggest human motion predictors ought to be constructed with OoD challenges in mind, and provide an extensible general framework for hardening diverse discriminative architectures to extreme distributional shift. The code is available at: https://github.com/bouracha/OoDMotion.
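The abstract describes pairing a discriminative motion predictor with a generative model so that the shared representation is regularized to model the input distribution, which is the stated route to OoD robustness. A minimal NumPy sketch of that idea, where the predictor, the VAE-style branch (collapsed here to a deterministic linear autoencoder with a latent-norm penalty standing in for the ELBO), and all dimensions and weights are illustrative assumptions rather than the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a 48-D pose vector (e.g. 16 joints x 3 coordinates).
POSE_DIM, LATENT_DIM = 48, 8

# Hypothetical linear "discriminative" predictor: next pose from current pose.
W_pred = rng.normal(scale=0.1, size=(POSE_DIM, POSE_DIM))

# Hypothetical linear "generative" branch: encoder/decoder pair.
W_enc = rng.normal(scale=0.1, size=(LATENT_DIM, POSE_DIM))
W_dec = rng.normal(scale=0.1, size=(POSE_DIM, LATENT_DIM))

def hybrid_loss(pose_t, pose_next, lam=0.1):
    """Discriminative prediction loss plus a generative regularizer.

    The generative term (reconstruction error plus a latent-norm
    penalty, a crude stand-in for a VAE ELBO) pushes the model to
    capture the input distribution itself, not just the input-to-output
    mapping; lam trades the two objectives off against each other.
    """
    # Discriminative term: squared error of the next-pose prediction.
    pred = W_pred @ pose_t
    pred_loss = np.mean((pred - pose_next) ** 2)

    # Generative term: how well the current pose is modeled.
    z = W_enc @ pose_t
    recon = W_dec @ z
    gen_loss = np.mean((recon - pose_t) ** 2) + 0.01 * np.mean(z ** 2)

    return pred_loss + lam * gen_loss

pose_t = rng.normal(size=POSE_DIM)
pose_next = pose_t + 0.05 * rng.normal(size=POSE_DIM)  # small motion step
print(hybrid_loss(pose_t, pose_next))
```

Both branches read the same input, so gradients from the generative term would shape the representation used by the predictor; the paper's point is that this hardening can be added to diverse discriminative architectures without sacrificing in-distribution accuracy.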


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/20c4/9159682/f61c379bed32/AIL2-3-e63-g007.jpg
