ASMNet: Action and Style-Conditioned Motion Generative Network for 3D Human Motion Generation

Authors

Li Zongying, Wang Yong, Du Xin, Wang Can, Koch Reinhard, Liu Mengyuan

Affiliations

School of Artificial Intelligence, Chongqing University of Technology, Chongqing, China.

Multimedia Information Processing Laboratory at the Department of Computer Science, Kiel University, Kiel, Germany.

Publication

Cyborg Bionic Syst. 2024 Feb 6;5:0090. doi: 10.34133/cbsystems.0090. eCollection 2024.

DOI: 10.34133/cbsystems.0090
PMID: 38348153
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10860709/
Abstract

Extensive research has explored human motion generation, but the generated sequences are influenced by different motion styles. For instance, the act of walking with joy and sorrow evokes distinct effects on a character's motion. Due to the difficulties in motion capture with styles, the available data for style research are also limited. To address the problems, we propose ASMNet, an action and style-conditioned motion generative network. This network ensures that the generated human motion sequences not only comply with the provided action label but also exhibit distinctive stylistic features. To extract motion features from human motion sequences, we design a spatial temporal extractor. Moreover, we use the adaptive instance normalization layer to inject style into the target motion. Our results are comparable to state-of-the-art approaches and display a substantial advantage in both quantitative and qualitative evaluations. The code is available at https://github.com/ZongYingLi/ASMNet.git.
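The style-injection step described in the abstract can be illustrated with a minimal sketch. The following is a hypothetical NumPy rendition of a generic adaptive instance normalization (AdaIN) layer, not the paper's actual implementation; the feature layout (channels × frames), the function name, and the epsilon value are assumptions:

```python
import numpy as np

def adaptive_instance_norm(content, style, eps=1e-5):
    """Replace the per-channel statistics of `content` with those of `style`.

    content, style: arrays of shape (channels, frames) — motion features.
    Returns an array with the same shape as `content` whose per-channel
    mean and standard deviation match those of `style`.
    """
    # Per-channel statistics over the temporal axis.
    c_mean = content.mean(axis=1, keepdims=True)
    c_std = content.std(axis=1, keepdims=True)
    s_mean = style.mean(axis=1, keepdims=True)
    s_std = style.std(axis=1, keepdims=True)

    # Normalize the content features, then re-scale and shift
    # with the style statistics.
    normalized = (content - c_mean) / (c_std + eps)
    return normalized * s_std + s_mean
```

In this reading, the "style" input would come from a reference motion clip, so the generated sequence keeps the action content while adopting the reference's statistical style signature.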


Figures 1-6 (PMC full text):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bb28/10860709/e29deb4daf0b/cbsystems.0090.fig.001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bb28/10860709/5781cd79c0bc/cbsystems.0090.fig.002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bb28/10860709/d203d2f238c1/cbsystems.0090.fig.003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bb28/10860709/a607e0d6f409/cbsystems.0090.fig.004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bb28/10860709/333402d5c1e9/cbsystems.0090.fig.005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bb28/10860709/cc7cc459b5aa/cbsystems.0090.fig.006.jpg

Similar Articles

1. ASMNet: Action and Style-Conditioned Motion Generative Network for 3D Human Motion Generation.
Cyborg Bionic Syst. 2024 Feb 6;5:0090. doi: 10.34133/cbsystems.0090. eCollection 2024.
2. FineStyle: Semantic-Aware Fine-Grained Motion Style Transfer with Dual Interactive-Flow Fusion.
IEEE Trans Vis Comput Graph. 2023 Nov;29(11):4361-4371. doi: 10.1109/TVCG.2023.3320216. Epub 2023 Nov 2.
3. TalkingStyle: Personalized Speech-Driven 3D Facial Animation with Style Preservation.
IEEE Trans Vis Comput Graph. 2024 Jun 11;PP. doi: 10.1109/TVCG.2024.3409568.
4. Real-time stylistic prediction for whole-body human motions.
Neural Netw. 2012 Jan;25(1):191-9. doi: 10.1016/j.neunet.2011.08.008. Epub 2011 Sep 8.
5. Self-Supervised Motion Perception for Spatiotemporal Representation Learning.
IEEE Trans Neural Netw Learn Syst. 2023 Dec;34(12):9832-9846. doi: 10.1109/TNNLS.2022.3160860. Epub 2023 Nov 30.
6. Progressively Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation.
Sensors (Basel). 2023 Aug 1;23(15):6858. doi: 10.3390/s23156858.
7. Domain Generalization with Correlated Style Uncertainty.
IEEE Winter Conf Appl Comput Vis. 2024 Jan;2024:1989-1998. doi: 10.1109/wacv57701.2024.00200. Epub 2024 Apr 9.
8. A Progressive Fusion Generative Adversarial Network for Realistic and Consistent Video Super-Resolution.
IEEE Trans Pattern Anal Mach Intell. 2022 May;44(5):2264-2280. doi: 10.1109/TPAMI.2020.3042298. Epub 2022 Apr 1.
9. IAS-NET: Joint intraclassly adaptive GAN and segmentation network for unsupervised cross-domain in neonatal brain MRI segmentation.
Med Phys. 2021 Nov;48(11):6962-6975. doi: 10.1002/mp.15212. Epub 2021 Sep 25.
10. Normalization of HE-stained histological images using cycle consistent generative adversarial networks.
Diagn Pathol. 2021 Aug 6;16(1):71. doi: 10.1186/s13000-021-01126-y.

Cited By

1. Application of style transfer algorithm in the integration of traditional garden and modern design elements.
PLoS One. 2024 Dec 5;19(12):e0313909. doi: 10.1371/journal.pone.0313909. eCollection 2024.

References

1. MotionDiffuse: Text-Driven Human Motion Generation With Diffusion Model.
IEEE Trans Pattern Anal Mach Intell. 2024 Jun;46(6):4115-4128. doi: 10.1109/TPAMI.2024.3355414. Epub 2024 May 7.
2. Dual-Hand Motion Capture by Using Biological Inspiration for Bionic Bimanual Robot Teleoperation.
Cyborg Bionic Syst. 2023 Sep 13;4:0052. doi: 10.34133/cbsystems.0052. eCollection 2023.
3. DTCM: Joint Optimization of Dark Enhancement and Action Recognition in Videos.
IEEE Trans Image Process. 2023;32:3507-3520. doi: 10.1109/TIP.2023.3286254. Epub 2023 Jun 23.
4. Design and Control for WLR-3P: A Hydraulic Wheel-Legged Robot.
Cyborg Bionic Syst. 2023 Jun 8;4:0025. doi: 10.34133/cbsystems.0025. eCollection 2023.
5. Generalized Pose Decoupled Network for Unsupervised 3D Skeleton Sequence-Based Action Representation Learning.
Cyborg Bionic Syst. 2022;2022:0002. doi: 10.34133/cbsystems.0002. Epub 2022 Dec 30.
6. Human Somatosensory Processing and Artificial Somatosensation.
Cyborg Bionic Syst. 2021 Jul 2;2021:9843259. doi: 10.34133/2021/9843259. eCollection 2021.
7. Rhythm is a Dancer: Music-Driven Motion Synthesis With Global Structure.
IEEE Trans Vis Comput Graph. 2023 Aug;29(8):3519-3534. doi: 10.1109/TVCG.2022.3163676. Epub 2023 Jun 29.
8. Fast Neural Style Transfer for Motion Data.
IEEE Comput Graph Appl. 2017;37(4):42-49. doi: 10.1109/MCG.2017.3271464.