
StyleVR: Stylizing Character Animations With Normalizing Flows.

Authors

Ji Bin, Pan Ye, Yan Yichao, Chen Ruizhao, Yang Xiaokang

Publication

IEEE Trans Vis Comput Graph. 2024 Jul;30(7):4183-4196. doi: 10.1109/TVCG.2023.3259183. Epub 2024 Jun 27.

Abstract

The significance of artistry in creating animated virtual characters is widely acknowledged, and motion style is a crucial element in this process. There has been long-standing interest in stylizing character animations with style transfer methods. However, such models can only handle short-term motions and yield deterministic outputs. To address this issue, we propose a generative model based on normalizing flows for stylizing long and aperiodic animations in VR scenes. Our approach breaks the task down into two sub-problems: motion style transfer and stylized motion generation, both formulated as instances of conditional normalizing flows with a multi-class latent space. Specifically, we encode high-frequency style features into the latent space for varied results and control the generation process with style-content labels for disentangled edits of style and content. We have developed a prototype, StyleVR, in Unity, which allows casual users to apply our method in VR. Through qualitative and quantitative comparisons, we demonstrate that our system outperforms other methods in terms of style transfer as well as stochastic stylized motion generation.
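The abstract formulates both sub-problems as conditional normalizing flows. The core mechanism behind such flows can be sketched with a conditional affine coupling layer: an exactly invertible transform whose scale and shift depend on a conditioning label. This is a minimal illustration only — the dimensions, weights, and function names below are hypothetical and do not reproduce the paper's architecture.

```python
import numpy as np

# Minimal sketch of a conditional affine coupling layer, the building block
# of conditional normalizing flows. All shapes and weights are illustrative.

rng = np.random.default_rng(0)
D, C = 4, 3                       # motion-feature dim, number of style classes
W_s = rng.normal(scale=0.1, size=(D // 2 + C, D // 2))  # scale-net weights
W_t = rng.normal(scale=0.1, size=(D // 2 + C, D // 2))  # shift-net weights

def coupling_forward(x, label):
    """x: (D,) motion feature; label: (C,) one-hot style-content label."""
    x1, x2 = x[: D // 2], x[D // 2:]
    h = np.concatenate([x1, label])           # condition on the style label
    s, t = h @ W_s, h @ W_t                   # scale and shift from (x1, label)
    y2 = x2 * np.exp(s) + t                   # affine transform of x2
    return np.concatenate([x1, y2]), s.sum()  # output and log|det Jacobian|

def coupling_inverse(y, label):
    """Exact inverse: recovers x from y given the same label."""
    y1, y2 = y[: D // 2], y[D // 2:]
    h = np.concatenate([y1, label])
    s, t = h @ W_s, h @ W_t
    x2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, x2])

x = rng.normal(size=D)
label = np.eye(C)[1]                          # pick one style class
y, logdet = coupling_forward(x, label)
x_rec = coupling_inverse(y, label)
assert np.allclose(x, x_rec)                  # the flow is exactly invertible
```

Invertibility is what lets a flow both evaluate exact likelihoods during training and generate varied samples by drawing from the latent space — changing `label` while keeping the latent code fixed is the kind of disentangled style/content edit the abstract describes.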

