Chai Zenghao, Tang Chen, Wong Yongkang, Kankanhalli Mohan
IEEE Trans Vis Comput Graph. 2025 Apr 11;PP. doi: 10.1109/TVCG.2025.3559988.
The creation of 4D avatars (i.e., animated 3D avatars) from text descriptions typically uses text-to-image (T2I) diffusion models to synthesize 3D avatars in the canonical space and subsequently animate them with target motions. However, such an optimization-by-animation paradigm has several drawbacks. (1) For pose-agnostic optimization, the images rendered in the canonical pose for naïve Score Distillation Sampling (SDS) exhibit a domain gap and cannot preserve view consistency using only T2I priors. (2) For post hoc animation, simply applying the source motions to target 3D avatars leads to translation artifacts and misalignment. To address these issues, we propose Skeleton-aware Text-based 4D Avatar generation with in-network motion Retargeting (STAR). STAR accounts for the geometry and skeleton differences between the template mesh and the target avatar, and corrects the mismatched source motion using pretrained motion retargeting techniques. With the informatively retargeted and occlusion-aware skeleton, we embrace skeleton-conditioned T2I and text-to-video (T2V) priors and propose a hybrid SDS module that coherently provides multiview- and frame-consistent supervision signals. Hence, STAR can progressively optimize the geometry, texture, and motion in an end-to-end manner. Quantitative and qualitative experiments demonstrate that STAR synthesizes high-quality 4D avatars with vivid animations that align well with the text description. Additional ablation studies show the contribution of each component in STAR. The source code and demos are available at: https://star-avatar.github.io.
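For context, the "naïve SDS" the abstract refers to is the standard score-distillation objective from the text-to-3D literature; the sketch below is that generic formulation, not STAR's hybrid skeleton-conditioned variant, and the symbols are illustrative:

```latex
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}
  = \mathbb{E}_{t,\epsilon}\!\left[
      w(t)\,\bigl(\hat{\epsilon}_\phi(\mathbf{x}_t;\, y, t) - \epsilon\bigr)\,
      \frac{\partial \mathbf{x}}{\partial \theta}
    \right]
```

Here \(\mathbf{x} = g(\theta)\) is an image rendered from the avatar parameters \(\theta\), \(\mathbf{x}_t\) is its noised version at diffusion timestep \(t\), \(\hat{\epsilon}_\phi\) is the frozen diffusion model's noise prediction conditioned on the text prompt \(y\) (in STAR, additionally on the retargeted skeleton), \(\epsilon\) is the injected Gaussian noise, and \(w(t)\) is a timestep weighting. The cited domain gap arises because canonical-pose renders of an in-progress avatar are poorly covered by the T2I model's training distribution.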