Shen Yuefan, Zhang Changgeng, Fu Hongbo, Zhou Kun, Zheng Youyi
IEEE Trans Vis Comput Graph. 2021 Jul;27(7):3250-3263. doi: 10.1109/TVCG.2020.2968433. Epub 2021 May 27.
We present DeepSketchHair, a deep-learning-based tool for modeling 3D hair from 2D sketches. Given a 3D bust model as reference, our sketching system takes as input a user-drawn sketch (consisting of a hair contour and a few strokes indicating the hair growing direction within the hair region) and automatically generates a 3D hair model matching the input sketch. The key enablers of our system are three carefully designed neural networks: S2ONet, which converts an input sketch to a dense 2D hair orientation field; O2VNet, which maps the 2D orientation field to a 3D vector field; and V2VNet, which updates the 3D vector field with respect to new sketches, enabling hair editing with additional sketches in new views. All three networks are trained with synthetic data generated from a 3D hairstyle database. We demonstrate the effectiveness and expressiveness of our tool on a variety of hairstyles and also compare our method with prior art.
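The three-stage pipeline described in the abstract can be sketched as follows. This is a purely illustrative mock-up: the stage functions below are placeholders with matching tensor shapes, not the paper's trained networks, and all names, resolutions, and operations here are assumptions for illustration only.

```python
import numpy as np

H = W = D = 32  # illustrative grid resolutions, not the paper's

def s2onet(hair_mask, stroke_direction):
    """Placeholder for S2ONet: sketch -> dense 2D orientation field (H, W, 2).

    Fills the hair region with unit vectors derived from a single stroke
    direction; the real network is a learned image-to-image model.
    """
    field = np.zeros((H, W, 2))
    field[hair_mask] = stroke_direction
    norms = np.linalg.norm(field, axis=-1, keepdims=True)
    return np.where(norms > 0, field / np.maximum(norms, 1e-8), field)

def o2vnet(orientation_2d):
    """Placeholder for O2VNet: 2D orientation field -> 3D vector field
    (D, H, W, 3), here lifted by appending a zero depth component and
    tiling along depth."""
    lifted = np.concatenate([orientation_2d, np.zeros((H, W, 1))], axis=-1)
    return np.broadcast_to(lifted, (D, H, W, 3)).copy()

def v2vnet(vol_field, new_view_orientation_2d):
    """Placeholder for V2VNet: update the 3D field given a new-view sketch
    (a naive average here; the paper uses a learned volumetric network)."""
    return 0.5 * (vol_field + o2vnet(new_view_orientation_2d))

# Toy run: a square hair region with downward-pointing strokes.
mask = np.zeros((H, W), dtype=bool)
mask[8:24, 8:24] = True
o2d = s2onet(mask, np.array([0.0, 1.0]))
vol = o2vnet(o2d)
vol_edited = v2vnet(vol, o2d)
print(o2d.shape, vol.shape, vol_edited.shape)
```

The point of the sketch is only the data flow: a 2D sketch becomes a 2D orientation field, which is lifted to a 3D vector field, which further edits refine; strand synthesis from the final field is outside this mock-up.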