Chen Hong, Zhu Song-Chun
Department of Statistics and Computer Science, University of California, 8125 Math Science Building, Box 951554, Los Angeles, CA 90095, USA.
IEEE Trans Pattern Anal Mach Intell. 2006 Jul;28(7):1025-40. doi: 10.1109/TPAMI.2006.131.
In this paper, we present a generative sketch model for human hair analysis and synthesis. We treat hair images as 2D piecewise smooth vector (flow) fields and, thus, our representation is view-based, in contrast to the physically-based 3D hair models in graphics. The generative model has three levels. The bottom level is the high-frequency band of the hair image. The middle level is a piecewise smooth vector field for the hair orientation, gradient strength, and growth directions. The top level is an attribute sketch graph for representing the discontinuities in the vector field. A sketch graph typically has a number of sketch curves which are divided into 11 types of directed primitives. Each primitive is a small window (say, 5 x 7 pixels) where the orientations and growth directions are defined in parametric forms, for example, hair boundaries, occluding lines between hair strands, dividing lines on top of the hair, etc. In addition to the three-level representation, we model the shading effects, i.e., the low-frequency band of the hair image, by a linear superposition of some Gaussian image bases, and we encode the hair color by a color map. The inference algorithm is divided into two stages: 1) We compute the undirected orientation field and sketch graph from an input image and 2) we compute the hair growth direction for the sketch curves and the orientation field using a Swendsen-Wang cut algorithm. Both steps maximize a joint Bayesian posterior probability. The generative model provides a straightforward way for synthesizing realistic hair images and stylistic drawings (rendering) from a sketch graph and a few Gaussian bases. The latter can be either inferred from a real hair image or input (edited) manually using a simple sketching interface. We test our algorithm on a large data set of hair images with diverse hair styles. Analysis, synthesis, and rendering results are reported in the experiments.
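The shading term described in the abstract is a linear superposition of Gaussian image bases. The sketch below illustrates that idea only; the paper's actual basis parameterization and fitting procedure are not specified here, so the axis-aligned Gaussian form, parameter names, and example values are assumptions for illustration.

```python
import numpy as np

def gaussian_base(h, w, cx, cy, sx, sy, amp):
    """One Gaussian image base of size (h, w), centered at (cx, cy),
    with axis-aligned standard deviations (sx, sy) and amplitude amp.
    This axis-aligned form is an illustrative assumption."""
    y, x = np.mgrid[0:h, 0:w]
    return amp * np.exp(-(((x - cx) ** 2) / (2.0 * sx ** 2)
                          + ((y - cy) ** 2) / (2.0 * sy ** 2)))

def shading_image(h, w, bases):
    """Low-frequency shading band as a linear superposition of
    Gaussian bases; each base is a (cx, cy, sx, sy, amp) tuple."""
    img = np.zeros((h, w))
    for cx, cy, sx, sy, amp in bases:
        img += gaussian_base(h, w, cx, cy, sx, sy, amp)
    return img

# Example: two hypothetical bases combined into one shading field.
shading = shading_image(64, 64, [(20, 20, 12, 12, 0.8),
                                 (44, 40, 8, 16, 0.5)])
```

In the paper's pipeline, the base parameters would be inferred from a real hair image or edited through the sketching interface; here they are hand-picked placeholders.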