

Automatic location of facial feature points and synthesis of facial sketches using direct combined model.

Authors

Tu Ching-Ting, Lien Jenn-Jier James

Affiliations

Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan.

Publication information

IEEE Trans Syst Man Cybern B Cybern. 2010 Aug;40(4):1158-69. doi: 10.1109/TSMCB.2009.2035154. Epub 2009 Nov 20.

Abstract

Automatically locating multiple feature points (i.e., the shape) in a facial image and then synthesizing the corresponding facial sketch are highly challenging since facial images typically exhibit a wide range of poses, expressions, and scales, and have differing degrees of illumination and/or occlusion. When the facial sketches are to be synthesized in the unique sketching style of a particular artist, the problem becomes even more complex. To resolve these problems, this paper develops an automatic facial sketch synthesis system based on a novel direct combined model (DCM) algorithm. The proposed system executes three cascaded procedures, namely, 1) synthesis of the facial shape from the input texture information (i.e., the facial image); 2) synthesis of the exaggerated facial shape from the synthesized facial shape; and 3) synthesis of a sketch from the original input image and the synthesized exaggerated shape. Previous proposals for reconstructing facial shapes and synthesizing the corresponding facial sketches are heavily reliant on the quality of the texture reconstruction results, which, in turn, are highly sensitive to occlusion and lighting effects in the input image. However, the DCM approach proposed in this paper accurately reconstructs the facial shape and then produces lifelike synthesized facial sketches without the need to recover occluded feature points or to restore the texture information lost as a result of unfavorable lighting conditions. Moreover, the DCM approach is capable of synthesizing facial sketches from input images with a wide variety of facial poses, gaze directions, and facial expressions even when such images are not included within the original training data set.
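The abstract describes the DCM pipeline only at a high level and does not give its mathematical formulation. The following is a minimal illustrative sketch in Python/NumPy, assuming that "direct combined model" can be approximated here by a joint PCA over concatenated texture and shape training vectors, from which the shape of a new face is inferred by projecting its texture onto the combined subspace. All function names, variable names, and the least-squares projection step are hypothetical illustrations, not the authors' implementation.

```python
# Illustrative sketch only: a joint PCA over concatenated [texture | shape]
# vectors, used to estimate facial feature-point positions (shape) from a new
# facial image (texture). This approximates step 1 of the cascaded pipeline in
# the abstract; it is not the paper's DCM algorithm, whose details are not
# given in the abstract.
import numpy as np

def fit_combined_model(textures, shapes, n_components=50):
    """Fit a joint PCA over concatenated texture and shape training vectors.

    textures: (N, d_t) array of vectorized face images
    shapes:   (N, d_s) array of vectorized feature-point coordinates
    Returns the combined mean, the top principal directions, and d_t.
    """
    X = np.hstack([textures, shapes])            # (N, d_t + d_s)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Principal directions of the combined texture+shape space.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:n_components]                    # (k, d_t + d_s)
    return mean, basis, textures.shape[1]

def infer_shape_from_texture(texture, mean, basis, d_t):
    """Estimate shape coordinates for a new texture via the combined subspace."""
    mean_t, mean_s = mean[:d_t], mean[d_t:]
    B_t, B_s = basis[:, :d_t], basis[:, d_t:]    # texture / shape blocks of the basis
    # Least-squares projection of the texture onto the texture block of the basis...
    coeffs, *_ = np.linalg.lstsq(B_t.T, texture - mean_t, rcond=None)
    # ...then reconstruction of the coupled shape block with the same coefficients.
    return mean_s + B_s.T @ coeffs
```

Because the shape estimate comes from coefficients shared with the texture block of a single combined subspace, missing or corrupted texture regions degrade the projection gracefully rather than requiring explicit texture restoration, which is the property the abstract emphasizes for handling occlusion and lighting; the same joint-subspace idea would, under these assumptions, extend to the shape-to-exaggerated-shape and image-plus-shape-to-sketch stages.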

