Lu Xiaoguang, Jain Anil K, Colbry Dirk
Department of Computer Science and Engineering, Michigan State University, 3115 Engineering Building, East Lansing, MI 48824, USA.
IEEE Trans Pattern Anal Mach Intell. 2006 Jan;28(1):31-43. doi: 10.1109/TPAMI.2006.15.
The performance of face recognition systems that use two-dimensional images depends on factors such as lighting and the subject's pose. We are developing a face recognition system that utilizes three-dimensional shape information to make the system more robust to arbitrary pose and lighting. For each subject, a 3D face model is constructed by integrating several 2.5D face scans captured from different views. 2.5D is a simplified 3D (x, y, z) surface representation that contains at most one depth value (z direction) for every point in the (x, y) plane. Two different modalities provided by the facial scan, namely shape and texture, are utilized and integrated for face matching. The recognition engine consists of two components: surface matching and appearance-based matching. The surface matching component is based on a modified Iterative Closest Point (ICP) algorithm. The candidate list from the gallery used for appearance matching is dynamically generated based on the output of the surface matching component, which reduces the complexity of the appearance-based matching stage. Three-dimensional models in the gallery are used to synthesize new appearance samples with pose and illumination variations, and the synthesized face images are used in discriminant subspace analysis. A weighted sum rule is applied to combine the scores given by the two matching components. Experimental results are given for matching a database of 200 3D face models against 598 independent 2.5D test scans acquired under varying pose, with some lighting and expression changes. These results show the feasibility of the proposed matching scheme.
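As a rough illustration of the two-stage pipeline the abstract describes (surface matching prunes the gallery to a dynamic candidate list, and a weighted sum rule then fuses the surface and appearance scores), the sketch below shows one plausible way to implement the fusion stage. All function names, the min-max normalization, the top-k candidate-list size, and the equal default weights are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the score-fusion stage: the surface-matching (ICP) scores
# prune the gallery to a short candidate list, and a weighted sum rule combines
# them with the appearance-based scores. Helper names, normalization, and
# weights are assumptions for illustration only.

import numpy as np


def min_max_normalize(scores):
    """Map raw matching scores to [0, 1] (assumed normalization scheme)."""
    s = np.asarray(scores, dtype=float)
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)


def fuse_scores(surface_scores, appearance_scores, w_surface=0.5):
    """Weighted sum rule: fused = w * surface + (1 - w) * appearance."""
    s = min_max_normalize(surface_scores)
    a = min_max_normalize(appearance_scores)
    return w_surface * s + (1.0 - w_surface) * a


def match_probe(surface_scores, appearance_matcher, top_k=10):
    """Build the appearance-matching candidate list dynamically from the
    surface-matching output, then fuse the two scores per candidate."""
    surface_scores = np.asarray(surface_scores, dtype=float)
    # Keep only the top_k gallery entries ranked by surface similarity.
    candidates = np.argsort(surface_scores)[::-1][:top_k]
    appearance_scores = np.array([appearance_matcher(g) for g in candidates])
    fused = fuse_scores(surface_scores[candidates], appearance_scores)
    return candidates[int(np.argmax(fused))]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gallery_size = 200
    surface = rng.random(gallery_size)      # stand-in ICP similarity scores
    appearance = rng.random(gallery_size)   # stand-in subspace similarity scores
    best = match_probe(surface, lambda g: appearance[g])
    print("best-matching gallery index:", best)
```

In this reading, restricting the appearance matcher to the surface-matching top-k is what reduces the complexity of the appearance-based stage, as noted in the abstract.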