Yi Jizheng, Mao Xia, Chen Lijiang, Xue Yuli, Rovetta Alberto, Caleanu Catalin-Daniel
School of Electronic and Information Engineering, Beihang University, Beijing, 100191, China.
Department of Mechanics, Polytechnic University of Milan, Milan, 20156, Italy.
PLoS One. 2015 Apr 23;10(4):e0122200. doi: 10.1371/journal.pone.0122200. eCollection 2015.
Illumination normalization of face images for face recognition and facial expression recognition is one of the most frequent and difficult problems in image processing. In order to obtain a face image with normal illumination, our method first divides the input face image into sixteen local regions and calculates the edge level percentage in each of them. Second, three local regions that meet the requirements of lower complexity and larger average gray value are selected to calculate the final illuminant direction, according to the error function between the measured intensity and the calculated intensity and the constraint function for an infinite light source model. After the final illuminant direction of the input face image is known, the Retinex algorithm is improved in two respects: (1) we optimize the surround function; (2) we clip the values at both ends of the histogram of the face image, determine the range of gray levels, and stretch this range to the dynamic range of the display device. Finally, we achieve illumination normalization and obtain the final face image. Unlike previous illumination normalization approaches, the method proposed in this paper requires neither a training step nor any knowledge of a 3D face or reflective surface model. Experimental results on the Extended Yale Face Database B and CMU-PIE show that our method achieves a better normalization effect compared with existing techniques.
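The abstract describes the block statistics and the histogram clip-and-stretch step only at a high level. The sketch below is a rough illustration of how such quantities might be computed: it splits a grayscale face image into a 4x4 grid, measures the edge level percentage and mean gray value of each block, and clips and stretches the gray-level range. The Sobel-based edge measure, the threshold, and the percentile cut-offs are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np
from scipy import ndimage

def block_statistics(gray, grid=(4, 4), edge_thresh=30.0):
    """For each of the grid's blocks (16 by default), return the fraction of
    pixels whose Sobel gradient magnitude exceeds edge_thresh (an assumed
    proxy for the paper's edge level percentage) and the block's mean gray value."""
    gray = gray.astype(float)
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    mag = np.hypot(gx, gy)
    h, w = gray.shape
    bh, bw = h // grid[0], w // grid[1]
    stats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = (slice(i * bh, (i + 1) * bh), slice(j * bw, (j + 1) * bw))
            edge_pct = float(np.mean(mag[block] > edge_thresh))
            mean_gray = float(gray[block].mean())
            stats.append((edge_pct, mean_gray))
    return stats

def clip_and_stretch(img, low_pct=1.0, high_pct=99.0, out_max=255):
    """Clip the extreme tails of the histogram (percentile cut-offs are assumed)
    and linearly stretch the remaining gray-level range to the display's
    dynamic range [0, out_max]."""
    lo, hi = np.percentile(img, (low_pct, high_pct))
    out = np.clip(img.astype(float), lo, hi)
    return ((out - lo) / max(hi - lo, 1e-6) * out_max).astype(np.uint8)
```

Under this reading, the three candidate regions would be those whose edge percentage is low (low complexity) and whose mean gray value is high, and the clip-and-stretch step would be applied to the Retinex output before display.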