An illumination normalization method for face recognition is developed, since lighting conditions are difficult to control efficiently in practical applications. Because the irradiation light varies little within a small area, a mean estimation is used to model the illumination component of a face image, and this component is removed by subtracting the mean estimate from the original image. To highlight facial texture features and suppress the influence of neighboring regions, the quotient image is divided by the mean of its modulus; the exponential of this ratio closely approximates a relative reflectance component. Since the gray values of the facial organs are lower than those of the facial skin, postprocessing is applied to the images to highlight facial texture for recognition. Experiments show that the proposed method outperforms state-of-the-art methods.

1. Introduction

Face recognition is one of the most active research areas due to its wide range of applications [1, 2], including security access control, surveillance monitoring, and intelligent human-machine interfaces. Its accuracy, however, is still far from ideal. Among the many adverse factors, appearance variation caused by illumination remains one of the major unsolved problems. Approaches such as the illumination cones method [3] and the 9D linear subspace method [4] have been proposed to handle illumination variation and improve face recognition, but their main drawback is the need for knowledge about the light source or for a large volume of training data. To overcome this limitation, region-based image preprocessing methods were proposed in [5–7]; these methods, however, introduce noise that makes the global illumination discontinuous.
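The pipeline described above (local mean estimation, subtraction, normalization by the modulus mean, and exponentiation) can be sketched roughly as follows. The window size, the use of a uniform local mean, and the small stabilizing constant `eps` are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def normalize_illumination(img, win=7, eps=1e-6):
    """Rough sketch of the described normalization (win and eps are assumed)."""
    img = img.astype(np.float64)
    # Estimate the slowly varying illumination component with a local mean,
    # assuming the irradiation light varies little within a small area.
    illum = uniform_filter(img, size=win)
    # Remove the illumination estimate by subtraction.
    quotient = img - illum
    # Ratio of the quotient image to the mean of its modulus (absolute value).
    ratio = quotient / (np.abs(quotient).mean() + eps)
    # The exponential of this ratio approximates a relative reflectance.
    return np.exp(ratio)
```

Postprocessing to emphasize the darker facial organs would follow this step; it is omitted here since the paper introduces it separately.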
Several illumination normalization methods that require no training images and have low computational complexity have been proposed to deal with varied lighting [8, 9], such as multiscale retinex (MSR) [10], the wavelet-based normalization technique (WA) [11], and the DCT-based normalization technique (DCT) [12]. The facial features they extract are poor, however, and their histograms are messy. To highlight facial texture features, further methods have been proposed, including adaptive nonlocal means (ANL) [13], DoG filtering (DOG) [14], the steerable filter (SF) [15], and wavelet denoising (WD) [16]. Overall gray values after processing differ to varying degrees, and partial facial feature
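Of the comparison methods just listed, DoG filtering is the simplest to illustrate: it subtracts two Gaussian-smoothed copies of the image, acting as a band-pass filter that suppresses slowly varying illumination. The two sigma values below are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_filter(img, sigma1=1.0, sigma2=2.0):
    """Difference-of-Gaussians band-pass filter (sigma values are assumed)."""
    img = img.astype(np.float64)
    # Subtracting a wider Gaussian blur from a narrower one removes both
    # high-frequency noise and the low-frequency illumination component.
    return gaussian_filter(img, sigma1) - gaussian_filter(img, sigma2)
```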
References
[1] J. Lu, Y. P. Tan, and G. Wang, “Discriminative multi-manifold analysis for face recognition from a single training sample per person,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 39–51, 2013.
[2] M. Matsumoto, “Cognition-based parameter setting of non-linear filters using a face recognition system,” IET Image Processing, vol. 6, no. 8, pp. 1057–1063, 2012.
[3] J. Ho, M.-H. Yang, J. Lim, K.-C. Lee, and D. Kriegman, “Clustering appearances of objects under varying illumination conditions,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 11–18, Madison, Wis, USA, June 2003.
[4] R. Basri and D. W. Jacobs, “Lambertian reflectance and linear subspaces,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 2, pp. 218–233, 2003.
[5] G. An, J. Wu, and Q. Ruan, “An illumination normalization model for face recognition under varied lighting conditions,” Pattern Recognition Letters, vol. 31, no. 9, pp. 1056–1067, 2010.
[6] S. Du and R. K. Ward, “Adaptive region-based image enhancement method for robust face recognition under variable illumination conditions,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, no. 9, pp. 1165–1175, 2010.
[7] P.-C. Hsieh and P.-C. Tung, “Illumination-robust face recognition using an efficient mirror technique,” in Proceedings of the 2nd International Congress on Image and Signal Processing (CISP '09), pp. 1–5, Taiwan, China, October 2009.
[8] V. Štruc and N. Pavešić, “Photometric normalization techniques for illumination invariance,” in Advances in Face Image Analysis: Techniques and Technologies, Y. J. Zhang, Ed., pp. 279–300, IGI Global, 2011.
[9] V. Štruc and N. Pavešić, “Gabor-based kernel partial-least-squares discrimination features for face recognition,” Informatica, vol. 20, no. 1, pp. 115–138, 2009.
[10] D. J. Jobson, Z.-U. Rahman, and G. A. Woodell, “A multiscale retinex for bridging the gap between color images and the human observation of scenes,” IEEE Transactions on Image Processing, vol. 6, no. 7, pp. 965–976, 1997.
[11] S. Du and R. Ward, “Wavelet-based illumination normalization for face recognition,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '05), vol. 2, pp. 954–957, Genova, Italy, September 2005.
[12] W. Chen, M. J. Er, and S. Wu, “Illumination compensation and normalization for robust face recognition using discrete cosine transform in logarithm domain,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 36, no. 2, pp. 458–466, 2006.
[13] V. Štruc and N. Pavešić, Illumination Invariant Face Recognition by Non-Local Smoothing, Springer, Heidelberg, Germany, 2009.
[14] S. Nilufar, N. Ray, and H. Zhang, “Object detection with DoG scale-space: a multiple kernel learning approach,” IEEE Transactions on Image Processing, vol. 21, no. 8, pp. 3744–3756, 2012.
[15] W. T. Freeman and E. H. Adelson, “The design and use of steerable filters,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 9, pp. 891–906, 1991.
[16] T. Zhang, B. Fang, Y. Yuan et al., “Multiscale facial structure representation for face recognition under varying illumination,” Pattern Recognition, vol. 42, no. 2, pp. 251–258, 2009.
[17] N.-S. Vu and A. Caplier, “Illumination-robust face recognition using retina modeling,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '09), pp. 3289–3292, Cairo, Egypt, November 2009.
[18] T. Zhang, Y. Y. Tang, B. Fang, Z. Shang, and X. Liu, “Face recognition under varying illumination using gradientfaces,” IEEE Transactions on Image Processing, vol. 18, no. 11, pp. 2599–2606, 2009.
[19] B. Wang, W. Li, W. Yang, and Q. Liao, “Illumination normalization based on Weber's law with application to face recognition,” IEEE Signal Processing Letters, vol. 18, no. 8, pp. 462–465, 2011.
[20] A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman, “From few to many: illumination cone models for face recognition under variable lighting and pose,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 643–660, 2001.
[21] K.-C. Lee, J. Ho, and D. J. Kriegman, “Acquiring linear subspaces for face recognition under variable lighting,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 5, pp. 684–698, 2005.
[22] T. Sim, S. Baker, and M. Bsat, “The CMU pose, illumination, and expression database,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 12, pp. 1615–1618, 2003.
[23] W. Gao, B. Cao, S. Shan et al., “The CAS-PEAL large-scale Chinese face database and baseline evaluations,” IEEE Transactions on Systems, Man, and Cybernetics A, vol. 38, no. 1, pp. 149–161, 2008.
[24] M. A. Turk and A. P. Pentland, “Face recognition using eigenfaces,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 586–591, June 1991.
[25] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, “Eigenfaces vs. fisherfaces: recognition using class specific linear projection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711–720, 1997.
[26] T. A. Welch, “A technique for high-performance data compression,” Computer, vol. 17, no. 6, pp. 8–19, 1984.