This paper introduces novel models for all steps of a face recognition system. In the face detection step, we propose a hybrid model combining AdaBoost and an Artificial Neural Network (ABANN) to perform detection efficiently. In the next step, the faces detected by ABANN are aligned using an Active Shape Model and a Multilayer Perceptron. For this alignment step, we propose a new 2D local texture model based on a Multilayer Perceptron. The model's classifier significantly improves the accuracy and robustness of local searching on faces with expression variation and ambiguous contours. In the feature extraction step, we describe a methodology that improves efficiency by combining two methods: a geometric-feature-based method and Independent Component Analysis. In the face matching step, we apply a model that combines many neural networks to match the geometric features of the human face. Because the model links many neural networks together, we call it a Multi Artificial Neural Network. The MIT + CMU database is used to evaluate the proposed methods for face detection and alignment. Finally, the experimental results of all steps on the Caltech database show the feasibility of the proposed model.

1. Introduction

Face recognition is a visual pattern recognition problem. In detail, a face recognition system takes an arbitrary image as input and searches a database to output the identities of the people appearing in that image. A face recognition system generally consists of four modules, as depicted in Figure 1: detection, alignment, feature extraction, and matching, where localization and normalization (face detection and alignment) are processing steps carried out before face recognition (facial feature extraction and matching) is performed [1].

Figure 1: Structure of a face recognition system.

Face detection segments the face areas from the background. In the case of video, the detected faces may need to be tracked by a face tracking component. Face alignment aims at achieving more accurate localization and thereby at normalizing faces, whereas face detection provides only coarse estimates of the location and scale of each detected face. Facial components, such as the eyes, nose, mouth, and facial outline, are located; based on these location points, the input face image is normalized with respect to geometrical properties, such as size and pose, using geometrical transforms or morphing. The face is usually further normalized with respect to photometrical properties such as illumination and gray scale. After a face is normalized geometrically and photometrically, feature extraction and matching can be performed on the normalized face.
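To make the data flow between the four modules concrete, the following Python sketch outlines only the pipeline structure. Every class and function name in it is a hypothetical placeholder, and each stage uses a trivial stand-in rather than the ABANN detector, the ASM/MLP alignment, the geometric-feature + ICA extraction, or the MANN matching models developed in this paper.

# Minimal structural sketch of the four-module pipeline described above.
# All names are hypothetical placeholders; no trained model is included.
from dataclasses import dataclass
from typing import List, Sequence

import numpy as np


@dataclass
class Face:
    """A detected face region with its coarse location and scale."""
    image: np.ndarray          # cropped gray-scale face patch
    bbox: tuple                # (x, y, width, height) in the input image


def detect_faces(image: np.ndarray) -> List[Face]:
    """Face detection: segment face areas from the background.

    Stand-in for the AdaBoost + ANN (ABANN) detector; it returns an empty
    list so the sketch stays runnable without a trained model.
    """
    return []


def align_face(face: Face) -> Face:
    """Face alignment: locate eyes, nose, mouth, and facial outline, then
    normalize the patch geometrically (size, pose) and photometrically
    (illumination, gray scale). Stand-in for the ASM + MLP local search."""
    normalized = face.image.astype(np.float32)
    if normalized.size:  # simple photometric normalization as a placeholder
        normalized = (normalized - normalized.mean()) / (normalized.std() + 1e-8)
    return Face(image=normalized, bbox=face.bbox)


def extract_features(face: Face) -> np.ndarray:
    """Feature extraction: map the normalized face to a feature vector.
    Stand-in for the combined geometric-feature + ICA representation."""
    return face.image.ravel()


def match(features: np.ndarray, gallery: Sequence[np.ndarray]) -> int:
    """Face matching: return the index of the closest gallery entry.
    Stand-in for the MANN classifier; nearest-neighbour distance is used
    only to keep the sketch self-contained."""
    distances = [np.linalg.norm(features - g) for g in gallery]
    return int(np.argmin(distances)) if distances else -1


def recognize(image: np.ndarray, gallery: Sequence[np.ndarray]) -> List[int]:
    """Run the full pipeline: detection -> alignment -> extraction -> matching."""
    return [match(extract_features(align_face(f)), gallery)
            for f in detect_faces(image)]

In an actual system, each stand-in body would be replaced by the corresponding trained model while the interfaces between the four modules remain unchanged.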
References
[1] S. Z. Li and A. K. Jain, Handbook of Face Recognition, Springer, New York, NY, USA, 2004.
[2] C. Bishop, Pattern Recognition and Machine Learning, Springer, New York, NY, USA, 2006.
[3] CBCL Database #1, Center for Biological and Computational Learning at MIT, http://cbcl.mit.edu/software-datasets/FaceData2.html.
[4] M. Weber, Frontal Face Database, California Institute of Technology, 1999, http://www.vision.caltech.edu/html-files/archive.html/.
[5] M. H. Yang, D. J. Kriegman, and N. Ahuja, “Detecting faces in images: a survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 1, pp. 34–58, 2002.
[6] P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 511–518, December 2001.
[7] H. A. Rowley, Neural Network-Based Face Detection, School of Computer Science, Carnegie Mellon University, Pittsburgh, Pa, USA, 1999.
[8] T. F. Cootes, “Statistical models of appearance for computer vision,” http://www.isbe.man.ac.uk/~bim/refs.html/.
[9] F. Jiao, S. Li, H.-Y. Shum, and D. Schuurmans, “Face alignment using statistical models and wavelet features,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 2003.
[10] F. Zuo and P. H. N. de With, “Fast facial feature extraction using a deformable shape model with Haar-wavelet based local texture attributes,” in Proceedings of the International Conference on Image Processing (ICIP '04), pp. 1425–1428, October 2004.
[11] S. Yan, M. Li, H. Zhang, and Q. Cheng, “Ranking prior likelihood distributions for Bayesian shape localization framework,” in Proceedings of the 9th IEEE International Conference on Computer Vision, pp. 51–58, October 2003.
[12] J. Tu, Z. Zhang, Z. Zeng, and T. Huang, “Face localization via hierarchical CONDENSATION with Fisher Boosting feature selection,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. I719–I724, July 2004.
[13] S. Marcel, “Artificial neural network for pattern recognition: application to face detection and recognition,” 2004, http://www.idiap.ch/~marcel/.
[14] T. Kawaguchi, D. Hidaka, and M. Rizon, “Detection of eyes from human faces by Hough transform and separability filter,” in Proceedings of the International Conference on Image Processing, vol. 1, pp. 49–52, Vancouver, Canada, 2000.
[15] A. L. Yuille, D. S. Cohen, and P. W. Hallinan, “Feature extraction from faces using deformable templates,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 104–109, San Diego, CA, USA, June 1989.
[16] L. Zhang, “Estimation of the mouth features using deformable templates,” in Proceedings of the International Conference on Image Processing, vol. 3, pp. 328–331, Santa Barbara, CA, USA, October 1997.
[17] P. Kuo and J. Hannah, “An improved eye feature extraction algorithm based on deformable templates,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '05), pp. 1206–1209, September 2005.
[18] S. L. Phung, A. Bouzerdoum, and D. Chai, “Skin segmentation using color and edge information,” in Proceedings of the 7th International Symposium on Signal Processing and Its Applications, vol. 1, pp. 525–528, July 2003.
[19] T. Sawangsri, V. Patanavijit, and S. Jitapunkul, “Segmentation using novel skin-color map and morphological technique,” in Proceedings of the World Academy of Science, Engineering and Technology, vol. 2, January 2005.
[20] M. Turk and A. Pentland, “Face recognition using eigenfaces,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–591, 1991.
[21] B. A. Draper, K. Baek, M. S. Bartlett, and J. R. Beveridge, “Recognizing faces with PCA and ICA,” Computer Vision and Image Understanding, vol. 91, no. 1-2, pp. 115–137, 2003.
[22] M. S. Bartlett, J. R. Movellan, and T. J. Sejnowski, “Face recognition by independent component analysis,” IEEE Transactions on Neural Networks, vol. 13, no. 6, pp. 1450–1464, 2002.
[23] P. Comon, “Independent component analysis—a new concept?” Signal Processing, vol. 36, no. 3, pp. 287–314, 1994.
[24] T. T. Do and T. H. Le, “Facial feature extraction using geometric feature and independent component analysis,” in Proceedings of the Pacific Rim Knowledge Acquisition Workshop (PKAW '08), Hanoi, Vietnam, December 2008, Revised Selected Papers in Knowledge Acquisition: Approaches, Algorithms and Applications, Lecture Notes in Artificial Intelligence, Springer, Berlin, Germany, pp. 231–241, 2009.
[25] A. Hyvärinen and E. Oja, “Independent component analysis: algorithms and applications,” Neural Networks, vol. 13, no. 4-5, pp. 411–430, 2000.
[26] O. A. Uwechue and A. S. Pandya, Human Face Recognition Using Third-Order Synthetic Neural Networks, The Springer International Series in Engineering and Computer Science, Springer, 1st edition, 1997.
[27] H. A. Rowley, S. Baluja, and T. Kanade, “Neural network-based face detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 1, pp. 23–38, 1998.
[28] S. Lawrence, C. L. Giles, A. C. Tsoi, and A. D. Back, “Face recognition: a convolutional neural network approach,” IEEE Transactions on Neural Networks, Special Issue on Neural Networks and Pattern Recognition, vol. 8, no. 1, pp. 98–113, 1997.
[29] T. H. Le, N. T. D. Nguyen, and H. S. Tran, “Landscape image of regional tourism classification using neural network,” in Proceedings of the 3rd International Conference on Communications and Electronics (ICCE '10), Nha Trang, Vietnam, August 2010.
[30] L. H. Thai, Building, development and application of some combined models of neural networks (NN), fuzzy logic (FL) and genetic algorithms (GA), Ph.D. thesis, Natural Science University, HCM City, Vietnam, 2004.
[31] P. J. Phillips, H. Moon, P. J. Rauss, and S. A. Rizvi, “The FERET evaluation methodology for face-recognition algorithms,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 10, pp. 1090–1104, 2000.
[32] R. Lienhart and J. Maydt, “An extended set of Haar-like features for rapid object detection,” in Proceedings of the International Conference on Image Processing, September 2002.