Driver fatigue is a principal cause of the increasing number of traffic accidents. Fatigue presents a real danger on the road, since it reduces the driver's capacity to react and to analyze information. In this paper we propose an efficient and nonintrusive system for monitoring driver fatigue based on yawning extraction. The proposed scheme uses face extraction based on a support vector machine (SVM) and a new approach for mouth detection, based on the circular Hough transform (CHT), applied to extracted mouth regions. Our system requires neither training data at any step nor special cameras. Experimental results demonstrating system performance are reported; the experiments were run on real video sequences acquired by a low-cost web camera under various lighting conditions.

1. Introduction

The increasing number of traffic accidents due to diminished driver vigilance has become a serious problem for society. Statistics show that 20% of all traffic accidents are due to drivers with a diminished vigilance level [1]. Furthermore, accidents related to driver hypovigilance are more serious than other types of accidents, since hypovigilant drivers do not take correct action prior to a collision. Active safety research focuses on preventing such accidents by developing systems that monitor the driver's vigilance level and alert him when he is not paying attention to the road. Hypovigilance can generally be identified by sensing physiological characteristics, driver operations, or vehicle responses, or by monitoring the driver's responses. Among these methods, the techniques based on human physiological phenomena are the most accurate. These techniques are implemented in two ways: measuring changes in physiological signals, such as brain waves, heart rate, and eye blinking; and measuring physical changes, such as the driver's head pose and the state of the eyes or the mouth.
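The mouth-detection step mentioned above relies on the circular Hough transform. As a rough illustration of the underlying idea (a minimal NumPy sketch, not the authors' implementation), the following function accumulates votes for circle centres over a set of candidate radii and returns the strongest circle; in the paper's setting, a strong circular response in the mouth region would correspond to a wide-open, yawning mouth:

```python
import numpy as np

def circular_hough(edges, radii):
    """Vote for circle centres over candidate radii in a binary edge map.

    edges: 2-D boolean array of edge pixels.
    radii: list of integer radii to test.
    Returns (row, col, radius) of the strongest circle found.
    """
    h, w = edges.shape
    acc = np.zeros((len(radii), h, w), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
    for ri, r in enumerate(radii):
        # Each edge pixel votes for every centre lying a distance r away.
        cy = (ys[:, None] - r * np.sin(thetas)[None, :]).round().astype(int)
        cx = (xs[:, None] - r * np.cos(thetas)[None, :]).round().astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc[ri], (cy[ok], cx[ok]), 1)
    # The accumulator peak marks the best (radius, centre) combination.
    ri, by, bx = np.unravel_index(np.argmax(acc), acc.shape)
    return by, bx, radii[ri]
```

On a synthetic edge image containing a single circle, the accumulator peaks at that circle's centre and radius; in practice one would run this on an edge map of the extracted mouth region and threshold the peak strength.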
The first technique, while being the most accurate, is not realistic, since sensing electrodes would have to be attached directly to the driver's body and would therefore be annoying and distracting. In addition, long periods of driving would result in perspiration on the sensors, diminishing their ability to monitor accurately. The techniques based on measuring physical changes are nonintrusive and more suitable for real-world driving conditions, since they use video cameras to detect those changes. Eye state analysis [2–6], head pose estimation [7, 8], and mouth state analysis [9, 10] are the most relevant physical changes for detecting driver hypovigilance. Driver operations and
References
[1] L. Bergasa, J. Nuevo, M. Sotelo, and M. Vazquez, “Real-time system for monitoring driver vigilance,” IEEE Transactions on Intelligent Transportation Systems, vol. 7, no. 1, pp. 63–77, 2006.
[2] N. P. Papanikolopoulos and M. Eriksson, “Driver fatigue: a vision-based approach to automatic diagnosis,” Transportation Research C: Emerging Technologies, vol. 9, no. 6, pp. 399–413, 2001.
[3] G. Zhang, B. Cheng, R. Feng, and X. Zhang, “A real-time adaptive learning method for driver eye detection,” in Digital Image Computing: Techniques and Applications, pp. 300–304, 2008.
[4] R. Grace, V. Byrne, D. Bierman et al., “A drowsy driver detection system for heavy vehicles,” in Proceedings of the 17th Digital Avionics Systems Conference, vol. 2, pp. 136/1–136/8, 2001.
[5] D. Tripathi and N. Rath, “A novel approach to solve drowsy driver problem by using eye-localization technique using CHT,” International Journal of Recent Trends in Engineering, vol. 2, no. 2, pp. 139–145, 2009.
[6] T. D’Orazio, M. Leo, P. Spagnolo, and C. Guaragnella, “A neural system for eye detection in a driver vigilance application,” in Proceedings of the 7th International IEEE Conference on Intelligent Transportation Systems (ITSC ’04), pp. 320–325, October 2004.
[7] P. Smith, M. Shah, and N. da Vitoria Lobo, “Monitoring head/eye motion for driver alertness with one camera,” in Proceedings of the 15th International Conference on Pattern Recognition (ICPR ’00), vol. 4, pp. 636–642, Barcelona, Spain, 2000.
[8] T. Wang and P. Shi, “Yawning detection for determining driver drowsiness,” in Proceedings of the IEEE International Workshop on VLSI Design and Video Technology, pp. 373–376, Suzhou, China, May 2005.
[9] M. Mohanty, A. Mishra, and A. Routray, “A non-rigid motion estimation algorithm for yawn detection in human drivers,” International Journal of Computational Vision and Robotics, vol. 1, no. 1, pp. 89–109, 2009.
[10] M. Saradadevi and P. Bajaj, “Driver fatigue detection using mouth and yawning analysis,” International Journal of Computer Science and Network Security, vol. 8, no. 6, 2008.
[11] A. L. Yuille, P. W. Hallinan, and D. S. Cohen, “Feature extraction from faces using deformable templates,” International Journal of Computer Vision, vol. 8, no. 2, pp. 99–111, 1992.
[12] M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: active contour models,” International Journal of Computer Vision, vol. 1, no. 4, pp. 321–331, 1988.
[13] T. F. Cootes, G. J. Edwards, and C. J. Taylor, “Active appearance models,” in European Conference on Computer Vision, pp. 484–498, 1998.
[14] Z. Zhu, K. Fujimura, and Q. Ji, “Real-time eye detection and tracking under various light conditions,” in Proceedings of ETRA: Eye Tracking Research & Applications Symposium, pp. 139–144, ACM Press, New York, NY, USA, 2002.
[15] A. Haro, M. Flickner, and I. Essa, “Detecting and tracking eyes by using their physiological properties, dynamics, and appearance,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR ’00), vol. 1, pp. 163–168, Hilton Head Island, SC, USA, June 2000.
[16] W. Zhang, H. Chen, P. Yao, B. Li, and Z. Zhuang, “Precise eye localization with AdaBoost and fast radial symmetry,” in Proceedings of the International Conference on Computational Intelligence and Security (ICCIAS ’06), pp. 725–730, October 2006.
[17] W. Rongben, G. Lie, T. Bingliang, and J. Lisheng, “Monitoring mouth movement for driver fatigue or distraction with one camera,” in Proceedings of the 7th IEEE International Conference on Intelligent Transportation Systems, pp. 314–319, October 2004.
[18] T. Kawaguchi, D. Hidaka, and M. Rizon, “Detection of eyes from human faces by Hough transform and separability filter,” in Proceedings of the International Conference on Image Processing (ICIP ’00), pp. 49–52, Vancouver, Canada, September 2000.
[19] Z. Zhou and X. Geng, “Projection functions for eye detection,” Pattern Recognition, vol. 37, no. 5, pp. 1049–1056, 2004.
[20] F. Timm and E. Barth, “Accurate eye centre localisation by means of gradients,” in Proceedings of the International Conference on Computer Vision Theory and Application (VISAPP ’11), pp. 125–130, INSTICC, Algarve, Portugal, March 2011.
[21] X. Fan, B.-C. Yin, and Y.-F. Sun, “Yawning detection for monitoring driver fatigue,” in Proceedings of the 6th International Conference on Machine Learning and Cybernetics (ICMLC ’07), vol. 2, pp. 664–668, Hong Kong, China, August 2007.
[22] C. J. C. Burges, “A tutorial on support vector machines for pattern recognition,” Data Mining and Knowledge Discovery, vol. 2, no. 2, pp. 121–167, 1998.
[23] S. Romdhani, P. Torr, B. Schölkopf, and A. Blake, “Computationally efficient face detection,” in Proceedings of the 8th International Conference on Computer Vision, vol. 2, pp. 695–700, July 2001.
[24] M. Franz, W. Kienzle, G. Bakir, and B. Schölkopf, “Face detection - efficient and rank deficient,” Advances in Neural Information Processing Systems, vol. 17, pp. 673–680, 2005.
[25] R. O. Duda and P. E. Hart, “Use of the Hough transformation to detect lines and curves in pictures,” Communications of the ACM, vol. 15, no. 1, pp. 11–15, 1972.
[26] B. Hrishikesh, S. Mahajan, A. Bhagwat et al., “Design of drodeasys (drowsy detection and alarming system),” Advances in Computational Algorithms and Data Analysis, pp. 75–79, 2009.