Emotion-Aware Assistive System for Humanistic Care Based on the Orange Computing Concept

DOI: 10.1155/2012/183610


Abstract:

Mental care has become crucial with the rapid growth of the economy and technology. However, recent movements, such as green technologies, place more emphasis on environmental issues than on mental care. Therefore, this study presents an emerging technology called orange computing for mental care applications. Orange computing refers to health, happiness, and physiopsychological care computing, which focuses on designing algorithms and systems for enhancing body and mind balance. The representative color of orange computing originates from a harmonious fusion of passion, love, happiness, and warmth. To demonstrate the concept of orange computing, a case study on a human-machine interactive and assistive system for emotion care was conducted. The system detects the emotional states of users by analyzing their facial expressions, emotional speech, and laughter in a ubiquitous environment, and it provides corresponding feedback to users according to the results. Experimental results show that the system achieves an average audiovisual recognition rate of 81.8%, demonstrating its feasibility. Compared with traditional questionnaire-based approaches, the proposed system offers more efficient, real-time analysis of emotional status.

1. Introduction

During the past 200 years, the industrial revolution has had a considerable effect on human lifestyles [1, 2]. A number of changes occurred with the rapid growth of the economy and technology, including the information revolution [3], the second industrial revolution [4], and the development of biotechnology. Although this evolution was considerably beneficial to humans, it also caused a number of problems, such as capitalism, utilitarianism, the poverty gap, global warming, and an aging population [1, 2]. Recognizing these crises, a number of people have appealed for effective solutions [5]; for example, the green movement [6] has successfully created awareness of environmental protection and led to the development of green technology, or green computing. However, the green movement does not concentrate on body and mind balance. Therefore, a feasible way of narrowing the gap between technology and humanity is of utmost concern. In 1972, the King of Bhutan proposed a new concept, gross national happiness (GNH) [7], to describe the standard of living of a country instead of gross domestic product (GDP). The GNH has attracted considerable attention because it measured the mental health of
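The abstract describes classifying a user's emotional state from three cues (facial expressions, emotional speech, and laughter) and combining them into a single audiovisual decision. Since the full text is not reproduced on this page, the sketch below is only an illustration of a generic weighted decision-level fusion, not the authors' actual method; the label set, weights, and function names are all hypothetical assumptions.

# Minimal sketch (assumption, not the paper's method): weighted decision-level
# fusion of per-modality emotion scores, one common way to combine the facial,
# speech, and laughter cues the abstract mentions.
from typing import Dict, List

EMOTIONS: List[str] = ["happiness", "sadness", "anger", "neutral"]  # hypothetical label set

def fuse_emotion_scores(face_probs: Dict[str, float],
                        speech_probs: Dict[str, float],
                        laughter_prob: float,
                        w_face: float = 0.5,
                        w_speech: float = 0.5) -> str:
    """Return the emotion label with the highest fused score."""
    fused = {emo: w_face * face_probs.get(emo, 0.0)
                  + w_speech * speech_probs.get(emo, 0.0)
             for emo in EMOTIONS}
    # Treat detected laughter as extra evidence for a positive state.
    fused["happiness"] += 0.2 * laughter_prob
    return max(fused, key=fused.get)

# Usage: the modality classifiers mildly disagree; laughter tips the decision.
face = {"happiness": 0.4, "sadness": 0.1, "anger": 0.1, "neutral": 0.4}
speech = {"happiness": 0.3, "sadness": 0.2, "anger": 0.1, "neutral": 0.4}
print(fuse_emotion_scores(face, speech, laughter_prob=0.9))  # prints "happiness"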

References

[1]  M. C. Jensen, “The modern industrial revolution, exit, and the failure of internal control systems,” Journal of Applied Corporate Finance, vol. 22, no. 1, pp. 43–58, 1993.
[2]  F. Dunachie, “The success of the industrial revolution and the failure of political revolutions: how Britain got lucky,” Historical Notes, vol. 26, pp. 1–7, 1996.
[3]  Y. Veneris, “Modelling the transition from the industrial to the informational revolution,” Environment & Planning A, vol. 21, no. 3, pp. 399–416, 1990.
[4]  J. Hull, “The second industrial revolution: the history of a concept,” Storia Della Storiografia, vol. 36, pp. 81–90, 1999.
[5]  P. Ashworth, “High technology and humanity for intensive care,” Intensive Care Nursing, vol. 6, no. 3, pp. 150–160, 1990.
[6]  P. Gilk, Green Politics is Eutopian, Lutterworth Press, Cambridge, UK, 2009.
[7]  S. B. F. Hargens, “Integral development—taking the middle path towards gross national happiness,” Journal of Bhutan Studies, vol. 6, pp. 24–87, 2002.
[8]  A. J. Oswald, “Happiness and economic performance,” Economic Journal, vol. 107, no. 445, pp. 1815–1831, 1997.
[9]  D. Kahneman, E. Diener, and N. Schwarz, Well-Being: The Foundations of Hedonic Psychology, Russell Sage Foundation Publications, New York, NY, USA, 1998.
[10]  K. Passino, “World-wide education for the humanitarian technology challenge,” IEEE Technology and Society Magazine, vol. 29, no. 2, p. 4, 2010.
[11]  K. Lorincz, D. J. Malan, T. R. F. Fulford-Jones et al., “Sensor networks for emergency response: Challenges and opportunities,” IEEE Pervasive Computing, vol. 3, no. 4, pp. 16–23, 2004.
[12]  A. Waibel, “Speech processing in support of human-human communication,” in Proceedings of the 2nd International Symposium on Universal Communication (ISUC '08), p. 11, Osaka, Japan, December 2008.
[13]  J.-F. Wang, B.-W. Chen, Y.-Y. Chen, and Y.-C. Chen, “Orange computing: challenges and opportunities for affective signal processing,” in Proceedings of the International Conference on Signal Processing, Communications and Computing, pp. 1–4, Xi'an, China, September 2011.
[14]  J.-F. Wang and B.-W. Chen, “Orange computing: challenges and opportunities for awareness science and technology,” in Proceedings of the 3rd International Conference on Awareness Science and Technology, pp. 538–540, Dalian, China, September 2011.
[15]  Y. Hata, S. Kobashi, and H. Nakajima, “Human health care system of systems,” IEEE Systems Journal, vol. 3, no. 2, pp. 231–238, 2009.
[16]  K. Siau, “Health care informatics,” IEEE Transactions on Information Technology in Biomedicine, vol. 7, no. 1, pp. 1–7, 2003.
[17]  K. Kawamura, W. Dodd, and P. Ratanaswasd, “Robotic body-mind integration: Next grand challenge in robotics,” in Proceedings of the 13th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN '04), pp. 23–28, Kurashiki, Okayama, Japan, September 2004.
[18]  P. Belimpasakis and S. Moloney, “A platform for proving family oriented RESTful services hosted at home,” IEEE Transactions on Consumer Electronics, vol. 55, no. 2, pp. 690–698, 2009.
[19]  L. S. A. Low, N. C. Maddage, M. Lech, L. B. Sheeber, and N. B. Allen, “Detection of clinical depression in adolescents' speech during family interactions,” IEEE Transactions on Biomedical Engineering, vol. 58, no. 3, pp. 574–586, 2011.
[20]  C. Yu, J.-J. Yang, J.-C. Chen et al., “The development and evaluation of the citizen telehealth care service system: case study in Taipei,” in Proceedings of the 31st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '09), pp. 6095–6098, Minneapolis, Minn, USA, September 2009.
[21]  J. B. Jørgensen and C. Bossen, “Executable use cases: requirements for a pervasive health care system,” IEEE Software, vol. 21, no. 2, pp. 34–41, 2004.
[22]  A. Mihailidis, B. Carmichael, and J. Boger, “The use of computer vision in an intelligent environment to support aging-in-place, safety, and independence in the home,” IEEE Transactions on Information Technology in Biomedicine, vol. 8, no. 3, pp. 238–247, 2004.
[23]  Y. Hata, S. Kobashi, and H. Nakajima, “Human health care system of systems,” IEEE Systems Journal, vol. 3, no. 2, pp. 231–238, 2009.
[24]  Y. Gizatdinova and V. Surakka, “Feature-based detection of facial landmarks from neutral and expressive facial images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 1, pp. 135–139, 2006.
[25]  R. A. Calvo and S. D'Mello, “Affect detection: an interdisciplinary review of models, methods, and their applications,” IEEE Transactions on Affective Computing, vol. 1, no. 1, pp. 18–37, 2010.
[26]  C. Busso and S. S. Narayanan, “Interrelation between speech and facial gestures in emotional utterances: a single subject study,” IEEE Transactions on Audio, Speech and Language Processing, vol. 15, no. 8, pp. 2331–2347, 2007.
[27]  Y. Wang and L. Guan, “Recognizing human emotional state from audiovisual signals,” IEEE Transactions on Multimedia, vol. 10, no. 4, pp. 659–668, 2008.
[28]  Z. Zeng, J. Tu, B. M. Pianfetti, and T. S. Huang, “Audio-visual affective expression recognition through multistream fused HMM,” IEEE Transactions on Multimedia, vol. 10, no. 4, pp. 570–577, 2008.
[29]  P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. I-511–I-518, Kauai, Hawaii, USA, December 2001.
[30]  T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham, “Active shape models-their training and application,” Computer Vision and Image Understanding, vol. 61, no. 1, pp. 38–59, 1995.
[31]  N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), pp. 886–893, San Diego, Calif, USA, June 2005.
[32]  A. Bosch, A. Zisserman, and X. Munoz, “Representing shape with a spatial pyramid kernel,” in Proceedings of the 6th ACM International Conference on Image and Video Retrieval (CIVR '07), pp. 401–408, Amsterdam, Netherlands, July 2007.
[33]  T. Jabid, M. H. Kabir, and O. Chae, “Local Directional Pattern (LDP)—a robust image descriptor for object recognition,” in Proceedings of the 7th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS '10), pp. 482–487, Boston, Mass, USA, September 2010.
[34]  C. K. Un and S.-C. Yang, “A pitch extraction algorithm based on LPC inverse filtering and AMDF,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 25, no. 6, pp. 565–572, 1977.
[35]  J.-F. Wang, J.-C. Wang, M.-H. Mo, C.-I. Tu, and S.-C. Lin, “The design of a speech interactivity embedded module and its applications for mobile consumer devices,” IEEE Transactions on Consumer Electronics, vol. 54, no. 2, pp. 870–876, 2008.
[36]  S. Casale, A. Russo, G. Scebba, and S. Serrano, “Speech emotion classification using Machine Learning algorithms,” in Proceedings of the 2nd Annual IEEE International Conference on Semantic Computing (ICSC '08), pp. 158–165, Santa Clara, Calif, USA, August 2008.
[37]  C. Busso, S. Lee, and S. Narayanan, “Analysis of emotionally salient aspects of fundamental frequency for emotion detection,” IEEE Transactions on Audio, Speech and Language Processing, vol. 17, no. 4, pp. 582–596, 2009.
[38]  N. D. Cook, T. X. Fujisawa, and K. Takami, “Evaluation of the affective valence of speech using pitch substructure,” IEEE Transactions on Audio, Speech and Language Processing, vol. 14, no. 1, pp. 142–151, 2006.
[39]  Y.-Y. Chen, B.-W. Chen, J.-F. Wang, and Y.-C. Chen, “Emotion aware system based on acoustic and textual features from speech,” in Proceedings of the 2nd International Symposium on Aware Computing (ISAC '10), pp. 92–96, Tainan, Taiwan, November 2010.
