Evaluation of a Navigation Radio Using the Think-Aloud Method

DOI: 10.1155/2013/705086


Abstract:

In this experiment, 13 licensed drivers performed 20 tasks with a prototype navigation radio. Subjects completed tasks such as entering a street address, selecting a preset radio station, and tuning to an XM station while “thinking aloud” to identify problems with operating the prototype interface. Overall, subjects identified 64 unique problems with the interface; 17 specific problems were encountered by more than half of the subjects. Problems related to inconsistent music interfaces, limitations of the destination entry methods, icons that were not understood, the lack of functional grouping, and similar-looking buttons and displays, among others. An important project focus was getting the findings to the developers quickly; having a scribe code interactions in real time helped, as did direct observation of test sessions by representatives of the developers. Other researchers are encouraged to use this method to examine automotive interfaces as a complement to traditional usability testing.

1. Introduction

People want products that are easy to use, and that is particularly true of motor vehicles. Numerous methods have been developed to assess the ease of use of driver interfaces, drawn both from traditional human factors practice and, more recently, from the human-computer interaction literature [1–3]. The three most prominent methods are (1) usability testing [4–8], (2) expert reviews [9–11], and (3) the think-aloud method [12–15]. Methods vary in their value for formative evaluation (while development is in progress) and summative evaluation (at the end of development); see [16] for an extensive overview of how various methods are conducted and where they should be applied. Usability testing is the gold standard of usability evaluation methods: it involves real users performing real tasks, though often in a laboratory setting, and can be part of either formative or summative testing. Its purpose is to determine task completion times and errors.
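How many subjects a usability test of this kind needs is often reasoned about with a simple binomial problem-discovery model. As a hedged sketch (this calculation is illustrative and not from the paper; it assumes each subject independently detects a given problem with a fixed probability p):

```python
# Binomial problem-discovery model commonly used to size usability tests:
# if each subject independently finds a given problem with probability p,
# the expected fraction of such problems found by n subjects is 1-(1-p)^n.
def discovered_fraction(p: float, n: int) -> float:
    """Expected proportion of problems (with per-subject rate p) found by n subjects."""
    return 1.0 - (1.0 - p) ** n

# With a per-subject detection rate of 0.30 (a common planning assumption)
# and a panel of 13 subjects, nearly all such problems are expected to surface.
print(discovered_fraction(0.30, 13))
```

Under that assumption, 13 subjects would be expected to uncover about 99% of problems with a 0.30 per-subject detection rate, which is one reason panels of this size are common in think-aloud studies.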
Generally, usability testing occurs in the latter stages of design, when a fully functioning interface is available. Usability tests are time-consuming to plan and analyze and can be costly. Consequently, there has been considerable interest in predicting user performance, in particular task time [17–22]. Task times for experienced users can be predicted in a fraction of the time required to plan, conduct, and analyze a usability test. If the method used by subjects to perform a task is known, the predictions should be as accurate as the usability test data [23]. Expert reviews can be an efficient alternative to usability testing,
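Keystroke-level predictions of the kind cited above estimate an expert's task time as a sum of primitive operator times. The sketch below uses the classic Card, Moran, and Newell operator estimates; the operator values and the example task sequence are illustrative assumptions, not figures from the paper or from SAE J2365:

```python
# Keystroke-Level Model (KLM) sketch: expert task time is predicted as the
# sum of primitive operator times. Values are the classic Card, Moran &
# Newell estimates (seconds); real applications calibrate these per device.
OPERATORS = {
    "K": 0.28,  # keystroke or button press (average-skill typist)
    "P": 1.10,  # point to a target on the screen
    "H": 0.40,  # home hands between input devices
    "M": 1.35,  # mental preparation before an action
}

def klm_time(sequence: str) -> float:
    """Predict task time (s) for a string of operator codes, e.g. 'MPKKK'."""
    return sum(OPERATORS[op] for op in sequence)

# Hypothetical fragment of destination entry: think, point to the entry
# field, then press five keys.
print(round(klm_time("MPKKKKK"), 2))
```

Summing operators this way (here 1.35 + 1.10 + 5 × 0.28 = 3.85 s) is what lets task times be estimated before a working prototype exists, at a fraction of the cost of a full usability test.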

References

[1]  J. Nielsen and R. Mack, Usability Inspection Methods, John Wiley & Sons, New York, NY, USA, 1994.
[2]  S. Krug, Rocket Surgery Made Easy: The Do-It-Yourself Guide to Finding and Fixing Usability Problems, New Riders, Berkeley, Calif, USA, 2010.
[3]  J. Rubin, D. Chisnell, and J. Spool, Handbook of Usability Testing, John Wiley & Sons, New York, NY, USA, 2008.
[4]  R. Jeffries, J. R. Miller, C. Wharton, and K. Uyeda, “User interface evaluation in the real world: a comparison of four techniques,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 119–124, 1991.
[5]  R. A. Virzi, J. F. Sorce, and L. B. Herbert, “Comparison of three usability evaluation methods: heuristic, think-aloud, and performance testing,” in Proceedings of the 37th Annual Meeting of the Human Factors and Ergonomics Society, pp. 309–313, October 1993.
[6]  R. Molich, M. R. Ede, K. Kaasgaard, and B. Karyukin, “Comparative usability evaluation,” Behaviour and Information Technology, vol. 23, no. 1, pp. 65–74, 2004.
[7]  K. Hornbæk, “Current practice in measuring usability: challenges to usability studies and research,” International Journal of Human Computer Studies, vol. 64, no. 2, pp. 79–102, 2006.
[8]  M. W. M. Jaspers, “A comparison of usability methods for testing interactive health technologies: methodological aspects and empirical evidence,” International Journal of Medical Informatics, vol. 78, no. 5, pp. 340–353, 2009.
[9]  J. Nielsen and R. Molich, “Heuristic evaluation of user interfaces,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '90), Association for Computing Machinery, New York, NY, USA, 1990.
[10]  A. M. Lund, “Expert ratings of usability maxims,” Ergonomics in Design, vol. 5, no. 7, pp. 15–20, 1997.
[11]  E. Hvannberg, E. Law, and M. Lárusdóttir, “Heuristic evaluation: comparing ways of finding and reporting usability problems,” Interacting with Computers, vol. 19, no. 2, pp. 225–240, 2007.
[12]  C. H. Lewis, “Using the ‘Thinking Aloud’ method in cognitive interface design,” Technical Report IBM RC-9265, IBM Watson Research Center, Yorktown Heights, NY, USA, 1982.
[13]  P. C. Wright and A. F. Monk, “The use of the Think-Aloud evaluation methods in design,” ACM SIGCHI Bulletin, vol. 23, no. 1, pp. 55–57, 1991.
[14]  E. Krahmer and N. Ummelen, “Thinking about thinking aloud: a comparison of two verbal protocols for usability testing,” IEEE Transactions on Professional Communication, vol. 47, no. 2, pp. 105–117, 2004.
[15]  M. Nørgaard and K. Hornbæk, “What do usability evaluators do in practice? An exploratory study of Think-Aloud testing,” in Proceedings of the 6th Conference on Designing Interactive Systems (DIS '06), pp. 209–218, 2006.
[16]  J. Lewis, “Usability testing,” in Handbook of Human Factors and Ergonomics, G. Salvendy, Ed., chapter 46, pp. 1267–1312, John Wiley & Sons, New York, NY, USA, 4th edition, 2012.
[17]  D. Manes, P. Green, and D. Hunter, “Prediction of destination entry and retrieval times using keystroke-level models,” Technical Report UMTRI-96-37, University of Michigan Transportation Research Institute, Ann Arbor, Mich, USA, 1997, EECS-ITS LAB FT97-077.
[18]  P. Green, “Estimating compliance with the 15-Second Rule for driver-interface usability and safety,” in Proceedings of the 43rd Annual Meeting on Human Factors and Ergonomics Society, Human Factors and Ergonomics Society, CD-ROM, Santa Monica, Calif, USA, 1999.
[19]  C. Nowakowski and P. Green, “Prediction of menus selection times parked and while driving using the SAE J2365 method,” Technical Report 2000-49, University of Michigan Transportation Research Institute, Ann Arbor, Mich, USA, 2001.
[20]  Calculation of the Time to Complete in-Vehicle Navigation and Route Guidance Tasks, SAE Recommended Practice J2365, 2002.
[21]  S. Schneegass, B. Pfleging, D. Kern, and A. Schmidt, “Support for modeling interaction with automotive user interfaces,” in Proceedings of the 3rd International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '11), pp. 71–78, 2011.
[22]  B. T.-W. Lin, P. Green, T.-P. Kang, A. Mize, A. Best, and K. Su, “Touch screen menu selection time and SAE J2365 predictions of them,” Technical Report UMTRI-2012-9, University of Michigan Transportation Research Institute, Ann Arbor, Mich, USA, 2012.
[23]  J. Sauro, “Margins of error in usability tests,” 2009, http://www.measuringusability.com/test-margin.php.
[24]  J.-S. Park, P. Green, and M. Alter, “Usability assessment of the mobis gen 3 navigation radio,” Technical Report UMTRI-2010-23, University of Michigan Transportation Research Institute, Ann Arbor, Mich, USA, 2010.
[25]  J. R. Lewis, “Sample sizes for usability tests: mostly math, not magic,” Interactions, vol. 13, no. 6, pp. 29–33, 2006.
[26]  R. A. Virzi, “Refining the test phase of usability evaluation: how many subjects is enough?” Human Factors, vol. 34, no. 4, pp. 457–468, 1992.
[27]  J. Sauro and J. R. Lewis, Quantifying the User Experience, Morgan Kaufmann, San Francisco, Calif, USA, 2012.
[28]  Y. Kinoe, “The VPA method: a method for formal verbal protocol analysis,” in Designing and Using Human-Computer Interfaces and Knowledge Based Systems, G. Salvendy and M. J. Smith, Eds., pp. 735–742, Elsevier Science, Amsterdam, The Netherlands.
[29]  Road Vehicles—Symbols for Controls, Indicators, and Tell-tales, ISO Standard 2575, 2010.
[30]  Apple Inc., "OS X Human Interface Guidelines", Apple, Cupertino, Calif, USA, 2012, http://developer.apple.com/library/mac/documentation/userexperience/conceptual/applehiguidelines/OSXHIGuidelines.pdf.
[31]  C. Nowakowski, P. Green, and O. Tsimhoni, “Common automotive navigation system usability problems and a standard test protocol to identify them,” in Proceedings of the ITS-America Annual Meeting (CD-ROM), Intelligent Transportation Society of America, Washington, DC, USA, 2003.
[32]  TechSmith, “Morae (software),” TechSmith, Okemos, Mich, USA, 2012, http://www.techsmith.com/morae.html.
[33]  J. Reason, Human Error, Cambridge University Press, Cambridge, UK, 1990.
[34]  S. A. Shappell and D. A. Wiegmann, “The human factors analysis and classification system-HFACS,” Final Report DOT/FAA/AM-00/7, Office of Aviation Medicine, Federal Aviation Administration, US Department of Transportation, Washington, DC, USA, 2000.
[35]  T. S. Andre, H. Rex Hartson, S. M. Belz, and F. A. McCreary, “User action framework: a reliable foundation for usability engineering support tools,” International Journal of Human Computer Studies, vol. 54, no. 1, pp. 107–136, 2001.
[36]  K. Hornbæk and E. Frøkjær, “Comparison of techniques for matching usability problem descriptions,” Interacting with Computers, vol. 20, no. 6, pp. 505–514, 2008.
[37]  R. Khajouei, L. W. P. Peute, A. Hasman, and M. W. M. Jaspers, “Classification and prioritization of usability problems using an augmented classification scheme,” Journal of Biomedical Informatics, vol. 44, no. 6, pp. 948–957, 2011.
