Trust Abuse and Trust Deficit in Human-Machine Trust
Abstract:
Trust is a core component of human interaction and an important factor in building societal acceptance of artificial intelligence, but the complexity of trust makes it challenging to design an appropriate level of trust. This can lead to situations of trust abuse or trust deficit between humans and machines. Drawing on the literature on human-machine trust, this paper reviews the concept of human-machine trust and its measurement methods, and summarizes empirical research on trust abuse and trust deficit.
[1] Gao, Z., Li, W., Liang, J., Pan, H., Xu, W., & Shen, M. (2021). Human-Machine Trust in Autonomous Vehicles. Advances in Psychological Science, 29(12), 2172-2183. (in Chinese)
[2] Adnan, N., Md Nordin, S., bin Bahruddin, M. A., & Ali, M. (2018). How Trust Can Drive forward the User Acceptance to the Technology? In-Vehicle Technology for Autonomous Vehicle. Transportation Research Part A: Policy and Practice, 118, 819-836. https://doi.org/10.1016/j.tra.2018.10.019
[3] Bansal, G., Nushi, B., Kamar, E., Lasecki, W. S., Weld, D. S., & Horvitz, E. (2019). Beyond Accuracy: The Role of Mental Models in Human-AI Team Performance. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 7, 2-11.
[4] Buçinca, Z., Lin, P., Gajos, K. Z., & Glassman, E. L. (2020). Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems. In Proceedings of the 25th International Conference on Intelligent User Interfaces (IUI '20) (pp. 454-464). ACM. https://doi.org/10.1145/3377325.3377498
[5] Buçinca, Z., Malaya, M. B., & Gajos, K. Z. (2021). To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-Assisted Decision-Making. Proceedings of the ACM on Human-Computer Interaction, 5, 1-21. https://doi.org/10.1145/3449287
[6] Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-Dependent Algorithm Aversion. Journal of Marketing Research, 56, 809-825. https://doi.org/10.1177/0022243719851788
[7] Danaher, J. (2016). The Threat of Algocracy: Reality, Resistance and Accommodation. Philosophy & Technology, 29, 245-268. https://doi.org/10.1007/s13347-015-0211-1
[8] Danaher, J., Hogan, M. J., Noone, C., Kennedy, R., Behan, A., De Paor, A., Felzmann, H., Haklay, M., Khoo, S.-M., Morison, J. et al. (2017). Algorithmic Governance: Developing a Research Agenda through the Power of Collective Intelligence. Big Data & Society, 4. https://doi.org/10.1177/2053951717726554
[9] Fraedrich, E., & Lenz, B. (2014). Automated Driving: Individual and Societal Aspects. Transportation Research Record, 2416, 64-72. https://doi.org/10.3141/2416-08
[10] French, B., Duenser, A., & Heathcote, A. (2018). Trust in Automation—A Literature Review. CSIRO.
[11] Hengstler, M., Enkel, E., & Duelli, S. (2016). Applied Artificial Intelligence and Trust—The Case of Autonomous Vehicles and Medical Assistance Devices. Technological Forecasting and Social Change, 105, 105-120. https://doi.org/10.1016/j.techfore.2015.12.014
[12] Hergeth, S., Lorenz, L., Vilimek, R., & Krems, J. F. (2016). Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust during Highly Automated Driving. Human Factors, 58, 509-519. https://doi.org/10.1177/0018720815625744
[13] Hoff, K. A., & Bashir, M. (2015). Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust. Human Factors, 57, 407-434. https://doi.org/10.1177/0018720814547570
[14] Hoffman, R. R. (2017). A Taxonomy of Emergent Trusting in the Human-Machine Relationship. In P. J. Smith, & R. R. Hoffman (Eds.), Cognitive Systems Engineering: The Future for a Changing World (pp. 137-164). CRC Press. https://doi.org/10.1201/9781315572529-8
[15] Holthausen, B. E., Wintersberger, P., Walker, B. N., & Riener, A. (2020). Situational Trust Scale for Automated Driving (STS-AD): Development and Initial Validation. In 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (pp. 40-47). Association for Computing Machinery. https://doi.org/10.1145/3409120.3410637
[16] Jian, J. Y., Bisantz, A. M., & Drury, C. G. (2000). Foundations for an Empirically Determined Scale of Trust in Automated Systems. International Journal of Cognitive Ergonomics, 4, 53-71. https://doi.org/10.1207/S15327566IJCE0401_04
[17] Jiang, H., Kim, B., Guan, M. Y., & Gupta, M. (2018). To Trust or Not to Trust a Classifier. In Proceedings of the 32nd International Conference on Neural Information Processing Systems (pp. 5546-5557). Curran Associates Inc.
[18] Körber, M. (2018). Theoretical Considerations and Development of a Questionnaire to Measure Trust in Automation. In Congress of the International Ergonomics Association (pp. 13-30). Springer, Cham. https://doi.org/10.31234/osf.io/nfc45
[19] Langer, M., König, C. J., & Papathanasiou, M. (2019). Highly Automated Job Interviews: Acceptance under the Influence of Stakes. International Journal of Selection and Assessment, 27, 217-234. https://doi.org/10.1111/ijsa.12246
[20] Langer, M., König, C. J., Back, C., & Hemsing, V. (2022). Trust in Artificial Intelligence: Comparing Trust Processes between Human and Automated Trustees in Light of Unfair Bias. Journal of Business and Psychology, 1-16. https://doi.org/10.1007/s10869-022-09829-9
[21] Lee, J. D., & See, K. A. (2004). Trust in Automation: Designing for Appropriate Reliance. Human Factors: The Journal of the Human Factors and Ergonomics Society, 46, 50-80. https://doi.org/10.1518/hfes.46.1.50.30392
[22] Lee, J., Abe, G., Sato, K., & Itoh, M. (2021). Developing Human-Machine Trust: Impacts of Prior Instruction and Automation Failure on Driver Trust in Partially Automated Vehicles. Transportation Research Part F: Traffic Psychology and Behaviour, 81, 384-395. https://doi.org/10.1016/j.trf.2021.06.013
[23] Lee, M. K. (2018). Understanding Perception of Algorithmic Decisions: Fairness, Trust, and Emotion in Response to Algorithmic Management. Big Data & Society, 5, 1-16. https://doi.org/10.1177/2053951718756684
[24] Lee, M. K., & Rich, K. (2021). Who Is Included in Human Perceptions of AI?: Trust and Perceived Fairness around Healthcare AI and Cultural Mistrust. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-14). Association for Computing Machinery. https://doi.org/10.1145/3411764.3445570
[25] Lee, M. K., Kusbit, D., Metsky, E., & Dabbish, L. (2015). Working with Machines: The Impact of Algorithmic and Data-Driven Management on Human Workers. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 1603-1612). https://doi.org/10.1145/2702123.2702548
[26] Lewis, J. D., & Weigert, A. (1985). Trust as a Social Reality. Social Forces, 63, 967-985. https://doi.org/10.2307/2578601
[27] Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm Appreciation: People Prefer Algorithmic to Human Judgment. Organizational Behavior and Human Decision Processes, 151, 90-103. https://doi.org/10.1016/j.obhdp.2018.12.005
[28] Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to Medical Artificial Intelligence. Journal of Consumer Research, 46, 629-650. https://doi.org/10.1093/jcr/ucz013
[29] Lyons, J. B., & Guznov, S. Y. (2019). Individual Differences in Human-Machine Trust: A Multi-Study Look at the Perfect Automation Schema. Theoretical Issues in Ergonomics Science, 20, 440-458. https://doi.org/10.1080/1463922X.2018.1491071
[30] Matsui, T., & Yamada, S. (2019). Designing Trustworthy Product Recommendation Virtual Agents Operating Positive Emotion and Having Copious Amount of Knowledge. Frontiers in Psychology, 10, Article 675. https://doi.org/10.3389/fpsyg.2019.00675
[31] Merritt, S. M., & Ilgen, D. R. (2008). Not All Trust Is Created Equal: Dispositional and History-Based Trust in Human-Automation Interactions. Human Factors: The Journal of the Human Factors and Ergonomics Society, 50, 194-210. https://doi.org/10.1518/001872008X288574
[32] Nuamah, J., Oh, S., & Seong, Y. (2015). Measuring Trust in Automation: A New Approach. In Proceedings of the Modern Artificial Intelligence and Cognitive Science Conference.
[33] Sethumadhavan, A. (2019). Trust in Artificial Intelligence. Ergonomics in Design: The Quarterly of Human Factors Applications, 27, 34. https://doi.org/10.1177/1064804618818592
[34] Waytz, A., Heafner, J., & Epley, N. (2014). The Mind in the Machine: Anthropomorphism Increases Trust in an Autonomous Vehicle. Journal of Experimental Social Psychology, 52, 113-117. https://doi.org/10.1016/j.jesp.2014.01.005