
OALib Journal
ISSN: 2333-9721



Analysis of Behavioral Responsibility of Social Robots from the Perspective of Human-Computer Interaction

DOI: 10.12677/mse.2024.136119, PP. 1098-1104

Keywords: Human-Computer Interaction, Social Robots, Attribution Theory, Responsibility for Behavior


Abstract:

Social robots are increasingly penetrating various aspects of people's daily lives, making interactions between humans and these intelligent entities more common and frequent. To better understand the complex interaction mechanisms between humans and social robots, we adopt Kelley's attribution theory from psychology as an analytical framework. The core idea of this theory is that individuals interpret the behavior of others (or of objects) by attributing its causes either to internal factors (such as personality and intentions) or to external factors (such as environmental pressures and external controls). Building on this, we categorize and explore the specific situations in which people attribute responsibility for a robot's behavior to the robot itself. Through logical deduction using attribution theory, we find that when a robot's feedback is seen as the result of autonomous decision-making, the robot receives higher evaluations from humans in terms of autonomy, responsibility, and capability than when the same feedback is perceived as merely executing pre-programmed instructions. This indicates a positive correlation between the level of autonomy granted to robots and both their perceived social performance and humans' evaluations of their interactions with them. In other words, when robots are endowed with more autonomous decision-making power, people tend to rate their social abilities more highly, and interacting with such robots produces more positive experiences. This finding not only deepens our understanding of the nature of human-robot interaction but also offers valuable guidance for the design and development of future social robots, particularly in how to further enhance their autonomy and intelligence while ensuring safety and controllability, thereby fostering a more natural and harmonious coexistence between humans and robots.
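The paper itself gives no algorithm; as a purely illustrative aid, the covariation logic of Kelley's attribution theory described above can be sketched as a toy decision rule. All function and label names here are our own invention: the three standard covariation cues (consensus, distinctiveness, consistency) are mapped to an internal, external, or circumstantial attribution.

```python
def attribute_cause(consensus: str, distinctiveness: str, consistency: str) -> str:
    """Classify a behavior's cause using Kelley's covariation heuristics.

    consensus:        do other actors behave the same way toward this stimulus?
    distinctiveness:  does this actor behave differently toward other stimuli?
    consistency:      does this actor behave the same way over time?
    Each argument is "high" or "low".
    """
    if consistency == "low":
        # Unstable behavior is attributed to transient circumstances.
        return "circumstance"
    if consensus == "low" and distinctiveness == "low":
        # Only this actor, toward all stimuli, all the time: internal cause.
        return "internal"
    if consensus == "high" and distinctiveness == "high":
        # Everyone, only toward this stimulus, all the time: external cause.
        return "external"
    return "ambiguous"

# A robot that alone (low consensus) gives curt replies to every user
# (low distinctiveness), and does so reliably (high consistency), invites
# an internal attribution -- the behavior reads as "its own decision".
print(attribute_cause("low", "low", "high"))  # internal
```

In the paper's terms, the "internal" branch corresponds to the situations where people hold the robot itself responsible for its behavior, which is exactly where perceived autonomy raises evaluations of responsibility and capability.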

