An Exploratory Study of People's Expectations of Moral Decision-Making by Autonomous Machines
Abstract:
This study explored people's expectations of moral decision-making by autonomous machines. One hundred and thirteen participants took part in the study, which examined how autonomous machines are expected to decide when human commands conflict with moral norms. Because an owner's order that violates a moral norm may stem from either an immoral or a moral intention, the study also examined whether the valence of the intention makes a difference. The results showed that, regardless of the owner's intention, people consistently expected autonomous machines to comply with moral norms rather than with human commands.