Whiting Tim, Gautam Alvika, Tye Jacob, Simmons Michael, Henstrom Jordan, Oudah Mayada, Crandall Jacob W
Brigham Young University, Provo, UT 84602, USA.
Oregon State University, Corvallis, OR 97331, USA.
iScience. 2020 Dec 17;24(1):101963. doi: 10.1016/j.isci.2020.101963. eCollection 2021 Jan 22.
Many technical and psychological challenges make it difficult to design machines that effectively cooperate with people. To better understand these challenges, we conducted a series of studies investigating human-human, robot-robot, and human-robot cooperation in a strategically rich resource-sharing scenario, which required players to balance efficiency, fairness, and risk. In these studies, both human-human and robot-robot dyads typically learned efficient and risky cooperative solutions when they could communicate. In the absence of communication, robot dyads still often learned the same efficient solution, but human dyads achieved a less efficient (less risky) form of cooperation. This difference in how people and machines treat risk appeared to discourage human-robot cooperation, as human-robot dyads frequently failed to cooperate without communication. These results indicate that machine behavior should better align with human behavior, promoting efficiency while simultaneously considering human tendencies toward risk and fairness.