Algorithm exploitation: Humans are keen to exploit benevolent AI
Jurgis Karpus, Adrian Krüger, Julia Tovar Verba, Bahador Bahrami, Ophelia Deroy
Faculty of Philosophy, Philosophy of Science and the Study of Religion, LMU Munich, Geschwister-Scholl-Platz 1, 80539 Munich, Germany.
Department of General and Educational Psychology, LMU Munich, Leopoldstraße 13, 80802 Munich, Germany.
iScience. 2021 Jun 1;24(6):102679. doi: 10.1016/j.isci.2021.102679. eCollection 2021 Jun 25.
We cooperate with other people despite the risk of being exploited or hurt. If future artificial intelligence (AI) systems are benevolent and cooperative toward us, what will we do in return? Here we show that our cooperative dispositions are weaker when we interact with AI. In nine experiments, humans interacted with either another human or an AI agent in four classic social dilemma economic games and a newly designed game of Reciprocity that we introduce here. Contrary to the hypothesis that people mistrust algorithms, participants trusted their AI partners to be as cooperative as humans. However, they did not reciprocate the AI's benevolence as much as they did a human partner's, and they exploited the AI more than they exploited humans. These findings warn that future self-driving cars or co-working robots, whose success depends on humans reciprocating their cooperativeness, run the risk of being exploited. This vulnerability calls not just for smarter machines but also for better human-centered policies.
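The abstract names four classic social dilemma games but does not specify their rules or payoffs. As a minimal illustration only, the Python sketch below encodes a one-shot Prisoner's Dilemma, the textbook social dilemma; the payoff numbers are conventional hypothetical values (temptation > reward > punishment > sucker), not the ones used in the study. It shows the structural point the abstract relies on: a partner who cooperates, whether human or AI, is exposed to exploitation.

    # Illustrative sketch of a one-shot Prisoner's Dilemma, a canonical
    # social dilemma of the kind the study uses. Payoffs below are
    # conventional, hypothetical numbers, not taken from the paper.
    from itertools import product

    # Payoffs to me, given (my_move, partner_move)
    PAYOFF = {
        ("cooperate", "cooperate"): 3,  # R: reward for mutual cooperation
        ("cooperate", "defect"):    0,  # S: sucker's payoff (I am exploited)
        ("defect",    "cooperate"): 5,  # T: temptation to exploit a cooperator
        ("defect",    "defect"):    1,  # P: punishment for mutual defection
    }

    for my_move, partner_move in product(("cooperate", "defect"), repeat=2):
        payoff = PAYOFF[(my_move, partner_move)]
        print(f"I {my_move:9s} / partner {partner_move:9s} -> I earn {payoff}")

    # Defecting strictly dominates cooperating (5 > 3 and 1 > 0), yet mutual
    # cooperation (3, 3) beats mutual defection (1, 1). A benevolent partner,
    # human or AI, is therefore open to exploitation, which is exactly the
    # asymmetry the experiments measure when the partner is an AI.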