The Influence of Robots' Fairness on Humans' Reward-Punishment Behaviors and Trust in Human-Robot Cooperative Teams.

Affiliations

Beijing University of Chemical Technology, Beijing, China.

Publication Information

Hum Factors. 2024 Apr;66(4):1103-1117. doi: 10.1177/00187208221133272. Epub 2022 Oct 11.

Abstract

OBJECTIVE

Based on social exchange theory, this study investigates the effects of robots' fairness and social status on humans' reward-punishment behaviors and trust in human-robot interactions.

BACKGROUND

In human-robot teamwork, robots may exhibit fairness, dedication (altruistic unfair behavior), or selfishness (self-interested unfair behavior), yet few studies have examined how these behaviors affect teamwork.

METHOD

This study adopted a 3 × 3 experimental design, with the robot's fairness (self-interested unfair behavior, fair behavior, or altruistic unfair behavior) as the independent variable and the robot's social status (superior, peer, or subordinate) as the moderator variable. Each participant completed the experimental task together with a robot via a computer.
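As a purely illustrative sketch (not part of the study's materials), the nine cells of this 3 × 3 design can be enumerated as follows; the level labels come from the abstract, while the code itself is hypothetical.

```python
# Illustrative sketch only: enumerate the nine cells of the
# 3 (fairness) x 3 (social status) design described above.
# Level labels are taken from the abstract; everything else
# is a hypothetical illustration, not the authors' materials.
from itertools import product

fairness_levels = ["self-interested unfair", "fair", "altruistic unfair"]  # independent variable
status_levels = ["superior", "peer", "subordinate"]                        # moderator variable

# Cross the two factors to obtain all 3 x 3 = 9 experimental conditions.
for i, (fairness, status) in enumerate(product(fairness_levels, status_levels), start=1):
    print(f"Condition {i}: robot fairness = {fairness}, robot social status = {status}")
```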

RESULTS

Across the robot's social statuses, the more altruistic the robot's fairness, the more reward behaviors humans showed, the fewer punishment behaviors they showed, and the higher their human-robot trust. A robot's higher social status weakened the influence of its fairness on humans' punishment behaviors. Human-robot trust increased humans' reward behaviors and decreased their punishment behaviors, and humans' reward-punishment behaviors in turn increased repaired human-robot trust.

CONCLUSION

Robots' fairness has a significant effect on humans' reward-punishment behaviors and trust. Robots' social status moderates the effect of their fairness on humans' punishment behaviors. Humans' reward-punishment behaviors and human-robot trust mutually influence each other.

APPLICATION

These findings help clarify the interaction mechanisms of human-robot teams and can inform the management of and cooperation within such teams through appropriate adjustment of robots' fairness and social status.

