van Maris Anouk, Zook Nancy, Caleb-Solly Praminda, Studley Matthew, Winfield Alan, Dogramadzi Sanja
Bristol Robotics Laboratory, University of the West of England, Bristol, United Kingdom.
Department of Health and Social Sciences, University of the West of England, Bristol, United Kingdom.
Front Robot AI. 2020 Jan 24;7:1. doi: 10.3389/frobt.2020.00001. eCollection 2020.
Emotional deception and emotional attachment are regarded as ethical concerns in human-robot interaction. Considering these concerns is essential, particularly as little is known about the longitudinal effects of interactions with social robots. We ran a longitudinal user study with older adults in two retirement villages, where participants interacted with a robot in a didactic setting for eight sessions over a period of 4 weeks. The robot showed either non-emotive or emotive behavior during these interactions in order to investigate emotional deception. Questionnaires were administered to measure participants' acceptance of the robot, their perception of the social interactions with the robot, and their attachment to the robot. Results show that the robot's behavior did not appear to influence participants' acceptance of the robot, perception of the interaction, or attachment to the robot. Time also did not appear to influence participants' level of attachment to the robot, which ranged from low to medium. The perceived ease of using the robot significantly increased over time. These findings indicate that a robot showing emotions-and perhaps thereby deceiving users-in a didactic setting may not, by default, negatively influence participants' acceptance and perception of the robot, and that older adults may not become distressed if the robot were to break or be taken away from them, as attachment to the robot in this didactic setting was not high. However, more research is required, as other factors may influence these ethical concerns, and support from measures other than questionnaires is needed before conclusions about these concerns can be drawn.