Utrecht University, Department of Special Education: Cognitive and Motor Disabilities, Heidelberglaan 1, CS Utrecht, the Netherlands.
University of Amsterdam, Amsterdam Center for Language and Communication, Spuistraat, VB Amsterdam, the Netherlands.
PLoS One. 2019 Dec 19;14(12):e0217833. doi: 10.1371/journal.pone.0217833. eCollection 2019.
Robots are increasingly used for language tutoring and are commonly programmed to display non-verbal communicative cues such as eye gaze and pointing during robot-child interactions. With a human speaker, children rely more strongly on non-verbal cues (pointing) than on verbal cues (labeling) when these cues conflict. However, we do not know how children weigh the non-verbal cues of a robot. Here, we assessed whether four- to six-year-old children (i) differed in their weighing of non-verbal cues (pointing, eye gaze) and verbal cues provided by a robot versus a human; (ii) weighed non-verbal cues differently depending on whether these contrasted with a novel or familiar label; and (iii) relied differently on a robot's non-verbal cues depending on the degree to which they attributed human-like properties to the robot. The results showed that children generally followed pointing over labeling, in line with earlier research. Children did not rely more strongly on the non-verbal cues of a robot than on those of a human. Regarding pointing, children who perceived the robot as more human-like relied on pointing more strongly when it contrasted with a novel label than with a familiar label, but children who perceived the robot as less human-like did not show this difference. Regarding eye gaze, children relied more strongly on the gaze cue when it contrasted with a novel versus a familiar label, and no effect of anthropomorphism was found. Taken together, these results show no difference in the degree to which children rely on non-verbal cues of a robot versus those of a human and provide preliminary evidence that differences in anthropomorphism may interact with children's reliance on a robot's non-verbal behaviors.