Esubalew Bekele, Julie Crittendon, Zhi Zheng, Amy Swanson, Amy Weitlauf, Zachary Warren, Nilanjan Sarkar
Department of Electrical Engineering and Computer Science, Vanderbilt University, 518 Olin Hall, 2400 Highland Avenue, Nashville, TN 37212, USA
J Autism Dev Disord. 2014 Jul;44(7):1641-50. doi: 10.1007/s10803-014-2035-8.
Teenagers with autism spectrum disorder (ASD) and age-matched controls participated in a dynamic facial affect recognition task within a virtual reality (VR) environment. Participants identified the emotion of a facial expression displayed at varied levels of intensity by a computer-generated avatar. The system assessed performance (i.e., accuracy, confidence ratings, response latency, and stimulus discrimination) and used an eye tracker to measure how participants deployed their gaze to process facial information. Participants in both groups were similarly accurate at basic facial affect recognition across the varied levels of intensity. Despite similar performance characteristics, ASD participants reported lower confidence in their responses and showed substantial variation in gaze patterns, in the absence of perceptual discrimination deficits. These results add support to the hypothesis that deficits in emotion and face recognition for individuals with ASD are related to fundamental differences in information processing. We discuss the implications of this finding in a VR environment with regard to potential future applications and paradigms targeting not just enhanced performance, but enhanced social information processing within intelligent systems capable of adapting to individual processing differences.