Dechsling Anders, Cogo-Moreira Hugo, Gangestad Jonathan Spydevold, Johannessen Sandra Nettum, Nordahl-Hansen Anders
Department of Education, ICT and Learning, Faculty of Teacher Education and Languages, Østfold University College, Halden, Norway.
Department of Behavioral Sciences, Oslo Metropolitan University, Oslo, Norway.
JMIR Form Res. 2023 May 11;7:e44632. doi: 10.2196/44632.
The availability and potential of virtual reality (VR) have led to an increase in its application. VR has been suggested as helpful for training elements of social competence, with an emphasis on tailoring interventions. Recognizing facial expressions is an important social skill and thus a target for training. Using VR to train these skills could have advantages over desktop alternatives. Children with autism, for instance, appear to prefer avatars over real images when assessing facial expressions. Available software makes it possible to transform profile pictures into avatars, thereby allowing training to be tailored to an individual's own environment. However, the emotion expressions such software produces should be validated before application.
Our aim was to investigate whether available software offers a quick, easy, and viable way of providing emotion expressions in avatars transformed from real images.
A total of 401 participants from a general population completed a web-based survey containing 27 different images of avatars transformed from real images using software. We calculated each image's reliability and difficulty level using a structural equation modeling approach. Specifically, we tested a multidimensional first-order correlated-factor structure with Bayesian confirmatory factor analysis, in which faces showing the same emotion were indicators of a common latent variable.
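As a concrete sketch of the measurement model this description implies, each rating can be written in standard ordinal confirmatory factor analysis notation; the symbols below (lambda for loadings, tau for thresholds, eta for emotion factors, psi for factor correlations) are conventional and assumed here for illustration rather than taken from the paper:

% Ordinal CFA sketch: rating of face i loading on emotion factor k(i)
y_i^{*} = \lambda_i \, \eta_{k(i)} + \varepsilon_i, \qquad
y_i = c \iff \tau_{i,c-1} < y_i^{*} \le \tau_{i,c}, \qquad
\operatorname{Corr}(\eta_k, \eta_l) = \psi_{kl}

Here y_i^{*} is the latent response underlying the observed rating y_i of face i, \eta_{k(i)} is the factor for the emotion that face depicts, the loading \lambda_i quantifies discrimination, the thresholds \tau_{i,c} quantify difficulty, and \psi_{kl} gives the correlations among the first-order emotion factors.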
Few emotions were perceived correctly and rated higher than the other emotions. The factor loadings, which indicate an image's discrimination, were around 0.7, corresponding to 49% of variance shared with the latent factor the face loads on. The standardized thresholds, which indicate the images' difficulty levels, were mostly around average, and the highest factor correlation was between the faces showing happiness and those showing anger.
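The 49% figure follows directly from the standardized loading: for a standardized indicator, the variance shared with its factor (the communality) is the squared loading, and the remainder is residual variance:

\lambda^{2} = 0.70^{2} = 0.49, \qquad 1 - \lambda^{2} = 0.51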
Using software alone to transform profile pictures into avatars is not sufficient to provide valid emotion expressions. Adjustments are needed to increase the faces' discrimination (eg, by increasing their reliabilities). The faces showed average levels of difficulty, meaning that they are neither very difficult nor very easy to perceive, which fits a general population. Adjustments should be made for specific populations and when applying this technology in clinical practice.