CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Shenzhen, China.
Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region.
PLoS One. 2019 Jan 28;14(1):e0210858. doi: 10.1371/journal.pone.0210858. eCollection 2019.
The deficit in speech sound production in some children with autism spectrum disorder (ASD) adds to their communication barriers. Three-dimensional (3-D) virtual environments have been used to improve such children's communication abilities, but no previous study has examined a 3-D virtual pronunciation tutor designed specifically to train pronunciation in children with ASD. To fill this research gap, the current study developed and evaluated a 3-D virtual tutor that serves as a multimodal, real-data-driven speech production tutor presenting both the places and manners of Mandarin articulation. Using an eye-tracking technique (RED 5 Eye Tracker), Experiment 1 objectively measured children's attention distribution online while they learned with the computer-assisted 3-D virtual tutor versus a real human face (HF) tutor. Eye-tracking results indicated that most participants showed more interest in the visual speech cues of the 3-D tutor and paid some degree of absolute attention to its additional visual speech information on both articulatory movements and airflow changes. To further compare treatment outcomes, training performance was evaluated in Experiment 2 with the ASD learners divided into two groups, one learning from the HF tutor and the other from the 3-D tutor (HF group vs. 3-D group). Both groups improved with computer-based training in the post-intervention test, as scored on a 5-point Likert scale. However, the 3-D group showed much greater gains in producing Mandarin stop and affricate consonants and apical vowels. We conclude that our 3-D virtual imitation intervention system provides an effective approach to audiovisual pronunciation training for children with ASD.