Mastrantuono Eliana, Saldaña David, Rodríguez-Ortiz Isabel R
Departamento de Psicología Evolutiva y de la Educación, Universidad de Sevilla, Seville, Spain.
Front Psychol. 2017 Jun 21;8:1044. doi: 10.3389/fpsyg.2017.01044. eCollection 2017.
An eye-tracking experiment explored the gaze behavior of deaf individuals when perceiving language in spoken language only, in sign language only, and in sign-supported speech (SSS). Participants were deaf (n = 25) and hearing (n = 25) Spanish adolescents. The deaf students were either prelingually, profoundly deaf individuals who had used cochlear implants (CIs) from age 5 or earlier, or prelingually, profoundly deaf native signers with deaf parents. The effectiveness of SSS for discourse-level comprehension has rarely been tested within the same group of children. Here, video-recorded texts, including spatial descriptions, were alternately delivered in spoken language, sign language, and SSS. The capacity of these communicative systems to bring deaf participants' comprehension to the level of hearing participants' spoken-language comprehension was tested. Within-group analyses of the deaf participants tested whether the bimodal linguistic input of SSS favored discourse comprehension compared to the unimodal languages. Deaf participants with CIs achieved comprehension equal to that of hearing controls in all communicative systems, while deaf native signers without CIs achieved comprehension equal to that of hearing participants when tested in their native sign language. Comprehension of SSS was not better than comprehension of spoken language, even when spatial information was communicated. Eye movements of deaf and hearing participants were tracked, and dwell times on the face and body areas of the sign model were analyzed. Within-group analyses focused on differences between native and non-native signers. The dwell times of hearing participants were distributed equally across the upper and lower areas of the face, whereas deaf participants looked mainly at the mouth area; this could enable information to be obtained from mouthings in sign language and from lip-reading in SSS and spoken language. Few fixations were directed toward the signs themselves, although these were more frequent when spatial language was transmitted. Both native and non-native signers looked mainly at the face when perceiving sign language, although non-native signers looked significantly more at the body than native signers did. This distribution of gaze fixations suggests that deaf individuals, particularly native signers, perceive signs mainly through peripheral vision.
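The dwell-time analysis described in the abstract can be illustrated with a short sketch. This is a minimal, hypothetical example of computing per-area dwell-time proportions from fixation records; the data format, the area-of-interest labels, and the function are illustrative assumptions, not taken from the study.

```python
from collections import defaultdict

def dwell_time_proportions(fixations):
    """Compute the proportion of total dwell time spent on each
    area of interest (AOI), e.g. 'upper_face', 'mouth', 'body'.

    `fixations` is a list of (aoi_label, duration_ms) tuples,
    one per fixation detected by the eye tracker.
    """
    totals = defaultdict(float)
    for aoi, duration_ms in fixations:
        totals[aoi] += duration_ms
    grand_total = sum(totals.values())
    if grand_total == 0:
        return {}
    return {aoi: t / grand_total for aoi, t in totals.items()}

# Hypothetical data: a participant who looks mostly at the mouth
# area, consistent with the pattern reported for deaf participants.
fixations = [
    ("mouth", 420.0), ("mouth", 380.0), ("upper_face", 250.0),
    ("mouth", 510.0), ("body", 90.0), ("upper_face", 200.0),
]
print(dwell_time_proportions(fixations))
# {'mouth': 0.708..., 'upper_face': 0.243..., 'body': 0.048...}
```

Group comparisons such as those in the study (e.g., native vs. non-native signers, or face vs. body dwell times) would then be run on these per-participant proportions.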