Jennie E. Pyers, Pamela Perniss, Karen Emmorey
Wellesley College, Psychology Department, Wellesley, MA 02481, USA.
University of Brighton, School of Humanities, Checkland Building, BN1 9PH Brighton, UK.
Spat Cogn Comput. 2015 Jun 1;15(3):143-169. doi: 10.1080/13875868.2014.1003933. Epub 2015 Jul 7.
Sign languages express viewpoint-dependent spatial relations (e.g., left, right) iconically but must conventionalize from whose viewpoint the spatial relation is described: the signer's or the perceiver's. In Experiment 1, ASL signers and sign-naïve gesturers expressed viewpoint-dependent relations egocentrically, but only signers successfully interpreted the descriptions non-egocentrically, suggesting that viewpoint convergence in the visual modality emerges with language conventionalization. In Experiment 2, we observed that the cost of adopting a non-egocentric viewpoint was greater for producers than for perceivers, suggesting that sign languages have converged on the most cognitively efficient means of expressing left-right spatial relations. We suggest that non-linguistic cognitive factors such as visual perspective-taking and motor embodiment may constrain viewpoint convergence in the visual-spatial modality.