Department of Neurology, Emory University, Atlanta, Georgia, United States of America.
PLoS One. 2007 Sep 12;2(9):e890. doi: 10.1371/journal.pone.0000890.
Previous research suggests that visual and haptic object recognition are viewpoint-dependent both within- and cross-modally. However, this conclusion may not be generally valid as it was reached using objects oriented along their extended y-axis, resulting in differential surface processing in vision and touch. In the present study, we removed this differential by presenting objects along the z-axis, thus making all object surfaces more equally available to vision and touch.
METHODOLOGY/PRINCIPAL FINDINGS: Participants studied previously unfamiliar objects, in groups of four, using either vision or touch. Subsequently, they performed a four-alternative forced-choice object identification task with the studied objects presented in both unrotated and rotated (180 degrees about the x-, y-, and z-axes) orientations. Rotation impaired within-modal recognition accuracy in both vision and touch, but not cross-modal recognition accuracy. Within-modally, visual recognition accuracy was reduced by rotation about the x- and y-axes more than the z-axis, whilst haptic recognition was equally affected by rotation about all three axes. Cross-modal (but not within-modal) accuracy correlated with spatial (but not object) imagery scores.
CONCLUSIONS/SIGNIFICANCE: The viewpoint-independence of cross-modal object identification points to its mediation by a high-level abstract representation. The correlation between spatial imagery scores and cross-modal performance suggests that construction of this high-level representation is linked to the ability to perform spatial transformations. Within-modal viewpoint-dependence appears to have a different basis in vision than in touch, possibly because surface occlusion is important in vision but not in touch.