Division of Arts and Sciences, New York University Shanghai, Shanghai, China; Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China; Department of Psychology, The University of Chicago, 5848 S. University Ave., Chicago, IL 60637 USA.
Department of Psychology, The University of Chicago, 5848 S. University Ave., Chicago, IL 60637 USA.
Cognition. 2019 Jun;187:178-187. doi: 10.1016/j.cognition.2019.03.004. Epub 2019 Mar 14.
Action and perception interact in complex ways to shape how we learn. In the context of language acquisition, for example, hand gestures can facilitate learning novel sound-to-meaning mappings that are critical to successfully understanding a second language. However, the mechanisms by which motor and visual information influence auditory learning are still unclear. We hypothesize that the extent to which cross-modal learning occurs is directly related to the common representational format of perceptual features across motor, visual, and auditory domains (i.e., the extent to which changes in one domain trigger similar changes in another). Furthermore, to the extent that information across modalities can be mapped onto a common representation, training in one domain may lead to learning in another domain. To test this hypothesis, we taught native English speakers Mandarin tones using directional pitch gestures. Watching or performing gestures that were congruent with pitch direction (e.g., an up gesture moving up, and a down gesture moving down, in the vertical plane) significantly enhanced tone category learning, compared to auditory-only training. Moreover, when gestures were rotated (e.g., an up gesture moving away from the body, and a down gesture moving toward the body, in the horizontal plane), performing the gestures resulted in significantly better learning, compared to watching the rotated gestures. Our results suggest that when a common representational mapping can be established between motor and sensory modalities, auditory perceptual learning is likely to be enhanced.