De Lucia Marzia, Camen Christian, Clarke Stephanie, Murray Micah M
Electroencephalography Brain Mapping Core, Center for Biomedical Imaging of Lausanne and Geneva, Switzerland.
Neuroimage. 2009 Nov 1;48(2):475-85. doi: 10.1016/j.neuroimage.2009.06.041. Epub 2009 Jun 24.
Action representations can interact with object recognition processes. For example, so-called mirror neurons respond both when performing an action and when seeing or hearing such actions. Investigations of auditory object processing have largely focused on categorical discrimination, which begins within the initial 100 ms post-stimulus onset and subsequently engages distinct cortical networks. Whether action representations themselves contribute to auditory object recognition, and which kinds of actions recruit the auditory-visual mirror neuron system, remain poorly understood. We applied electrical neuroimaging analyses to auditory evoked potentials (AEPs) in response to sounds of man-made objects that were further subdivided between sounds conveying a socio-functional context and typically cuing a responsive action by the listener (e.g. a ringing telephone) and those that are not linked to such a context and do not typically elicit responsive actions (e.g. notes on a piano). This distinction was validated psychophysically by a separate cohort of listeners. Beginning approximately 300 ms post-stimulus onset, responses to such context-related sounds significantly differed from those to context-free sounds both in the strength and in the topography of the electric field. This latency is >200 ms subsequent to general categorical discrimination. Additionally, such topographic differences indicate that sounds of different action sub-types engage distinct configurations of intracranial generators. Statistical analysis of source estimations identified differential activity within premotor and inferior (pre)frontal regions (Brodmann's areas (BA) 6, BA8, and BA45/46/47) in response to sounds of actions typically cuing a responsive action. We discuss our results in terms of a spatio-temporal model of auditory object processing and the interplay between semantic and action representations.
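As a rough illustration of the "strength versus topography" logic behind the electrical neuroimaging analyses described above, the sketch below computes Global Field Power (response strength) and global map dissimilarity (topographic difference) from two condition-averaged AEP matrices. This is a minimal NumPy sketch under assumed data shapes (electrodes × time samples); the array names, electrode count, and sampling parameters are hypothetical and not taken from the study, and the paper's actual pipeline (non-parametric statistics, topographic clustering, distributed source estimations) is not reproduced here.

```python
import numpy as np

def global_field_power(erp):
    """Global Field Power: spatial standard deviation of the
    average-referenced potentials at each time point.
    erp: array of shape (n_electrodes, n_times)."""
    avg_ref = erp - erp.mean(axis=0, keepdims=True)  # re-reference to the common average
    return avg_ref.std(axis=0)

def global_dissimilarity(erp_a, erp_b):
    """Global map dissimilarity (DISS) between two conditions:
    RMS difference of GFP-normalized, average-referenced maps at
    each time point. Ranges from 0 (identical topographies)
    to 2 (inverted topographies)."""
    def normalize(erp):
        avg_ref = erp - erp.mean(axis=0, keepdims=True)
        gfp = avg_ref.std(axis=0)
        return avg_ref / gfp  # unit-strength maps, so only topography differs
    u_a, u_b = normalize(erp_a), normalize(erp_b)
    return np.sqrt(((u_a - u_b) ** 2).mean(axis=0))

# Illustrative usage with random data standing in for group-averaged AEPs
# (64 electrodes, 500 time samples -- hypothetical montage and epoch length).
rng = np.random.default_rng(0)
aep_context = rng.standard_normal((64, 500))       # e.g. context-related sounds
aep_context_free = rng.standard_normal((64, 500))   # e.g. context-free sounds

gfp_difference = global_field_power(aep_context) - global_field_power(aep_context_free)
diss_over_time = global_dissimilarity(aep_context, aep_context_free)
```

In this framework, a difference in GFP with a stable topography indicates modulation of response strength within a common generator configuration, whereas a reliable DISS indicates that the two conditions engage at least partially distinct configurations of intracranial generators, which is the inference the abstract draws for the post-300 ms period.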