Bigand Félix, Bianco Roberta, Abalde Sara F, Nguyen Trinh, Novembre Giacomo
Neuroscience of Perception & Action Lab, Italian Institute of Technology, Rome 00161, Italy
J Neurosci. 2025 May 21;45(21):e2372242025. doi: 10.1523/JNEUROSCI.2372-24.2025.
Real-world social cognition requires processing and adapting to multiple dynamic information streams. Interpreting neural activity in such ecological conditions remains a key challenge for neuroscience. This study leverages advancements in denoising techniques and multivariate modeling to extract interpretable EEG signals from pairs of (male and/or female) participants engaged in spontaneous dyadic dance. Using multivariate temporal response functions (mTRFs), we investigated how music acoustics, self-generated kinematics, other-generated kinematics, and social coordination uniquely contributed to EEG activity. Electromyogram recordings from ocular, face, and neck muscles were also modeled to control for artifacts. The mTRFs effectively disentangled neural signals associated with four processes: (I) auditory tracking of music, (II) control of self-generated movements, (III) visual monitoring of partner movements, and (IV) visual tracking of social coordination. We show that the first three neural signals are driven by event-related potentials: the P50-N100-P200 triggered by acoustic events, the central lateralized movement-related cortical potentials triggered by movement initiation, and the occipital N170 triggered by movement observation. Notably, the (previously unknown) neural marker of social coordination encodes the spatiotemporal alignment between dancers, surpassing the encoding of self- or partner-related kinematics taken alone. This marker emerges when partners can see each other, exhibits a topographical distribution over occipital areas, and is specifically driven by movement observation rather than initiation. Using data-driven kinematic decomposition, we further show that vertical bounce movements best drive observers' EEG activity. These findings highlight the potential of real-world neuroimaging, combined with multivariate modeling, to uncover the mechanisms underlying complex yet natural social behaviors.
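The multivariate temporal response functions (mTRFs) at the core of the study estimate how a continuous stimulus feature (e.g., the music's acoustic envelope or a kinematic trace) maps onto EEG via a set of time-lagged regression weights. The sketch below is a minimal single-feature illustration of that idea using ridge-regressed lagged regression on toy data; it is not the authors' pipeline, and the function names, lag range, and regularization value are illustrative assumptions.

```python
import numpy as np

def lagged_design(stim, lags):
    """Design matrix whose columns are time-shifted copies of the stimulus."""
    n = len(stim)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        # positive lag: stimulus at time t influences EEG at time t + lag
        X[lag:, j] = stim[: n - lag]
    return X

def fit_trf(stim, eeg, lags, alpha=1.0):
    """Ridge-regressed TRF weights: w = (X'X + alpha*I)^-1 X'y."""
    X = lagged_design(stim, lags)
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ eeg)

# Toy data: "EEG" responds to the stimulus with a known 3-sample delay.
rng = np.random.default_rng(0)
stim = rng.standard_normal(2000)
eeg = 0.8 * np.roll(stim, 3) + 0.05 * rng.standard_normal(2000)

lags = list(range(8))
w = fit_trf(stim, eeg, lags)
peak = int(np.argmax(np.abs(w)))
print(peak)  # the peak weight recovers the 3-sample stimulus-response delay
```

The paper's multivariate case stacks lagged copies of several predictors (acoustics, self- and other-generated kinematics, coordination, and EMG nuisance regressors) into one design matrix, so each predictor's weights reflect its unique contribution to the EEG after the others are accounted for.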