O'Neill George C, Seymour Robert A, Mellor Stephanie, Alexander Nicholas A, Tierney Tim M, Bernachot Léa, Fahimi Hnazaee Mansoureh, Spedden Meaghan E, Timms Ryan C, Bush Daniel, Bestmann Sven, Brookes Matthew J, Barnes Gareth R
Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom.
Department of Imaging Neuroscience, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom.
Imaging Neurosci (Camb). 2025 Mar 3;3. doi: 10.1162/imag_a_00495. eCollection 2025.
Neuroimaging studies have typically relied on rigorously controlled experimental paradigms to probe cognition, in which movement is restricted, primitive, an afterthought, or merely used to indicate a subject's choice. Whilst powerful, these paradigms rarely resemble how we behave in everyday life, so a new generation of ecologically valid experiments is being developed. Magnetoencephalography (MEG) measures neural activity by sensing extracranial magnetic fields. It has recently been transformed from a large, static imaging modality to a wearable method in which participants can move freely. This makes wearable MEG systems a prime candidate for naturalistic experiments going forward. However, these experiments will also require novel methods to capture and integrate information about behaviour executed during neuroimaging, and it is not yet clear how this could be achieved. Here, we use video recordings of multi-limb dance moves, processed with open-source machine learning methods, to automatically identify time windows of interest in concurrent, wearable MEG data. In a first step, we compare a traditional, block-designed analysis of limb movements, where the times of interest are based on stimulus presentation, to an analysis pipeline based on hidden Markov model states derived from the video telemetry. Next, we show that it is possible to identify discrete modes of neuronal activity related to specific limbs and body posture by processing the participants' choreographed movement in a dancing paradigm. This demonstrates the potential of combining video telemetry with mobile magnetoencephalography and other legacy imaging methods for future studies of complex and naturalistic behaviours.
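To illustrate the kind of segmentation the abstract describes, the sketch below decodes a one-dimensional pose-telemetry signal (e.g. a single keypoint's velocity from video pose estimation) into discrete "still" vs. "moving" states with Viterbi decoding of a two-state Gaussian HMM. This is a minimal, hypothetical illustration, not the authors' pipeline: all parameter values, the two-state structure, and the toy observation sequence are invented for the example.

```python
import math

def log_gauss(x, mu, sigma):
    """Log-density of a univariate Gaussian observation model."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

def viterbi(obs, log_pi, log_A, means, sigmas):
    """Most likely hidden-state path for `obs` under a Gaussian HMM."""
    n_states = len(log_pi)
    # delta[s] = best log-probability of any path ending in state s
    delta = [log_pi[s] + log_gauss(obs[0], means[s], sigmas[s])
             for s in range(n_states)]
    back = []  # backpointers: best previous state for each (time, state)
    for x in obs[1:]:
        prev = delta[:]
        step, delta = [], []
        for s in range(n_states):
            best_prev = max(range(n_states), key=lambda r: prev[r] + log_A[r][s])
            delta.append(prev[best_prev] + log_A[best_prev][s]
                         + log_gauss(x, means[s], sigmas[s]))
            step.append(best_prev)
        back.append(step)
    # Trace the backpointers from the best final state
    path = [max(range(n_states), key=lambda s: delta[s])]
    for step in reversed(back):
        path.append(step[path[-1]])
    return path[::-1]

# Two hypothetical states: 0 = "still" (low velocity), 1 = "moving"
log_pi = [math.log(0.5)] * 2
log_A = [[math.log(0.9), math.log(0.1)],   # sticky transitions favour
         [math.log(0.1), math.log(0.9)]]   # staying in the same state
obs = [0.1, 0.0, 0.2, 2.1, 1.9, 2.2, 0.1, 0.0]  # toy keypoint velocities
states = viterbi(obs, log_pi, log_A, means=[0.0, 2.0], sigmas=[0.5, 0.5])
print(states)  # [0, 0, 0, 1, 1, 1, 0, 0]
```

The decoded state sequence marks contiguous time windows of movement, which could then be used to epoch concurrent MEG data; a full pipeline would instead fit state parameters from data (e.g. with an EM-based HMM library) over multi-dimensional pose features.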