Lu Jing, Guo Sijia, Chen Mingming, Wang Weixia, Yang Hua, Guo Daqing, Yao Dezhong
The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China; Department of Composition, Sichuan Conservatory of Music, Chengdu, Sichuan, China.
Medicine (Baltimore). 2018 Jan;97(2):e9628. doi: 10.1097/MD.0000000000009628.
Many methods have been developed to translate a human electroencephalogram (EEG) into music. In addition to EEG, functional magnetic resonance imaging (fMRI) is another method used to study the brain and can reflect physiological processes. In 2012, we established a method that uses simultaneously recorded fMRI and EEG signals to produce EEG-fMRI music, which represents a step toward scale-free brain music. In this study, we used a neural mass model, the Jansen-Rit model, to simulate activity in several cortical brain regions. The interactions between different brain regions were represented by the average normalized diffusion tensor imaging (DTI) structural connectivity, with a coupling coefficient that modulated the coupling strength. Seventy-eight brain regions were adopted from the Automated Anatomical Labeling (AAL) template. Furthermore, we used the Balloon-Windkessel hemodynamic model to transform neural activity into a blood-oxygen-level dependent (BOLD) signal. Because the fMRI BOLD signal changes slowly, we used a sampling rate of 250 Hz to produce the time series for music generation. Then, BOLD music was generated for each region from these simulated BOLD signals. Because the BOLD signal is scale free, these music pieces were also scale free, a property shared by classical music. Here, to simulate the case of an epileptic patient, we changed the parameter that determines the amplitude of the excitatory postsynaptic potential (EPSP) in the neural mass model. Finally, we obtained BOLD music for healthy and epileptic patients. The difference in levels of arousal between the two pieces of music may provide a potential tool for discriminating between these populations, provided the difference can be confirmed with more real data.
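The simulation pipeline the abstract describes starts from the Jansen-Rit neural mass model, in which an EPSP-amplitude parameter (conventionally A) is raised to mimic epileptiform activity. The following is a minimal, hypothetical single-region sketch using the standard published Jansen-Rit parameter values and simple Euler integration; it omits the paper's DTI-based inter-regional coupling, the Balloon-Windkessel hemodynamic stage, and the music mapping, and the constant external drive `p` stands in for the stochastic input used in practice.

```python
import math

# Standard Jansen-Rit parameters (not taken from this paper).
# A is the EPSP amplitude that the study varies to mimic epilepsy.
A, B = 3.25, 22.0          # EPSP / IPSP amplitudes (mV)
a, b = 100.0, 50.0         # inverse synaptic time constants (1/s)
C = 135.0                  # connectivity constant
C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C
e0, v0, r = 2.5, 6.0, 0.56 # sigmoid parameters

def sigm(v):
    """Sigmoid converting mean membrane potential (mV) to firing rate (1/s)."""
    return 2.0 * e0 / (1.0 + math.exp(r * (v0 - v)))

def simulate(T=2.0, dt=1e-4, p=220.0, A_epsp=A):
    """Euler integration of one Jansen-Rit column.

    Returns the pyramidal-cell potential y1 - y2 at each step; this is
    the quantity a hemodynamic model would take as neural activity.
    """
    y = [0.0] * 6  # states y0, y1, y2 and derivatives y3, y4, y5
    out = []
    for _ in range(int(T / dt)):
        y0, y1, y2, y3, y4, y5 = y
        dy3 = A_epsp * a * sigm(y1 - y2) - 2.0 * a * y3 - a * a * y0
        dy4 = A_epsp * a * (p + C2 * sigm(C1 * y0)) - 2.0 * a * y4 - a * a * y1
        dy5 = B * b * C4 * sigm(C3 * y0) - 2.0 * b * y5 - b * b * y2
        y = [y0 + dt * y3, y1 + dt * y4, y2 + dt * y5,
             y3 + dt * dy3, y4 + dt * dy4, y5 + dt * dy5]
        out.append(y[1] - y[2])
    return out

healthy = simulate()              # default EPSP amplitude
epileptic = simulate(A_epsp=3.6)  # raised EPSP amplitude (illustrative value)
```

In a multi-region extension of this sketch, each of the 78 AAL regions would run its own column, with the DTI connectivity matrix (scaled by the coupling coefficient) feeding each region's output firing rate into the external input of the others.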