Department of Informatics, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece.
Neuroinformatics Group, Aristotle University of Thessaloniki, Thessaloniki, Greece.
J Neural Eng. 2021 Jun 2;18(4). doi: 10.1088/1741-2552/abffe6.
The aesthetic evaluation of music is strongly dependent on the listener and reflects manifold brain processes that go well beyond the perception of the incident sound. Being a high-level cognitive reaction, it is difficult to predict from the acoustic features of the audio signal alone, and this poses serious challenges to contemporary music recommendation systems. We attempted to decode music appraisal from brain activity, recorded via wearable EEG, during music listening.

To comply with the dynamic nature of music stimuli, cross-frequency coupling measurements were employed in a time-evolving manner to capture the evolving interactions between distinct brain rhythms during music listening. The brain response to music was first represented as a continuous flow of functional couplings, referring to both regional and inter-regional brain dynamics, and then modelled as an ensemble of time-varying (sub)networks. Dynamic graph centrality measures were next derived as the final feature-engineering step and, lastly, a support-vector machine was trained to decode the subjective music appraisal. A carefully designed experimental paradigm provided the labeled brain signals.

Using data from 20 subjects, dynamic programming to tailor the decoder to each subject individually, and cross-validation, we demonstrated highly satisfactory performance (MAE = 0.948, = 0.63) that can be attributed mostly to interactions of the left frontal gamma rhythm. In addition, our music-appraisal decoder was also applied to part of the DEAP dataset with similar success. Finally, even a generic version of the decoder (common to all subjects) was found to perform sufficiently well.

A novel brain-signal decoding scheme was introduced and validated empirically on suitable experimental data. It requires simple operations and leaves room for real-time implementation. Both the code and the experimental data are publicly available.
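As a rough illustration of the feature-engineering pipeline described in the abstract (cross-frequency coupling → inter-regional network → graph centrality), the sketch below computes a normalized mean-vector-length phase-amplitude coupling matrix between EEG channels and reduces it to a simple degree-centrality feature vector. Everything here is an illustrative assumption rather than the paper's exact method: the band choices (theta phase, gamma amplitude), the synthetic data, and the function names are hypothetical, and in the actual pipeline such features would be computed over sliding windows and fed to a support-vector machine rather than used directly.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (equivalent in effect to scipy.signal.hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def bandpass(x, fs, lo, hi):
    """Crude FFT-mask band-pass filter (illustrative, not a proper FIR/IIR design)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[(f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(X, n=len(x))

def coupling_matrix(eeg, fs, phase_band=(4, 8), amp_band=(30, 45)):
    """C[i, j]: normalized mean-vector-length PAC between the phase of channel i
    (theta band) and the amplitude envelope of channel j (gamma band).
    Normalizing by the mean envelope bounds each entry in [0, 1]."""
    n_ch = eeg.shape[0]
    ph = [np.angle(analytic_signal(bandpass(c, fs, *phase_band))) for c in eeg]
    am = [np.abs(analytic_signal(bandpass(c, fs, *amp_band))) for c in eeg]
    C = np.zeros((n_ch, n_ch))
    for i in range(n_ch):
        for j in range(n_ch):
            C[i, j] = np.abs(np.mean(am[j] * np.exp(1j * ph[i]))) / np.mean(am[j])
    return C

# Synthetic stand-in for a short multichannel EEG segment.
rng = np.random.default_rng(0)
fs, n_ch, n_sec = 128, 4, 10
eeg = rng.standard_normal((n_ch, fs * n_sec))

C = coupling_matrix(eeg, fs)
centrality = C.sum(axis=0)  # simple degree centrality per channel
```

A regressor (e.g. `sklearn.svm.SVR`) trained on such centrality vectors, computed per time window, would complete the decoding step that maps brain responses to appraisal ratings.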