College of Intelligence Science and Technology, National University of Defense Technology, Changsha, Hunan, China; Wellcome Centre for Human Neuroimaging, UCL Institute of Neurology, University College London, London, United Kingdom.
Wellcome Centre for Human Neuroimaging, UCL Institute of Neurology, University College London, London, United Kingdom.
Neuroimage. 2020 May 1;211:116595. doi: 10.1016/j.neuroimage.2020.116595. Epub 2020 Feb 3.
This paper asks whether integrating multimodal EEG and fMRI data offers a better characterisation of functional brain architectures than either modality alone. This evaluation rests upon a dynamic causal model that generates both EEG and fMRI data from the same neuronal dynamics. We introduce the use of Bayesian fusion to provide informative (empirical) neuronal priors - derived from dynamic causal modelling (DCM) of EEG data - for subsequent DCM of fMRI data. To illustrate this procedure, we generated synthetic EEG and fMRI timeseries for a mismatch negativity (or auditory oddball) paradigm, using biologically plausible model parameters (i.e., posterior expectations from a DCM of empirical, open-access EEG data). Using model inversion, we found that Bayesian fusion provided a substantial improvement in marginal likelihood or model evidence, indicating a more efficient estimation of model parameters, relative to inverting fMRI data alone. We quantified the benefits of multimodal fusion with the information gain pertaining to neuronal and haemodynamic parameters - as measured by the Kullback-Leibler divergence between their prior and posterior densities. Remarkably, this analysis suggested that EEG data can improve estimates of haemodynamic parameters, thereby furnishing proof of principle that Bayesian fusion of EEG and fMRI is necessary to resolve conditional dependencies between neuronal and haemodynamic estimators. These results suggest that Bayesian fusion may offer a useful approach that exploits the complementary temporal (EEG) and spatial (fMRI) precision of different data modalities. We envisage that this procedure could be applied to any multimodal dataset that can be explained by a DCM with a common neuronal parameterisation.
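The information-gain measure described above - the Kullback-Leibler divergence between the prior and posterior densities over model parameters - has a closed form when, as in variational DCM, both densities are (multivariate) Gaussian. The following is a minimal sketch of that computation, assuming hypothetical prior and posterior moments chosen purely for illustration; it is not part of the authors' SPM/DCM implementation.

```python
import numpy as np

def kl_gaussian(mu_q, Sigma_q, mu_p, Sigma_p):
    """KL divergence KL(q || p) between two multivariate Gaussians.

    With q the posterior and p the prior, this is the information gain
    (in nats) afforded by the data about the parameters.
    """
    k = len(mu_q)
    Sigma_p_inv = np.linalg.inv(Sigma_p)
    diff = mu_p - mu_q
    term_trace = np.trace(Sigma_p_inv @ Sigma_q)       # tr(Sp^-1 Sq)
    term_mahal = diff @ Sigma_p_inv @ diff             # (mp-mq)' Sp^-1 (mp-mq)
    # log-det ratio; slogdet is used for numerical stability
    term_logdet = (np.linalg.slogdet(Sigma_p)[1]
                   - np.linalg.slogdet(Sigma_q)[1])
    return 0.5 * (term_trace + term_mahal - k + term_logdet)

# Illustrative (made-up) moments for two haemodynamic parameters:
mu_prior, Sigma_prior = np.zeros(2), np.eye(2)
# fMRI-only inversion: modest shrinkage of posterior uncertainty
gain_fmri = kl_gaussian(np.array([0.2, -0.1]), 0.8 * np.eye(2),
                        mu_prior, Sigma_prior)
# Bayesian fusion (EEG-informed priors): tighter posterior covariance
gain_fused = kl_gaussian(np.array([0.2, -0.1]), 0.4 * np.eye(2),
                         mu_prior, Sigma_prior)
```

Under these (illustrative) numbers, the tighter fused posterior yields a larger divergence from the prior, i.e. a greater information gain - the quantity the paper uses to score the benefit of multimodal fusion.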