Mind the Noise When Identifying Computational Models of Cognition from Brain Activity.

Authors

Kolossa Antonio, Kopp Bruno

Affiliation

Department of Neurology, Hannover Medical School, Hannover, Germany.

Publication

Front Neurosci. 2016 Dec 27;10:573. doi: 10.3389/fnins.2016.00573. eCollection 2016.

Abstract

The aim of this study was to analyze how measurement error affects the validity of modeling studies in computational neuroscience. A synthetic validity test was created using simulated P300 event-related potentials as an example. The model space comprised four computational models of single-trial P300 amplitude fluctuations which differed in terms of complexity and dependency. The single-trial fluctuation of simulated P300 amplitudes was computed on the basis of one of the models, at various levels of measurement error and at various numbers of data points. Bayesian model selection was performed based on exceedance probabilities. At very low numbers of data points, the least complex model generally outperformed the data-generating model. Invalid model identification also occurred at low levels of data quality and under low numbers of data points if the winning model's predictors were closely correlated with the predictors from the data-generating model. Given sufficient data quality and numbers of data points, the data-generating model could be correctly identified, even against models which were very similar to the data-generating model. Thus, a number of variables affect the validity of computational modeling studies, and data quality and numbers of data points are among the main factors relevant to the issue. Further, the nature of the model space (i.e., model complexity, model dependency) should not be neglected. This study provided quantitative results which show the importance of ensuring the validity of computational modeling via adequately prepared studies. Synthetic validity tests are recommended for future applications. Beyond that, we propose to render the demonstration of sufficient validity via adequate simulations mandatory for computational modeling studies.
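The logic of such a synthetic validity test can be illustrated with a minimal sketch. The predictors, coefficients, and noise levels below are illustrative assumptions, not the paper's actual models, and BIC-based scoring is used here as a simple stand-in for the paper's Bayesian model selection via exceedance probabilities. Data are generated from one model (M2) and each candidate model is then asked to explain them, across varying numbers of trials and noise levels:

```python
import numpy as np

def bic(y, X):
    """Bayesian Information Criterion for an ordinary least-squares
    fit of y on design matrix X, assuming Gaussian noise."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + k * np.log(n)

def run_test(n_trials, noise_sd, seed=0):
    """Generate single-trial amplitudes from one model, score all
    candidates, and return the winning model's name."""
    rng = np.random.default_rng(seed)
    t = np.arange(1, n_trials + 1)
    ones = np.ones(n_trials)
    drift = np.log(t)          # predictor of the data-generating model
    lookalike = np.sqrt(t)     # closely correlated rival predictor
    models = {
        "M1": np.column_stack([ones]),            # least complex
        "M2": np.column_stack([ones, drift]),     # data-generating
        "M3": np.column_stack([ones, lookalike]), # dependent rival
    }
    # Simulated single-trial amplitudes: M2 plus measurement error.
    y = 2.0 + 0.5 * drift + rng.normal(0.0, noise_sd, n_trials)
    scores = {name: bic(y, X) for name, X in models.items()}
    return min(scores, key=scores.get)  # lowest BIC wins

# Sweep data quality (noise_sd) and number of data points (n_trials).
for n in (20, 200, 2000):
    for sd in (0.25, 2.5):
        wins = [run_test(n, sd, seed=s) for s in range(50)]
        tally = {m: wins.count(m) for m in ("M1", "M2", "M3")}
        print(f"n={n:4d}, noise_sd={sd}: {tally}")
```

With few trials or heavy noise, the complexity penalty tends to favor the intercept-only model M1, and the correlated rival M3 can be mistaken for M2; with enough clean data, the data-generating model M2 is recovered reliably, mirroring the pattern the abstract describes.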

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5214/5186787/951a38228dda/fnins-10-00573-g0001.jpg
