Department of Bioengineering and Centre for Neurotechnology, Imperial College London, London, SW7 2AZ, United Kingdom.
Centre for Discovery Brain Sciences, The University of Edinburgh, Edinburgh, EH8 9XD, United Kingdom.
J Comput Neurosci. 2023 Feb;51(1):1-21. doi: 10.1007/s10827-022-00839-3. Epub 2022 Dec 16.
Recent developments in experimental neuroscience make it possible to simultaneously record the activity of thousands of neurons. However, the development of analysis approaches for such large-scale neural recordings has been slower than that of approaches applicable to single-cell experiments. One approach that has gained recent popularity is neural manifold learning. This approach takes advantage of the fact that, even though neural datasets may be very high-dimensional, the dynamics of neural activity tend to traverse a much lower-dimensional space. The topological structures formed by these low-dimensional neural subspaces are referred to as "neural manifolds", and may provide insight linking neural circuit dynamics with cognitive function and behavioral performance. In this paper we review a number of linear and non-linear approaches to neural manifold learning, including principal component analysis (PCA), multi-dimensional scaling (MDS), Isomap, locally linear embedding (LLE), Laplacian eigenmaps (LEM), t-SNE, and uniform manifold approximation and projection (UMAP). We outline these methods under a common mathematical nomenclature and compare their advantages and disadvantages with respect to their use for neural data analysis. We apply them to a number of datasets from the published literature, comparing the manifolds that result from their application to hippocampal place cells, motor cortical neurons during a reaching task, and prefrontal cortical neurons during a multi-behavior task. We find that in many circumstances linear algorithms produce results similar to those of non-linear methods, although in particular cases where behavioral complexity is greater, non-linear methods tend to find lower-dimensional manifolds, at the possible expense of interpretability.
We demonstrate that these methods are applicable to the study of neurological disorders through simulation of a mouse model of Alzheimer's Disease, and speculate that neural manifold analysis may help us to understand the circuit-level consequences of molecular and cellular neuropathology.
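The linear-versus-nonlinear comparison described above can be illustrated with a minimal sketch. The code below is not the authors' analysis pipeline; it generates hypothetical synthetic "neural" data whose 200-dimensional activity is driven by a single circular latent variable (loosely analogous to place-cell tuning on a circular track), then embeds it with one linear method (PCA) and one non-linear method (Isomap) from scikit-learn. All variable names and parameter choices (e.g. `n_neighbors=10`) are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)

# Hypothetical synthetic data: 1000 time points of activity from 200 "neurons"
# whose firing is driven by one circular latent variable, so the data lie near
# a 1-D ring embedded in a 200-dimensional ambient space.
theta = rng.uniform(0, 2 * np.pi, 1000)                    # latent variable
centers = rng.uniform(0, 2 * np.pi, 200)                   # preferred phases
rates = np.exp(np.cos(theta[:, None] - centers[None, :]))  # cosine tuning
X = rates + 0.1 * rng.standard_normal(rates.shape)         # noisy "activity"

# Linear manifold learning: PCA projects onto directions of maximal variance.
pca_embedding = PCA(n_components=2).fit_transform(X)

# Non-linear manifold learning: Isomap preserves geodesic distances measured
# along the manifold rather than straight-line distances in the ambient space.
iso_embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

print(pca_embedding.shape, iso_embedding.shape)
```

For a simple ring-shaped manifold like this, both embeddings typically recover the circular structure; the abstract's point is that the methods diverge more as behavioral (and hence manifold) complexity grows.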