Christopher Versteeg, Andrew R. Sedler, Jonathan D. McCart, Chethan Pandarinath
Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA.
Center for Machine Learning, Georgia Institute of Technology, Atlanta, GA, USA.
arXiv preprint arXiv:2309.06402v1, 2023 Sep 12.
The advent of large-scale neural recordings has enabled new approaches that aim to discover the computational mechanisms of neural circuits by understanding the rules that govern how their state evolves over time. While these rules cannot be directly measured, they can typically be approximated by low-dimensional models in a latent space. How these models represent the mapping from latent space to neural space can affect the interpretability of the latent representation. We show that typical choices for this mapping (e.g., linear or MLP) often lack the property of injectivity, meaning that changes in latent state need not produce any change in activity in the neural space. During training, non-injective readouts incentivize the invention of dynamics that misrepresent the underlying system and the computation it performs. Combining our injective Flow readout with prior work on interpretable latent dynamics models, we created the Ordinary Differential equations autoencoder with Injective Nonlinear readout (ODIN), which learns to capture latent dynamical systems that are nonlinearly embedded into observed neural activity via an approximately injective nonlinear mapping. We show that ODIN can recover nonlinearly embedded systems from simulated neural activity, even when the nature of the system and embedding are unknown. Additionally, we show that ODIN enables the unsupervised recovery of underlying dynamical features (e.g., fixed points) and embedding geometry. When applied to biological neural recordings, ODIN can reconstruct neural activity with accuracy comparable to previous state-of-the-art methods while using substantially fewer latent dimensions. Overall, ODIN's accuracy in recovering ground-truth latent features and its ability to reconstruct neural activity accurately at low dimensionality make it a promising method for distilling interpretable dynamics that can help explain neural computation.
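The injectivity failure described above can be made concrete with a small numerical sketch (illustrative only, not the paper's code; all variable names are hypothetical). A linear readout from a latent space to a lower-dimensional observation space necessarily has a nullspace, so some latent changes are invisible in the output, whereas an invertible nonlinear map (standing in for a flow-style readout) registers every latent change:

```python
import numpy as np

rng = np.random.default_rng(0)

# A linear readout W from a 3-D latent space to a 2-D "neural" space.
# Because W has rank at most 2, it has a nonzero nullspace: moving the
# latent state along that direction changes nothing in the output.
W = rng.normal(size=(2, 3))
_, _, Vt = np.linalg.svd(W)
null_dir = Vt[-1]                      # direction with W @ null_dir ~ 0

z = rng.normal(size=3)                 # a latent state
z_moved = z + 5.0 * null_dir           # a large latent change ...
print(np.allclose(W @ z, W @ z_moved)) # ... invisible to the readout: True

# By contrast, a toy injective nonlinear readout: an elementwise strictly
# monotone nonlinearity followed by a full-rank square matrix. Any latent
# change must now change the output.
A = rng.normal(size=(3, 3))            # full rank with probability 1
f = lambda v: A @ np.tanh(v)           # injective composition on R^3
print(np.allclose(f(z), f(z_moved)))   # the change is visible: False
```

A normalizing-flow readout as used in ODIN is built from invertible (hence injective) transformations by construction; the toy map `f` here only mimics that property in the simplest possible way.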