Laurenz Wiskott
Computational Neurobiology Laboratory, Salk Institute for Biological Studies, San Diego, CA 92186-5800, USA.
Neural Comput. 2003 Sep;15(9):2147-77. doi: 10.1162/089976603322297331.
Temporal slowness is a learning principle that allows learning of invariant representations by extracting slowly varying features from quickly varying input signals. Slow feature analysis (SFA) is an efficient algorithm based on this principle and has been applied to the learning of translation, scale, and other invariances in a simple model of the visual system. Here, a theoretical analysis of the optimization problem solved by SFA is presented, which provides a deeper understanding of the simulation results obtained in previous studies.
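To make the slowness principle concrete, the following is a minimal sketch of *linear* SFA in NumPy (the paper's analysis covers the general, typically nonlinearly expanded case; the function name `linear_sfa` and all variable names are illustrative, not from the original work). The input signal is whitened, and the slowest features are the directions of minimal variance of the temporal derivative, found by an eigendecomposition:

```python
import numpy as np

def linear_sfa(x, n_components=1):
    """Linear slow feature analysis (illustrative sketch).

    x: array of shape (T, D), a multivariate time series.
    Returns the n_components slowest unit-variance linear features,
    shape (T, n_components).
    """
    # Center the signal.
    x = x - x.mean(axis=0)
    # Whiten (sphere) the data so all directions have unit variance.
    cov = np.cov(x, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    W = eigvec / np.sqrt(eigval)          # whitening matrix
    z = x @ W
    # Approximate the temporal derivative by finite differences.
    zdot = np.diff(z, axis=0)
    # Slowest features = directions of minimal derivative variance,
    # i.e. eigenvectors of the derivative covariance with the
    # smallest eigenvalues (np.linalg.eigh returns them first).
    dcov = np.cov(zdot, rowvar=False)
    _, dvec = np.linalg.eigh(dcov)
    return z @ dvec[:, :n_components]
```

Applied to a linear mixture of a slow and a fast sinusoid, the first extracted feature recovers the slow source up to sign and scale; in the visual-system model discussed in the abstract, a nonlinear (e.g. quadratic) expansion precedes this linear step.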