Department of Computer Science, and HIIT, University of Helsinki, Helsinki, Finland.
Philos Trans A Math Phys Eng Sci. 2012 Dec 31;371(1984):20110534. doi: 10.1098/rsta.2011.0534. Print 2013 Feb 13.
Independent component analysis is a probabilistic method for learning a linear transform of a random vector. The goal is to find components that are maximally independent and non-Gaussian (non-normal). Its fundamental difference from classical multivariate statistical methods lies in the assumption of non-Gaussianity, which enables the identification of the original, underlying components, something classical methods cannot do. The basic theory of independent component analysis was mainly developed in the 1990s and summarized, for example, in our monograph in 2001. Here, we provide an overview of some recent developments in the theory since the year 2000. The main topics are: analysis of causal relations, testing independent components, analysing multiple datasets (three-way data), modelling dependencies between the components, and improved methods for estimating the basic model.
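To make the basic model concrete, the sketch below illustrates the linear ICA setting described above: independent, non-Gaussian sources are mixed by an unknown matrix, and an unmixing transform is estimated from the mixtures alone. It assumes NumPy and the FastICA implementation in scikit-learn purely for illustration; it is not the estimation methods developed in the paper, and the recovered components are only identified up to permutation, sign and scale.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_samples = 5000

# Two independent, non-Gaussian sources (Laplacian and uniform);
# non-Gaussianity is what makes the original components identifiable.
s = np.column_stack([
    rng.laplace(size=n_samples),
    rng.uniform(-1.0, 1.0, size=n_samples),
])

# Unknown mixing matrix A; only the linear mixtures x = A s are observed.
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])
x = s @ A.T

# Estimate the unmixing transform from the mixtures alone.
ica = FastICA(n_components=2, random_state=0)
s_hat = ica.fit_transform(x)

# Correlations between true and estimated sources: one entry per row/column
# is close to +/-1, reflecting recovery up to permutation, sign and scale.
corr = np.corrcoef(s.T, s_hat.T)[:2, 2:]
print(np.round(corr, 2))
```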