Shen Chunhua, Kim Junae, Wang Lei
NICTA, Canberra Research Laboratory, ACT, Australia.
IEEE Trans Neural Netw. 2010 Sep;21(9):1524-30. doi: 10.1109/TNN.2010.2052630. Epub 2010 Aug 12.
For many machine learning algorithms, such as k-nearest neighbor (k-NN) classifiers and k-means clustering, success often depends heavily on the metric used to compute distances between data points. An effective way to define such a metric is to learn it from a set of labeled training samples. In this work, we propose a fast and scalable algorithm for learning a Mahalanobis distance metric. The Mahalanobis metric can be viewed as the Euclidean distance metric applied to input data that have been linearly transformed. By employing the principle of margin maximization to achieve better generalization performance, the algorithm formulates metric learning as a convex optimization problem whose unknown variable is a positive semidefinite (p.s.d.) matrix. Based on an important theorem that a trace-one p.s.d. matrix can always be represented as a convex combination of multiple rank-one matrices, our algorithm accommodates any differentiable loss function and solves the resulting optimization problem using a specialized gradient descent procedure. Throughout the optimization, the proposed algorithm maintains the positive semidefiniteness of the matrix variable, which is essential for a valid Mahalanobis metric. Compared with conventional methods such as standard interior-point algorithms or the special solver used in large margin nearest neighbor, our algorithm is much more efficient and scales better. Experiments on benchmark data sets suggest that, compared with state-of-the-art metric learning algorithms, our algorithm achieves comparable classification accuracy with reduced computational complexity.
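The key mechanism described above can be illustrated with a short sketch. The Python code below is a minimal illustration, not the authors' exact solver: the triplet-based constraints, the logistic surrogate loss, and the 2/(t+2) step size are assumptions made here for concreteness. It maintains a trace-one p.s.d. matrix by repeatedly mixing in rank-one matrices, so the iterate remains a convex combination of rank-one matrices (and hence p.s.d.) at every step, as the abstract describes.

```python
import numpy as np

def learn_metric(X, triplets, n_iters=100):
    """Sketch: learn a trace-one p.s.d. Mahalanobis matrix M over the set
    {M >= 0, tr(M) = 1} with a conditional-gradient-style update.

    X        : (n, d) data matrix
    triplets : list of (i, j, k) with x_j in the same class as x_i,
               and x_k in a different class
    """
    n, d = X.shape
    M = np.eye(d) / d                                  # feasible start: p.s.d., trace one

    # Difference vectors for each triplet (assumed constraint construction).
    Aj = np.array([X[i] - X[j] for i, j, k in triplets])   # (T, d)
    Ak = np.array([X[i] - X[k] for i, j, k in triplets])   # (T, d)

    for t in range(n_iters):
        # Margins rho_t = d_M(x_i, x_k) - d_M(x_i, x_j); large margins are good.
        rho = (np.einsum('td,de,te->t', Ak, M, Ak)
               - np.einsum('td,de,te->t', Aj, M, Aj))

        # Differentiable (logistic) surrogate loss sum_t log(1 + exp(-rho_t)).
        w = 1.0 / (1.0 + np.exp(np.clip(rho, -50.0, 50.0)))   # -d loss / d rho_t
        G = -((Ak * w[:, None]).T @ Ak - (Aj * w[:, None]).T @ Aj)  # gradient w.r.t. M

        # Best rank-one, trace-one descent direction: u u^T with u the
        # eigenvector of G having the smallest eigenvalue.
        evals, evecs = np.linalg.eigh(G)
        u = evecs[:, 0]
        S = np.outer(u, u)

        # Convex combination with a rank-one matrix keeps M p.s.d. and trace one.
        gamma = 2.0 / (t + 2.0)
        M = (1.0 - gamma) * M + gamma * S

    return M
```

With M learned, the squared Mahalanobis distance between points a and b is (a - b)^T M (a - b), i.e., the squared Euclidean distance after the linear transform L with M = L^T L, matching the interpretation in the abstract.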