Fengzhen Tang, Mengling Fan, Peter Tino
IEEE Trans Neural Netw Learn Syst. 2021 Jan;32(1):281-292. doi: 10.1109/TNNLS.2020.2978514. Epub 2021 Jan 4.
Learning vector quantization (LVQ) is a simple and efficient classification method, enjoying great popularity. However, in many classification scenarios, such as electroencephalogram (EEG) classification, the input features are represented by symmetric positive-definite (SPD) matrices that live on a curved manifold rather than by vectors in flat Euclidean space. In this article, we propose a new classification method, in the LVQ framework, for data points that live on curved Riemannian manifolds. The proposed method replaces the Euclidean distance in generalized LVQ (GLVQ) with the appropriate Riemannian metric. We instantiate the proposed method for the Riemannian manifold of SPD matrices equipped with the Riemannian natural metric. Empirical investigations on synthetic data and real-world motor imagery EEG data demonstrate that the proposed generalized learning Riemannian space quantization can significantly outperform Euclidean GLVQ, generalized relevance LVQ (GRLVQ), and generalized matrix LVQ (GMLVQ). The proposed method is also competitive with state-of-the-art methods on EEG classification of motor imagery tasks.
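The core idea described above, replacing GLVQ's Euclidean distance with a Riemannian distance between SPD matrices, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the affine-invariant Riemannian metric on SPD matrices (one common choice of "natural" metric) and the standard GLVQ relative-distance cost; the function names `riemannian_dist` and `glvq_mu` are illustrative.

```python
import numpy as np

def spd_inv_sqrt(A):
    # Inverse matrix square root of an SPD matrix via eigendecomposition:
    # A^{-1/2} = V diag(1/sqrt(w)) V^T, with A = V diag(w) V^T.
    w, V = np.linalg.eigh(A)
    return (V / np.sqrt(w)) @ V.T

def riemannian_dist(A, B):
    # Affine-invariant Riemannian distance between SPD matrices:
    # d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F,
    # computed from the eigenvalues of the symmetric matrix A^{-1/2} B A^{-1/2}.
    Ais = spd_inv_sqrt(A)
    M = Ais @ B @ Ais
    w = np.linalg.eigvalsh(M)
    return np.sqrt(np.sum(np.log(w) ** 2))

def glvq_mu(X, w_plus, w_minus, dist):
    # GLVQ relative-distance cost for one sample X:
    # mu = (d+ - d-) / (d+ + d-), where d+ is the distance to the closest
    # prototype of the correct class and d- to the closest wrong-class one.
    # mu < 0 means X is classified correctly; training minimizes a
    # monotone function of mu by moving prototypes along the manifold.
    dp, dm = dist(X, w_plus), dist(X, w_minus)
    return (dp - dm) / (dp + dm)
```

A useful sanity check of the metric is its affine invariance: `riemannian_dist(A, B)` is unchanged when both arguments are congruence-transformed by any invertible matrix G, i.e. d(GAGᵀ, GBGᵀ) = d(A, B), which is precisely the property Euclidean GLVQ lacks on SPD inputs.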