One-Stage Shifted Laplacian Refining for Multiple Kernel Clustering.

Author information

You Jiali, Ren Zhenwen, Yu F Richard, You Xiaojian

Publication information

IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):11501-11513. doi: 10.1109/TNNLS.2023.3262590. Epub 2024 Aug 5.

Abstract

Graph learning can effectively characterize the similarity structure of sample pairs; hence, multiple kernel clustering based on graph learning (MKC-GL) achieves promising results on nonlinear clustering tasks. However, previous methods are confined to a "three-stage" scheme, namely affinity graph learning, Laplacian construction, and clustering-indicator extraction, which causes information distortion as the stages alternate. Meanwhile, the energy of Laplacian reconstruction and the necessary cluster information cannot be preserved simultaneously. To address these problems, we propose a one-stage shifted Laplacian refining (OSLR) method for multiple kernel clustering (MKC), whose "one-stage" scheme focuses on Laplacian learning rather than traditional graph learning. Concretely, our method treats each kernel matrix as an affinity graph rather than as ordinary data and constructs its corresponding Laplacian matrix in advance. In contrast to traditional Laplacian methods, we transform each Laplacian into an approximately shifted Laplacian (ASL) for refining a consensus Laplacian. We then project the consensus Laplacian onto a Fantope space to ensure that reconstruction information and clustering information concentrate on the larger eigenvalues. Theoretically, OSLR reduces both memory complexity and computation complexity to O(n). Moreover, experimental results show that it outperforms state-of-the-art MKC methods on multiple benchmark datasets.
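The pipeline the abstract describes can be illustrated with standard spectral-graph definitions: treating a kernel matrix as an affinity graph W, the normalized Laplacian is L = I - D^{-1/2} W D^{-1/2}, and its shifted counterpart L_s = 2I - L = I + D^{-1/2} W D^{-1/2} moves the cluster structure onto the largest eigenvalues. Below is a minimal sketch of this idea, not the authors' OSLR algorithm: the consensus step is replaced by a naive average of shifted Laplacians, and the Fantope projection onto {F : 0 ⪯ F ⪯ I, tr(F) = c} is realized by keeping the top-c eigenvectors. All function names and the toy kernels are illustrative assumptions.

```python
import numpy as np

def shifted_laplacian(K):
    """Build the shifted normalized Laplacian of kernel K, treated as an
    affinity graph (assumption: K is symmetric; negatives are clipped via abs)."""
    W = np.abs(K)
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt   # normalized Laplacian
    return 2.0 * np.eye(len(W)) - L                    # shifted: I + D^-1/2 W D^-1/2

def fantope_projection(L_s, c):
    """Rank-c Fantope element: keep eigenvectors of the c largest eigenvalues
    (a hard version of the projection; the paper's refinement is more involved)."""
    vals, vecs = np.linalg.eigh(L_s)   # eigenvalues in ascending order
    U = vecs[:, -c:]                   # top-c eigenvectors
    return U @ U.T                     # satisfies 0 <= F <= I, tr(F) = c

# Toy use: two kernels over the same samples, naive consensus by averaging.
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 3))
K1 = X @ X.T                                                 # linear kernel
K2 = np.exp(-np.sum((X[:, None] - X[None]) ** 2, axis=-1))   # RBF kernel, gamma=1
L_cons = 0.5 * (shifted_laplacian(K1) + shifted_laplacian(K2))
F = fantope_projection(L_cons, c=2)
print(np.allclose(F, F.T), round(np.trace(F), 6))
```

The projection matrix F is symmetric with trace exactly c, and its column space plays the role of the clustering indicator: running k-means on the top-c eigenvectors would complete a basic spectral-clustering baseline.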

