Zhang Rui, Zhang Yunxing, Lu Chengjun, Li Xuelong
IEEE Trans Pattern Anal Mach Intell. 2023 Apr;45(4):5329-5336. doi: 10.1109/TPAMI.2022.3202158. Epub 2023 Mar 7.
Graph autoencoders (GAEs) are powerful tools for representation learning in graph embedding. However, GAE performance depends heavily on the quality of the graph structure, i.e., of the adjacency matrix: GAEs perform poorly when the adjacency matrix is incomplete or perturbed. In this paper, two novel unsupervised graph embedding methods are proposed: unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE). The proposed methods broaden the applicability of GAEs to graph embedding, i.e., to general datasets that lack a graph structure. Meanwhile, the adaptive learning mechanism can initialize the adjacency matrix without sensitivity to parameter choices. In addition, the latent representations are embedded with the Laplacian graph structure to preserve the topology of the graph in the vector space. Moreover, when the original graph structure is incomplete, the adjacency matrix can be self-learned for better embedding performance. With adaptive learning, the proposed methods are much more robust to the graph structure. Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin on node clustering, node classification, link prediction, and graph visualization tasks.
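To make the pipeline the abstract builds on concrete, the sketch below shows a standard (pre-adaptive) graph autoencoder forward pass: a one-layer GCN-style encoder over a normalized adjacency matrix, followed by an inner-product decoder that reconstructs the adjacency. This is a minimal NumPy illustration of the generic GAE the paper starts from, not the authors' BAGE/VBAGE method; the toy graph, feature matrix, and untrained weights `W` are assumptions for demonstration only.

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalize A with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gae_forward(A, X, W):
    """One-layer GCN encoder + inner-product decoder (generic GAE)."""
    Z = normalize_adj(A) @ X @ W   # node embeddings in the latent space
    A_rec = sigmoid(Z @ Z.T)       # reconstructed (probabilistic) adjacency
    return Z, A_rec

# Toy 4-node graph: two disconnected edges, one-hot node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))        # untrained encoder weights (illustrative)

Z, A_rec = gae_forward(A, X, W)
print(Z.shape, A_rec.shape)        # (4, 2) (4, 4)
```

Because the encoder multiplies features by the normalized adjacency, any missing or noisy edge in `A` directly corrupts `Z` — which is exactly the sensitivity the abstract's adaptive graph learning is meant to address by treating `A` itself as a learnable quantity.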