Xie Chengxin, Huang Jingui, Shi Yongjiang, Pang Hui, Gao Liting, Wen Xiumei
Hebei University of Architecture, Zhangjiakou, China.
Hunan Normal University, Changsha, China.
PeerJ Comput Sci. 2025 Jan 22;11:e2648. doi: 10.7717/peerj-cs.2648. eCollection 2025.
Graph auto-encoders are a crucial research area within graph neural networks, commonly used to generate graph embeddings in unsupervised learning while minimizing reconstruction error. Traditional graph auto-encoders encode each node's neighborhood information by reconstructing the graph with minimal loss, yielding node embedding representations. However, existing graph auto-encoder models often overlook node representations and fail to capture contextual node information in the graph, resulting in poor-quality embeddings. Accordingly, this study proposes the ensemble graph auto-encoders (E-GAE) model. It uses an ensemble random walk graph auto-encoder, a random walk graph auto-encoder with an ensemble network, and a graph attention auto-encoder to generate three node embedding matrices Z, which are then combined with adaptive weights to reconstruct a new node embedding matrix. This approach addresses the problem of low-quality embeddings. The model is evaluated on three publicly available datasets (Cora, Citeseer, and PubMed), and multiple experiments demonstrate its effectiveness: up to a 2.0% improvement on the link prediction task and a 9.4% improvement on the clustering task. Our code for this work can be found at https://github.com/xcgydfjjjderg/graphautoencoder.
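The abstract's central idea, fusing several per-model node embedding matrices with adaptive weights, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names and the use of a softmax over learnable logits as the "adaptive weights" are assumptions, and the three embedding matrices here are random stand-ins for the outputs of the three auto-encoders.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D array of weight logits."""
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def combine_embeddings(Z_list, logits):
    """Fuse per-model node embedding matrices into one matrix.

    Z_list : list of (num_nodes, dim) arrays, one per auto-encoder.
    logits : (len(Z_list),) array; softmax(logits) gives the adaptive
             weights (learned jointly with the encoders in practice).
    """
    weights = softmax(logits)
    return sum(w * Z for w, Z in zip(weights, Z_list))

# Toy example: three embeddings for 4 nodes in 2 dimensions,
# standing in for the three auto-encoders' outputs.
rng = np.random.default_rng(0)
Zs = [rng.normal(size=(4, 2)) for _ in range(3)]
logits = np.zeros(3)  # equal weights before any training
Z_fused = combine_embeddings(Zs, logits)
```

With zero logits the weights are uniform, so the fused matrix is simply the mean of the three inputs; during training the logits would shift to favor whichever encoder yields the most useful embedding.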