Xu Lixiang, Liu Haifeng, Yuan Xin, Chen Enhong, Tang Yuanyan
IEEE Trans Cybern. 2024 Dec;54(12):7320-7332. doi: 10.1109/TCYB.2024.3465213. Epub 2024 Nov 27.
While highly influential in deep learning, especially in natural language processing, the Transformer model has not exhibited competitive performance in unsupervised graph representation learning (UGRL). Conventional approaches, which focus on local substructures of the graph, offer simplicity but often fall short of encapsulating the graph's comprehensive structural information. This deficiency leads to suboptimal generalization performance. To address this, we propose the GraKerformer model, a variant of the standard Transformer architecture, to mitigate the shortfall in structural information representation and enhance performance in UGRL. By leveraging the shortest-path graph kernel (SPGK) to weight attention scores and combining it with graph neural networks, GraKerformer effectively encodes the nuanced structural information of graphs. We conducted evaluations on benchmark datasets for graph classification to validate the superior performance of our approach.
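To make the core idea concrete, the sketch below shows one plausible way shortest-path information can weight attention scores. This is a simplified illustration, not the paper's actual GraKerformer implementation: the function names are hypothetical, the exponential-decay weighting `exp(-gamma * d)` is an assumption, and the paper's SPGK operates on graph substructures rather than this bare node-pair distance weighting.

```python
import numpy as np

def shortest_path_lengths(adj):
    # All-pairs shortest-path distances on an unweighted graph via BFS.
    # adj: n x n 0/1 adjacency list-of-lists; disconnected pairs get inf.
    n = len(adj)
    dist = np.full((n, n), np.inf)
    for s in range(n):
        dist[s, s] = 0
        frontier, d = [s], 0
        while frontier:
            d += 1
            nxt = []
            for u in frontier:
                for v in range(n):
                    if adj[u][v] and dist[s, v] == np.inf:
                        dist[s, v] = d
                        nxt.append(v)
            frontier = nxt
    return dist

def sp_kernel_weights(adj, gamma=1.0):
    # Hypothetical node-pair weights derived from shortest-path distances:
    # exp(-gamma * d) decays with distance; exp(-inf) -> 0 for unreachable pairs.
    return np.exp(-gamma * shortest_path_lengths(adj))

def kernel_weighted_attention(Q, K, V, W):
    # Scaled dot-product attention with scores modulated elementwise by
    # the structural weight matrix W before the softmax (an assumed scheme).
    scores = (Q @ K.T) / np.sqrt(Q.shape[-1])
    scores = scores * W
    a = np.exp(scores - scores.max(axis=-1, keepdims=True))
    a /= a.sum(axis=-1, keepdims=True)
    return a @ V
```

In this toy form, nearby node pairs keep most of their raw attention score while distant pairs are damped toward zero, so the attention map reflects graph topology rather than feature similarity alone.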