College of Computer and Data Science, Fuzhou University, Fuzhou 350108, China; Key Laboratory of Intelligent Metro, Fujian Province University, Fuzhou 350108, China.
Neural Netw. 2024 Dec;180:106648. doi: 10.1016/j.neunet.2024.106648. Epub 2024 Aug 22.
In multi-view learning, graph-based methods such as the Graph Convolutional Network (GCN) are extensively researched owing to their effective graph-processing capabilities. However, most GCN-based methods require complex preliminary operations such as sparsification, which can introduce additional computation costs and training difficulties. Moreover, as the number of stacked layers increases, most GCNs suffer from the over-smoothing problem, preventing effective use of the GCN's capabilities. In this paper, we propose an attention-based stackable graph convolutional network that captures consistency across views and combines an attention mechanism with the powerful aggregation capability of GCN to effectively mitigate over-smoothing. Specifically, we introduce node self-attention to establish dynamic connections between nodes and generate view-specific representations. To maintain cross-view consistency, a data-driven approach is devised to assign attention weights to views, forming a common representation. Finally, based on residual connections, we apply an attention mechanism to the original projection features to generate layer-specific complementary information, which compensates for the information loss incurred during graph convolution. Comprehensive experimental results demonstrate that the proposed method outperforms other state-of-the-art methods on multi-view semi-supervised tasks.
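The sketch below illustrates, in PyTorch, one possible reading of a single stackable block as described in the abstract: node self-attention builds a dynamic adjacency per view, a data-driven attention weight fuses the view-specific representations into a common one, and an attended residual over the original projected features re-injects layer-specific complementary information. It is a minimal assumption-laden sketch, not the authors' implementation; class and parameter names (NodeSelfAttention, AttentionGCNBlock, res_gate, etc.) are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NodeSelfAttention(nn.Module):
    """Builds a dynamic adjacency from node features via scaled dot-product attention."""
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)

    def forward(self, x):                                  # x: (n_nodes, dim)
        scores = self.query(x) @ self.key(x).T / x.shape[-1] ** 0.5
        return F.softmax(scores, dim=-1)                   # dynamic adjacency (n_nodes, n_nodes)

class AttentionGCNBlock(nn.Module):
    """One stackable block: per-view graph aggregation, data-driven view fusion,
    and an attended residual over the original projected features."""
    def __init__(self, n_views, dim):
        super().__init__()
        self.node_attn = nn.ModuleList(NodeSelfAttention(dim) for _ in range(n_views))
        self.gcn_weight = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_views))
        self.view_score = nn.Linear(dim, 1)                # data-driven attention weight per view
        self.res_gate = nn.Linear(2 * dim, dim)            # attention gate for the residual path

    def forward(self, views, x0):                          # views: list of (n, dim); x0: original projection (n, dim)
        per_view = []
        for v, attn, lin in zip(views, self.node_attn, self.gcn_weight):
            per_view.append(F.relu(lin(attn(v) @ v)))      # view-specific representation
        stacked = torch.stack(per_view, dim=0)             # (n_views, n, dim)
        w = F.softmax(self.view_score(stacked.mean(dim=1)), dim=0)   # (n_views, 1)
        common = (w.unsqueeze(-1) * stacked).sum(dim=0)    # consistent common representation
        gate = torch.sigmoid(self.res_gate(torch.cat([common, x0], dim=-1)))
        return common + gate * x0                          # residual compensates information loss

# Usage: blocks of this kind can be stacked, with each block re-injecting attended
# original features so that deeper aggregation does not collapse node representations.
n, d, n_views = 100, 64, 3
views = [torch.randn(n, d) for _ in range(n_views)]
block = AttentionGCNBlock(n_views, d)
out = block(views, views[0])
print(out.shape)  # torch.Size([100, 64])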