Jiang Bo, Chen Yong, Wang Beibei, Xu Haiyun, Tang Jin
IEEE Trans Neural Netw Learn Syst. 2024 Nov;35(11):16975-16980. doi: 10.1109/TNNLS.2023.3296760. Epub 2024 Oct 29.
Graph convolutional networks (GCNs) have been widely studied for graph data representation and learning. In contrast to traditional convolutional neural networks (CNNs), which employ many different (spatial) convolution filters to obtain rich feature descriptors that encode the complex patterns of image data, GCNs are defined on the input observed graph G(X,A) and usually adopt a single fixed spatial convolution filter for graph feature extraction. This limits the capacity of existing GCNs to encode the complex patterns of graph data. To overcome this issue, inspired by depthwise separable convolution and the DropEdge operation, we first propose to generate multiple graph convolution filters by randomly dropping edges from the input graph A. We then propose a novel graph-dropping convolution layer (GDCLayer) to produce rich feature descriptors for graph data. Using GDCLayer, we finally design a new end-to-end network architecture, the graph-dropping convolutional network (GDCNet), for graph data learning. Experiments on several datasets demonstrate the effectiveness of the proposed GDCNet.
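The core idea described above can be sketched in a few lines of NumPy. This is a minimal, hypothetical illustration, not the authors' implementation: `drop_edges` perturbs the adjacency matrix in the spirit of DropEdge, and `gdc_layer` applies K such randomly dropped filters to the node features and concatenates the results, analogous to how depthwise separable convolution stacks multiple per-channel filters. All function names, the number of filters K, and the drop rate are assumptions for illustration.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, as in standard GCNs.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def drop_edges(A, drop_rate, rng):
    # Randomly zero out a fraction of existing edges (kept symmetric),
    # following the DropEdge idea, to obtain one perturbed filter.
    A = A.copy()
    iu, ju = np.triu_indices_from(A, k=1)
    mask = (A[iu, ju] > 0) & (rng.random(iu.size) < drop_rate)
    A[iu[mask], ju[mask]] = 0.0
    A[ju[mask], iu[mask]] = 0.0
    return A

def gdc_layer(X, A, K=4, drop_rate=0.3, seed=0):
    # Hypothetical GDCLayer sketch: build K edge-dropped filters, apply
    # each to the features X, and concatenate along the feature axis to
    # obtain a richer descriptor (learnable weights omitted for brevity).
    rng = np.random.default_rng(seed)
    outs = [normalize_adj(drop_edges(A, drop_rate, rng)) @ X for _ in range(K)]
    return np.concatenate(outs, axis=1)

# Toy 4-node cycle graph with 3-dimensional node features.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = np.ones((4, 3))
H = gdc_layer(X, A, K=4, drop_rate=0.3)
print(H.shape)  # (4, 12): K=4 filter outputs of width 3, concatenated
```

In a full GDCNet, each filter branch would also carry learnable weights and a nonlinearity, and several such layers would be stacked end to end; the sketch only shows the filter-generation and aggregation step.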