

Fast Haar Transforms for Graph Neural Networks.

Author Affiliations

Department of Educational Technology, Zhejiang Normal University, Jinhua, China; School of Mathematics and Statistics, The University of New South Wales, Sydney, Australia.

Department of Physics, Princeton University, NJ, USA.

Publication Information

Neural Netw. 2020 Aug;128:188-198. doi: 10.1016/j.neunet.2020.04.028. Epub 2020 May 4.

Abstract

Graph Neural Networks (GNNs) have become a topic of intense research recently due to their powerful capability in high-dimensional classification and regression tasks for graph-structured data. However, as GNNs typically define the graph convolution via the orthonormal eigenbasis of the graph Laplacian, they suffer from high computational cost when the graph is large. This paper introduces a Haar basis, a sparse and localized orthonormal system built on a coarse-grained chain on the graph. The graph convolution under the Haar basis, called Haar convolution, can be defined accordingly for GNNs. The sparsity and locality of the Haar basis allow Fast Haar Transforms (FHTs) on the graph, by which one achieves a fast evaluation of the Haar convolution between graph data and filters. We conduct experiments on GNNs equipped with Haar convolution, which demonstrate state-of-the-art results on graph-based regression and node classification tasks.
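The convolution structure the abstract describes can be sketched numerically: transform the graph signal into an orthonormal basis, filter pointwise, then transform back. The sketch below is a minimal illustration only, using a 4-node graph and the classic 1-D Haar matrix as a stand-in for the paper's graph-adapted Haar basis (which is constructed from a coarse-grained chain on the graph); `phi`, `x`, and `g_hat` are hypothetical names, not from the paper.

```python
import numpy as np

# Assumption: a 4-node graph; columns of phi form the normalized 1-D Haar
# basis, standing in for the paper's sparse graph Haar basis.
n = 4
phi = np.array([
    [0.5,  0.5,  1 / np.sqrt(2),  0.0],
    [0.5,  0.5, -1 / np.sqrt(2),  0.0],
    [0.5, -0.5,  0.0,             1 / np.sqrt(2)],
    [0.5, -0.5,  0.0,            -1 / np.sqrt(2)],
])
assert np.allclose(phi.T @ phi, np.eye(n))  # orthonormality check

x = np.array([1.0, 2.0, 3.0, 4.0])          # graph signal, one value per node
g_hat = np.array([1.0, 0.5, 0.25, 0.25])    # learnable filter in the Haar domain

# Haar convolution: analysis transform, pointwise filtering, synthesis transform.
x_hat = phi.T @ x            # adjoint (forward) Haar transform
y = phi @ (g_hat * x_hat)    # filter coefficients, then invert the transform
```

The speedup claimed in the paper comes from the sparsity and locality of the Haar basis: the dense matrix-vector products above can be replaced by Fast Haar Transforms with near-linear cost, whereas a Laplacian eigenbasis is generally dense.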

