

Auto-GNN: Neural architecture search of graph neural networks.

Author Information

Zhou Kaixiong, Huang Xiao, Song Qingquan, Chen Rui, Hu Xia

Affiliations

DATA Lab, Department of Computer Science, Rice University, Houston, TX, United States.

Department of Computing, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China.

Publication Information

Front Big Data. 2022 Nov 17;5:1029307. doi: 10.3389/fdata.2022.1029307. eCollection 2022.

Abstract

Graph neural networks (GNNs) have been widely used in various graph analysis tasks. Because graph characteristics vary significantly across real-world systems, the architecture parameters must be tuned carefully to identify a suitable GNN for a given scenario. Neural architecture search (NAS) has shown its potential for discovering effective architectures in image and language modeling tasks. However, existing NAS algorithms cannot be applied efficiently to the GNN search problem for two reasons. First, the large-step exploration of a traditional controller fails to capture the sensitive performance variations caused by slight architecture modifications in GNNs. Second, the search space is composed of heterogeneous GNNs, which prevents the direct adoption of parameter sharing among them to accelerate the search. To tackle these challenges, we propose the automated graph neural network (AGNN) framework, which aims to find the optimal GNN architecture efficiently. Specifically, a reinforced conservative controller is designed to explore the architecture space with small steps. To accelerate validation, a novel constrained parameter sharing strategy is presented to regularize weight transfer among GNNs; it avoids training from scratch and saves computation time. Experimental results on benchmark datasets demonstrate that the architectures identified by AGNN achieve the best performance and search efficiency compared with existing human-invented models and traditional search methods.
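The abstract outlines two components: a conservative controller that explores the architecture space in small steps, and constrained parameter sharing between structurally compatible GNNs. The snippet below is a minimal, dependency-free sketch of those ideas; the search-space fields, the single-field mutation rule, the shape-compatibility check, and the evaluate placeholder are illustrative assumptions for this sketch, not the authors' implementation (which drives the controller with reinforcement learning rather than random mutation).

```python
import random

# Illustrative search space for a single GNN layer; the fields and value
# sets are assumptions for this sketch, not the paper's exact space.
SEARCH_SPACE = {
    "aggregator": ["sum", "mean", "max"],
    "activation": ["relu", "tanh", "elu"],
    "num_heads": [1, 2, 4, 8],
    "hidden_dim": [16, 64, 128, 256],
}

def random_architecture():
    """Sample one GNN layer description uniformly from the search space."""
    return {field: random.choice(values) for field, values in SEARCH_SPACE.items()}

def conservative_mutation(arch):
    """Small-step exploration: change exactly one field at a time, so the
    performance difference can be attributed to that single change."""
    child = dict(arch)
    field = random.choice(list(SEARCH_SPACE))
    child[field] = random.choice([v for v in SEARCH_SPACE[field] if v != child[field]])
    return child, field

def can_share_weights(parent, child):
    """Constrained parameter sharing (sketch): only transfer weights when the
    trainable tensor shapes match, i.e. hidden_dim and num_heads are unchanged;
    otherwise the child layer would be re-initialized."""
    return (parent["hidden_dim"] == child["hidden_dim"]
            and parent["num_heads"] == child["num_heads"])

def evaluate(arch):
    """Placeholder for validation accuracy; a real run would train or fine-tune
    the sampled GNN on the target graph and report held-out performance."""
    return random.random()

if __name__ == "__main__":
    best = random_architecture()
    best_score = evaluate(best)
    for step in range(20):
        child, changed = conservative_mutation(best)
        shared = can_share_weights(best, child)
        score = evaluate(child)
        print(f"step {step:2d}: changed {changed:10s} share_weights={shared} score={score:.3f}")
        if score > best_score:
            best, best_score = child, score
    print("best architecture:", best, "score:", round(best_score, 3))
```

In the framework described by the abstract, the mutation policy above would be replaced by a learned controller updated with a reinforcement signal, and weight transfer between compatible architectures would replace the training-from-scratch cost hidden inside the evaluate placeholder.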


Figure 1 (from the article): https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7d16/9714572/d6cb5c49bd7d/fdata-05-1029307-g0001.jpg
