Deezer Research, Paris, France; LIX, École Polytechnique, Palaiseau, France.
Deezer Research, Paris, France.
Neural Netw. 2021 Oct;142:1-19. doi: 10.1016/j.neunet.2021.04.015. Epub 2021 Apr 27.
Graph autoencoders (AE) and variational autoencoders (VAE) are powerful node embedding methods, but suffer from scalability issues. In this paper, we introduce FastGAE, a general framework to scale graph AE and VAE to large graphs with millions of nodes and edges. Our strategy, based on an effective stochastic subgraph decoding scheme, significantly speeds up the training of graph AE and VAE while preserving or even improving performance. We demonstrate the effectiveness of FastGAE on various real-world graphs, outperforming the few existing approaches to scale graph AE and VAE by a wide margin.
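The core idea of stochastic subgraph decoding can be illustrated with a minimal sketch: rather than reconstructing the full n × n adjacency matrix at each training step (quadratic in the number of nodes), the decoder is evaluated only on a sampled subgraph. The sketch below is an assumption-laden toy, not the paper's implementation; it uses a standard inner-product decoder and uniform node sampling, whereas the paper's actual sampling distribution may differ.

```python
import numpy as np

def decode(Z):
    """Inner-product decoder: predicted edge probabilities sigmoid(Z Z^T)."""
    return 1.0 / (1.0 + np.exp(-Z @ Z.T))

def subgraph_reconstruction_loss(A, Z, n_s, rng):
    """One subgraph-decoding step (illustrative sketch, not FastGAE itself).

    Instead of decoding the full n x n adjacency matrix (O(n^2) cost),
    sample n_s nodes and compute the reconstruction loss only on the
    induced subgraph (O(n_s^2) cost). Uniform sampling is an assumption
    made here for simplicity.
    """
    n = A.shape[0]
    idx = rng.choice(n, size=n_s, replace=False)  # sampled node set
    A_sub = A[np.ix_(idx, idx)]                   # induced subgraph adjacency
    P_sub = decode(Z[idx])                        # decode only sampled nodes
    eps = 1e-9
    # Binary cross-entropy between sampled adjacency and decoded probabilities.
    return -np.mean(A_sub * np.log(P_sub + eps)
                    + (1 - A_sub) * np.log(1 - P_sub + eps))

# Toy undirected graph with n nodes and random embeddings Z.
rng = np.random.default_rng(0)
n, d, n_s = 1000, 16, 100
A = (rng.random((n, n)) < 0.01).astype(float)
A = np.maximum(A, A.T)  # symmetrize
Z = rng.normal(scale=0.1, size=(n, d))
print(subgraph_reconstruction_loss(A, Z, n_s, rng))
```

Each training iteration would resample the subgraph, so in expectation all node pairs contribute to the loss while each step costs only O(n_s²) decoder evaluations.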