Zhu Chun Jiang, Storandt Sabine, Lam Kam-Yiu, Han Song, Bi Jinbo
University of Connecticut.
University of Konstanz.
Proc Mach Learn Res. 2019 Jun;97:7624-7633.
Graph sparsification has been used to improve the computational cost of learning over graphs, e.g., Laplacian-regularized estimation, graph semi-supervised learning (SSL), and spectral clustering (SC). However, when graphs vary over time, repeated sparsification requires polynomial-order computational cost per update. We propose a new type of graph sparsification, namely fault-tolerant (FT) sparsification, to significantly reduce the cost to only a constant. The computational cost of subsequent graph learning tasks can then be significantly improved with limited loss in their accuracy. In particular, we give a theoretical analysis that upper-bounds the loss in accuracy of the subsequent Laplacian-regularized estimation, graph SSL, and SC due to the FT sparsification. In addition, FT spectral sparsification can be generalized to FT cut sparsification for cut-based graph learning. Extensive experiments confirm the computational efficiency and accuracy of the proposed methods for learning on dynamic graphs.
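For context, below is a minimal sketch of the standard static primitive that spectral sparsifiers are built on: sampling edges in proportion to weight times effective resistance (Spielman and Srivastava). This is not the paper's FT construction; the function names (laplacian, spectral_sparsify) and all parameters are illustrative assumptions, not from the paper.

```python
# Illustrative sketch only: standard spectral sparsification by
# effective-resistance sampling. NOT the paper's fault-tolerant method;
# all names and parameter choices here are assumptions for illustration.
import numpy as np

def laplacian(n, edges, weights):
    """Weighted graph Laplacian L = D - W for an undirected graph."""
    L = np.zeros((n, n))
    for (u, v), w in zip(edges, weights):
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    return L

def spectral_sparsify(n, edges, weights, num_samples, seed=0):
    """Sample edges with probability proportional to w_e * R_e and
    reweight, so the sparsifier's quadratic form x^T L x matches the
    original graph's in expectation."""
    rng = np.random.default_rng(seed)
    weights = np.asarray(weights, dtype=float)
    L_pinv = np.linalg.pinv(laplacian(n, edges, weights))
    # Effective resistance of e = (u, v): (e_u - e_v)^T L^+ (e_u - e_v).
    resist = np.array([L_pinv[u, u] + L_pinv[v, v] - 2.0 * L_pinv[u, v]
                       for u, v in edges])
    probs = weights * resist
    probs = probs / probs.sum()
    counts = rng.multinomial(num_samples, probs)  # N independent edge draws
    kept = counts > 0
    new_weights = weights[kept] * counts[kept] / (num_samples * probs[kept])
    kept_edges = [e for e, k in zip(edges, kept) if k]
    return kept_edges, new_weights

if __name__ == "__main__":
    # Toy check on a dense random graph: far fewer edges, similar x^T L x.
    n = 50
    rng = np.random.default_rng(1)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    weights = rng.uniform(0.5, 1.5, size=len(edges))
    s_edges, s_weights = spectral_sparsify(n, edges, weights, num_samples=600)
    x = rng.standard_normal(n)
    L = laplacian(n, edges, weights)
    Ls = laplacian(n, s_edges, s_weights)
    print(len(edges), "->", len(s_edges), "edges;",
          "x^T L x:", x @ L @ x, "vs", x @ Ls @ x)
```

As the abstract notes, repeating such a static computation after every edge update costs polynomial time per update; that gap is what motivates the fault-tolerant sparsification proposed in the paper.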