
Improved Dynamic Graph Learning through Fault-Tolerant Sparsification.

Authors

Zhu Chun Jiang, Storandt Sabine, Lam Kam-Yiu, Han Song, Bi Jinbo

Affiliations

University of Connecticut.

University of Konstanz.

Publication

Proc Mach Learn Res. 2019 Jun;97:7624-7633.

Abstract

Graph sparsification has been used to improve the computational cost of learning over graphs, e.g., Laplacian-regularized estimation, graph semi-supervised learning (SSL) and spectral clustering (SC). However, when graphs vary over time, repeated sparsification requires polynomial-order computational cost per update. We propose a new type of graph sparsification, namely fault-tolerant (FT) sparsification, to significantly reduce the cost to only a constant. The computational cost of subsequent graph learning tasks can then be significantly improved with limited loss in their accuracy. In particular, we give a theoretical analysis that upper bounds the loss in accuracy of the subsequent Laplacian-regularized estimation, graph SSL and SC due to the FT sparsification. In addition, FT spectral sparsification can be generalized to FT cut sparsification for cut-based graph learning. Extensive experiments have confirmed the computational efficiency and accuracy of the proposed methods for learning on dynamic graphs.
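To make the core idea of sparsification concrete, here is a minimal illustrative sketch of cut sparsification by uniform edge sampling with reweighting, so that every cut weight is preserved in expectation. This is not the paper's FT sparsification (real sparsifiers sample edges with probabilities tied to connectivity or effective resistance, and the FT construction additionally tolerates edge updates); all function names here are hypothetical.

```python
# Illustrative sketch: uniform-sampling cut sparsification.
# Each edge is kept with probability keep_prob and reweighted by
# 1/keep_prob, so the expected weight of every cut is unchanged.
import random


def sparsify(edges, keep_prob, seed=0):
    """edges: dict mapping (u, v) -> weight.
    Returns a sparser reweighted edge dict."""
    rng = random.Random(seed)
    sparse = {}
    for (u, v), w in edges.items():
        if rng.random() < keep_prob:
            sparse[(u, v)] = w / keep_prob
    return sparse


def cut_weight(edges, side):
    """Total weight of edges crossing the cut (side, complement)."""
    return sum(w for (u, v), w in edges.items()
               if (u in side) != (v in side))
```

With `keep_prob = 1.0` the sparsifier returns the input graph unchanged; with smaller probabilities it keeps roughly a `keep_prob` fraction of the edges while preserving cut weights in expectation, which is the property the learning tasks above rely on.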

Similar Articles

2. Deformed graph Laplacian for semisupervised learning. IEEE Trans Neural Netw Learn Syst. 2015 Oct;26(10):2261-74. doi: 10.1109/TNNLS.2014.2376936. Epub 2015 Jan 15.
3. Faster Cut Sparsification of Weighted Graphs. Algorithmica. 2023;85(4):929-964. doi: 10.1007/s00453-022-01053-4. Epub 2022 Nov 1.
5. Ricci Curvature-Based Graph Sparsification for Continual Graph Representation Learning. IEEE Trans Neural Netw Learn Syst. 2024 Dec;35(12):17398-17410. doi: 10.1109/TNNLS.2023.3303454. Epub 2024 Dec 2.
6. Laplacian embedded regression for scalable manifold regularization. IEEE Trans Neural Netw Learn Syst. 2012 Jun;23(6):902-15. doi: 10.1109/TNNLS.2012.2190420.
7. Robust Graph Learning From Noisy Data. IEEE Trans Cybern. 2020 May;50(5):1833-1843. doi: 10.1109/TCYB.2018.2887094. Epub 2019 Jan 8.
8. Hierarchical Representation Learning in Graph Neural Networks With Node Decimation Pooling. IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):2195-2207. doi: 10.1109/TNNLS.2020.3044146. Epub 2022 May 2.
