Suppr 超能文献



Representing Graphs via Gromov-Wasserstein Factorization

Authors

Xu Hongteng, Liu Jiachang, Luo Dixin, Carin Lawrence

Publication

IEEE Trans Pattern Anal Mach Intell. 2023 Jan;45(1):999-1016. doi: 10.1109/TPAMI.2022.3153126. Epub 2022 Dec 5.

DOI: 10.1109/TPAMI.2022.3153126
PMID: 35196227
Abstract

Graph representation is a challenging and significant problem for many real-world applications. In this work, we propose a novel paradigm called "Gromov-Wasserstein Factorization" (GWF) to learn graph representations in a flexible and interpretable way. Given a set of graphs, whose correspondence between nodes is unknown and whose sizes can be different, our GWF model reconstructs each graph by a weighted combination of some "graph factors" under a pseudo-metric called Gromov-Wasserstein (GW) discrepancy. This model leads to a new nonlinear factorization mechanism of the graphs. The graph factors are shared by all the graphs, which represent the typical patterns of the graphs' structures. The weights associated with each graph indicate the graph factors' contributions to the graph's reconstruction, which lead to a permutation-invariant graph representation. We learn the graph factors of the GWF model and the weights of the graphs jointly by minimizing the overall reconstruction error. When learning the model, we reparametrize the graph factors and the weights to unconstrained model parameters and simplify the backpropagation of gradient with the help of the envelope theorem. For the GW discrepancy (the critical training step), we consider two algorithms to compute it, which correspond to the proximal point algorithm (PPA) and Bregman alternating direction method of multipliers (BADMM), respectively. Furthermore, we propose some extensions of the GWF model, including (i) combining with a graph neural network and learning graph representations in an auto-encoding manner, (ii) representing the graphs with node attributes, and (iii) working as a regularizer for semi-supervised graph classification. Experiments on various datasets demonstrate that our GWF model is comparable to the state-of-the-art methods. The graph representations derived by it perform well in graph clustering and classification tasks.
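The abstract's critical training step is computing the GW discrepancy between two graphs with a proximal point algorithm (PPA). The sketch below illustrates that idea only: it is not the authors' implementation, and the function name, the uniform node weights, and the proximal step size `gamma` are assumptions made for the example. Each outer iteration linearizes the quadratic GW loss at the current coupling and solves an entropic OT problem whose KL prior is that coupling, via Sinkhorn scaling.

```python
import numpy as np

def gw_discrepancy_ppa(C1, C2, gamma=0.1, outer=20, inner=50):
    """Minimal sketch: GW discrepancy between two graphs via PPA.

    C1, C2: symmetric cost matrices (e.g. adjacency or shortest-path
    distances) of two graphs whose sizes may differ.
    Returns (loss, T) where T is the learned soft node correspondence.
    """
    n, m = len(C1), len(C2)
    p, q = np.full(n, 1.0 / n), np.full(m, 1.0 / m)   # uniform node weights
    T = np.outer(p, q)                                 # independent coupling
    # Constant part of the square-loss GW objective (depends only on marginals).
    const = (C1 ** 2) @ p[:, None] @ np.ones((1, m)) \
          + np.ones((n, 1)) @ (q[None, :] @ (C2 ** 2).T)
    for _ in range(outer):
        grad = const - 2.0 * C1 @ T @ C2.T             # gradient of GW loss at T
        K = np.exp(-grad / gamma) * T                  # proximal step: KL prior = current T
        a = np.ones(n)
        for _ in range(inner):                         # Sinkhorn scaling to marginals p, q
            b = q / (K.T @ a)
            a = p / (K @ b)
        T = a[:, None] * K * b[None, :]
    loss = float(np.sum((const - 2.0 * C1 @ T @ C2.T) * T))
    return loss, T
```

In the full GWF model this routine would sit inside the reconstruction loss, with each graph matched against a weighted combination of learned graph factors; the paper's BADMM variant replaces the inner Sinkhorn solver with Bregman ADMM updates.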


Similar Articles

1. Representing Graphs via Gromov-Wasserstein Factorization.
   IEEE Trans Pattern Anal Mach Intell. 2023 Jan;45(1):999-1016. doi: 10.1109/TPAMI.2022.3153126. Epub 2022 Dec 5.
2. Wasserstein Discriminant Dictionary Learning for Graph Representation.
   IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):8619-8635. doi: 10.1109/TPAMI.2024.3409772. Epub 2024 Nov 6.
3. muxGNN: Multiplex Graph Neural Network for Heterogeneous Graphs.
   IEEE Trans Pattern Anal Mach Intell. 2023 Sep;45(9):11067-11078. doi: 10.1109/TPAMI.2023.3263079. Epub 2023 Aug 7.
4. CommPOOL: An interpretable graph pooling framework for hierarchical graph representation learning.
   Neural Netw. 2021 Nov;143:669-677. doi: 10.1016/j.neunet.2021.07.028. Epub 2021 Jul 29.
5. MGLNN: Semi-supervised learning via Multiple Graph Cooperative Learning Neural Networks.
   Neural Netw. 2022 Sep;153:204-214. doi: 10.1016/j.neunet.2022.05.024. Epub 2022 Jun 3.
6. KHGCN: Knowledge-Enhanced Recommendation with Hierarchical Graph Capsule Network.
   Entropy (Basel). 2023 Apr 20;25(4):697. doi: 10.3390/e25040697.
7. Self-supervised contrastive graph representation with node and graph augmentation.
   Neural Netw. 2023 Oct;167:223-232. doi: 10.1016/j.neunet.2023.08.039. Epub 2023 Aug 24.
8. Patient Representation Learning From Heterogeneous Data Sources and Knowledge Graphs Using Deep Collective Matrix Factorization: Evaluation Study.
   JMIR Med Inform. 2022 Jan 20;10(1):e28842. doi: 10.2196/28842.
9. Factorization and pseudofactorization of weighted graphs.
   Discrete Appl Math. 2023 Oct 15;337:81-105. doi: 10.1016/j.dam.2023.04.019. Epub 2023 May 8.
10. Augmented Graph Neural Network with hierarchical global-based residual connections.
    Neural Netw. 2022 Jun;150:149-166. doi: 10.1016/j.neunet.2022.03.008. Epub 2022 Mar 10.

Cited By

1. Biolinguistic graph fusion model for circRNA-miRNA association prediction.
   Brief Bioinform. 2024 Jan 22;25(2). doi: 10.1093/bib/bbae058.