An improved multi-view attention network inspired by coupled P system for node classification.

Affiliation

Business School, Shandong Normal University, Jinan, China.

Publication Information

PLoS One. 2022 Apr 28;17(4):e0267565. doi: 10.1371/journal.pone.0267565. eCollection 2022.

Abstract

Most existing graph embedding methods describe single-view networks and model only a single type of relation. Real-world networks, however, consist of multiple views with complex relationships, which these methods cannot capture. To address this problem, we propose a novel multi-view attention network inspired by the coupled P system (MVAN-CP) for node classification. Specifically, we design a multi-view attention network that extracts rich information from the multiple views of a network and learns a representation for each view. To allow the views to collaborate, we further apply an attention mechanism to guide the view fusion process. Taking advantage of the maximal parallelism of P systems, both the learning and the fusion processes are realized within the coupled P system, which greatly improves computational efficiency. Experiments on real-world network datasets show that our model is effective.
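The abstract describes attention-based fusion of per-view node representations. The following PyTorch sketch illustrates that general idea only; it is not the paper's MVAN-CP implementation, and the scoring MLP, layer sizes, and the class name AttentionViewFusion are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionViewFusion(nn.Module):
    """Illustrative attention-based fusion of per-view node embeddings.

    A sketch of the general technique, not the authors' exact architecture:
    each view's embeddings are scored by a small MLP, the scores are
    softmax-normalized across views, and the views are combined as a
    weighted sum.
    """

    def __init__(self, embed_dim: int, hidden_dim: int = 64):
        super().__init__()
        # Small MLP producing one attention score per view embedding.
        self.score = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1, bias=False),
        )

    def forward(self, view_embeddings: torch.Tensor) -> torch.Tensor:
        # view_embeddings: (num_views, num_nodes, embed_dim)
        # Average per-node scores within each view -> one score per view.
        scores = self.score(view_embeddings).mean(dim=1)      # (num_views, 1)
        weights = F.softmax(scores, dim=0).unsqueeze(-1)      # (num_views, 1, 1)
        # Weighted sum over views -> one fused embedding per node.
        return (weights * view_embeddings).sum(dim=0)         # (num_nodes, embed_dim)


if __name__ == "__main__":
    fusion = AttentionViewFusion(embed_dim=16)
    views = torch.randn(3, 100, 16)   # 3 views, 100 nodes, 16-dim embeddings
    fused = fusion(views)
    print(fused.shape)                # torch.Size([100, 16])
```

In this sketch the attention weights are shared across nodes (one weight per view); a per-node weighting, as some multi-view models use, would skip the averaging step and normalize the scores node-wise instead.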

Figure (pone.0267565.g001): https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8c42/9049499/ed601ff22d7a/pone.0267565.g001.jpg
