Neural extraction of multiscale essential structure for network dismantling.

Affiliations

School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, 430074, China.

Publication information

Neural Netw. 2022 Oct;154:99-108. doi: 10.1016/j.neunet.2022.07.015. Epub 2022 Jul 16.

Abstract

Diverse real-world systems can be abstracted as complex networks consisting of nodes and edges as functional components. Percolation theory has shown that the failure of a few nodes can lead to the collapse of a whole network, which raises the network dismantling problem: how to select the smallest number of nodes whose removal decomposes a network into disconnected components, each smaller than a predefined threshold? Because the problem is NP-hard, many heuristic approaches have been proposed to measure and rank each node according to its importance to network structural stability. However, these measures take a uniscale viewpoint, regarding a complex network as a flattened topology. In this article, we argue that nodes' structural importance can be measured at different scales of network topology. Building on recent deep learning techniques, we propose a self-supervised-learning-based network dismantling framework (NEES), which hierarchically merges compact substructures to convert a network into a coarser one with fewer nodes and edges. During the merging process, we design neural models to extract essential structures and utilize self-attention mechanisms to learn nodes' importance hierarchy at each scale. Experiments on real-world networks and synthetic model networks show that the proposed NEES outperforms state-of-the-art schemes in most cases, removing the fewest target nodes to dismantle a network. The dismantling effectiveness of our neural extraction framework also highlights the emerging role of multi-scale essential structures.
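The dismantling objective stated in the abstract can be made concrete with a short sketch. The snippet below is not the authors' NEES code; it only illustrates the problem setup using networkx, an assumed component-size threshold, and a simple uniscale degree-based baseline of the kind the paper compares against.

```python
import networkx as nx


def is_dismantled(G, removed_nodes, threshold):
    """True if deleting `removed_nodes` leaves every connected
    component of G with at most `threshold` nodes."""
    H = G.copy()
    H.remove_nodes_from(removed_nodes)
    return all(len(c) <= threshold for c in nx.connected_components(H))


def degree_baseline_dismantling(G, threshold):
    """Uniscale baseline (illustrative, not NEES): repeatedly remove the
    highest-degree node until all remaining components are small enough."""
    H = G.copy()
    removed = []
    while any(len(c) > threshold for c in nx.connected_components(H)):
        node = max(H.degree, key=lambda kv: kv[1])[0]
        H.remove_node(node)
        removed.append(node)
    return removed


if __name__ == "__main__":
    # Synthetic model network; the size, parameters, and 1% threshold
    # are illustrative assumptions, not the paper's experimental setup.
    G = nx.barabasi_albert_graph(1000, 3, seed=42)
    threshold = int(0.01 * G.number_of_nodes())
    removed = degree_baseline_dismantling(G, threshold)
    print(f"Removed {len(removed)} nodes; dismantled:",
          is_dismantled(G, removed, threshold))
```

A dismantling method is judged by how few nodes it must remove to satisfy this check; NEES ranks candidate nodes via multi-scale coarsening and self-attention rather than a single flat measure such as degree.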
