Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:30-35. doi: 10.1109/EMBC48229.2022.9871984.
Graph neural networks (GNNs) are an emerging framework in the deep learning community. In most GNN applications, the graph topology of the data samples is provided with the dataset. Specifically, the graph shift operator (GSO), which may be the adjacency matrix, the graph Laplacian, or one of their normalizations, is known a priori. However, we often have no knowledge of the ground-truth graph topology underlying real-world datasets. One example is extracting subject-invariant features from physiological electroencephalogram (EEG) signals to predict a cognitive task. Previous methods represent each electrode site as a node in the graph and connect the nodes in various ways to hand-engineer a GSO, e.g., i) every pair of electrode sites is connected to form a complete graph, ii) each electrode site is connected to a fixed number of others to form a k-nearest-neighbor graph, or iii) a pair of electrode sites is connected only if their Euclidean distance falls within a heuristic threshold. In this paper, we overcome this limitation by parameterizing the GSO with a multi-head attention mechanism to explore the functional neural connectivity between electrode sites under a cognitive task, learning the graph topology in an unsupervised manner jointly with the parameters of the graph convolutional kernels.
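The three hand-engineered GSO constructions listed above can be sketched concretely. The following is a minimal NumPy illustration, not code from the paper: `coords` stands for 3-D electrode positions, and the function names (`complete_graph`, `knn_graph`, `threshold_graph`, `normalized_laplacian`) are hypothetical helpers introduced here for clarity.

```python
import numpy as np

def complete_graph(n):
    # i) every pair of electrode sites connected, no self-loops
    return np.ones((n, n)) - np.eye(n)

def knn_graph(coords, k):
    # ii) connect each electrode site to its k nearest neighbours,
    # then symmetrize so the adjacency is undirected
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-distances
    A = np.zeros_like(d)
    nearest = np.argsort(d, axis=1)[:, :k]
    rows = np.repeat(np.arange(len(coords)), k)
    A[rows, nearest.ravel()] = 1.0
    return np.maximum(A, A.T)

def threshold_graph(coords, tau):
    # iii) connect a pair only if its Euclidean distance is within tau
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = (d <= tau).astype(float)
    np.fill_diagonal(A, 0.0)
    return A

def normalized_laplacian(A):
    # one common GSO normalization: L = I - D^{-1/2} A D^{-1/2}
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    mask = deg > 0
    d_inv_sqrt[mask] = deg[mask] ** -0.5
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
```

Each variant commits to a fixed, heuristic topology before training, which is exactly the limitation the paper addresses.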
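The core idea of parameterizing the GSO with multi-head attention can be sketched as follows. This is a simplified NumPy illustration under stated assumptions, not the paper's implementation: each head scores pairwise electrode affinity from node features via learned query/key projections, a row-wise softmax turns the scores into a stochastic GSO, and the heads are averaged before a single graph-convolution step; in the actual model these projections are trained jointly with the graph convolutional kernels.

```python
import numpy as np

def attention_gso(X, Wq, Wk):
    # X: (n_nodes, n_features) node signals; Wq, Wk: learned projections.
    # Scaled dot-product scores, softmax-normalized per row, yield a
    # learned graph shift operator instead of a hand-engineered one.
    Q, K = X @ Wq, X @ Wk
    scores = (Q @ K.T) / np.sqrt(Wq.shape[1])
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(scores)
    return e / e.sum(axis=1, keepdims=True)

def multi_head_gso_conv(X, heads, Wout):
    # heads: list of (Wq, Wk) pairs; average the per-head GSOs into one
    # operator S, then apply one graph-convolution step S X Wout.
    S = np.mean([attention_gso(X, Wq, Wk) for Wq, Wk in heads], axis=0)
    return S @ X @ Wout
```

Because the attention weights are differentiable, the graph topology is discovered from the EEG signals themselves during training rather than fixed by a distance heuristic.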