
Word Embedding Distribution Propagation Graph Network for Few-Shot Learning.

Affiliation

College of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China.

Publication

Sensors (Basel). 2022 Mar 30;22(7):2648. doi: 10.3390/s22072648.

Abstract

Few-shot learning (FSL) is of great significance to the field of machine learning. The ability to learn and generalize from only a small number of samples remains an obvious distinction between humans and artificial intelligence. In the FSL domain, most graph neural networks (GNNs) focus on transferring labeled sample information to unlabeled query samples, ignoring the important role of semantic information during classification. Our proposed method embeds the semantic information of classes into a GNN, creating a word embedding distribution propagation graph network (WPGN) for FSL. We fuse an attention mechanism with the backbone network, use the Mahalanobis distance to compute class similarity, select the Funnel ReLU (FReLU) function as the activation function of the Transform layer, and update the point graph and the word embedding distribution graph. In extensive experiments on FSL benchmarks, compared with the baseline model, the accuracy of the WPGN on 5-way 1/2/5-shot tasks increased by 9.03%, 4.56%, and 4.15%, respectively.
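
The abstract names two concrete components: a Mahalanobis distance used to measure class similarity and the FReLU activation in the Transform layer. The sketch below is a minimal, hypothetical PyTorch illustration of both under stated assumptions, not the authors' implementation; the shared-covariance estimate in mahalanobis_similarity and all function and parameter names are assumptions introduced for illustration.

```python
import torch
import torch.nn as nn


class FReLU(nn.Module):
    """Funnel ReLU: y = max(x, T(x)), where T is a depthwise 3x3 conv + BN.
    The abstract states FReLU is used as the Transform layer's activation."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3,
                              padding=1, groups=channels, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        return torch.max(x, self.bn(self.conv(x)))


def mahalanobis_similarity(query, support, eps=1e-6):
    """Negative squared Mahalanobis distance between query embeddings and
    class prototypes (one way to turn the distance into a similarity score).

    Assumption: a single covariance matrix is estimated from the support set;
    the paper may estimate it differently.

    query:   (Q, D) query embeddings
    support: (N, K, D) support embeddings (N classes, K shots per class)
    returns: (Q, N) similarity scores
    """
    N, K, D = support.shape
    prototypes = support.mean(dim=1)                       # (N, D) class means
    centered = (support - prototypes.unsqueeze(1)).reshape(-1, D)
    cov = centered.t() @ centered / max(N * K - 1, 1)      # (D, D) covariance
    cov_inv = torch.linalg.inv(cov + eps * torch.eye(D, device=support.device))
    diff = query.unsqueeze(1) - prototypes.unsqueeze(0)    # (Q, N, D)
    dist2 = torch.einsum('qnd,de,qne->qn', diff, cov_inv, diff)
    return -dist2
```

In the 1-shot case the support covariance degenerates to zero, so the eps regularizer keeps the inverse well defined and the similarity reduces to a scaled Euclidean distance to each prototype.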

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cad6/9002792/9d1264bff66f/sensors-22-02648-g001.jpg
