
Embedding Learning.

Authors

Dai Ben, Shen Xiaotong, Wang Junhui

Affiliations

School of Statistics, University of Minnesota, Minneapolis, MN.

School of Data Science, City University of Hong Kong, Kowloon, Hong Kong.

Publication

J Am Stat Assoc. 2022;117(537):307-319. doi: 10.1080/01621459.2020.1775614. Epub 2020 Jul 20.

Abstract

Numerical embedding has become one standard technique for processing and analyzing unstructured data that cannot be expressed in a predefined fashion. It stores the main characteristics of data by mapping it onto a numerical vector. An embedding is often unsupervised and constructed by transfer learning from large-scale unannotated data. Given an embedding, a downstream learning method, referred to as a two-stage method, is applicable to unstructured data. In this article, we introduce a novel framework of embedding learning to deliver a higher learning accuracy than the two-stage method while identifying an optimal learning-adaptive embedding. In particular, we propose a concept of -minimal sufficient learning-adaptive embeddings, based on which we seek an optimal one to maximize the learning accuracy subject to an embedding constraint. Moreover, when specializing the general framework to classification, we derive a graph embedding classifier based on a hyperlink tensor representing multiple hypergraphs, directed or undirected, characterizing multi-way relations of unstructured data. Numerically, we design algorithms based on blockwise coordinate descent and projected gradient descent to implement linear and feed-forward neural network classifiers, respectively. Theoretically, we establish a learning theory to quantify the generalization error of the proposed method. Moreover, we show, in linear regression, that the one-hot encoder is more preferable among two-stage methods, yet its dimension restriction hinders its predictive performance. For a graph embedding classifier, the generalization error matches up to the standard fast rate or the parametric rate for linear or nonlinear classification. Finally, we demonstrate the utility of the classifiers on two benchmarks in grammatical classification and sentiment analysis. Supplementary materials for this article are available online.
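
As a rough, self-contained illustration of the joint-optimization idea described above (fitting the embedding and the downstream predictor together under an embedding constraint), the sketch below trains a linear classifier on mean-pooled token embeddings by projected gradient descent, projecting the embedding matrix onto a Frobenius-norm ball after each step. The synthetic data, the mean-pooling representation, the logistic loss, and the norm-ball constraint are all assumptions chosen for brevity; this is not the paper's estimator, its hypergraph-based classifier, or its algorithms.

```python
# Minimal illustrative sketch (NOT the paper's method): jointly fit a token
# embedding matrix E and a linear classifier w by projected gradient descent,
# with a Frobenius-norm ball standing in for the embedding constraint.
# The synthetic data, pooling, loss, and constraint are all assumptions.
import numpy as np

rng = np.random.default_rng(0)
V, d, N, L = 50, 8, 400, 6                 # vocab size, embedding dim, #docs, tokens per doc
docs = rng.integers(0, V, size=(N, L))     # each "document" is L token ids
y = np.where((docs < 10).any(axis=1), 1.0, -1.0)  # label: does any "positive" token appear?

E = 0.1 * rng.standard_normal((V, d))      # learning-adaptive embedding (updated during training)
w = np.zeros(d)                            # linear classifier on pooled document embeddings
radius, lr = 5.0, 0.5                      # embedding-constraint radius and step size

def pooled(E, docs):
    return E[docs].mean(axis=1)            # mean-pool token embeddings into one vector per doc

for _ in range(300):
    X = pooled(E, docs)                    # N x d document representations
    margins = y * (X @ w)
    s = 1.0 / (1.0 + np.exp(np.clip(margins, -30, 30)))   # logistic-loss derivative factor
    grad_w = -(s * y) @ X / N
    grad_X = -(s * y)[:, None] * w / (L * N)               # gradient w.r.t. each pooled vector
    grad_E = np.zeros_like(E)
    np.add.at(grad_E, docs.reshape(-1), np.repeat(grad_X, L, axis=0))  # scatter to token rows
    w -= lr * grad_w
    E -= lr * grad_E
    norm = np.linalg.norm(E)               # projection step: keep E inside the Frobenius ball
    if norm > radius:
        E *= radius / norm

print(f"training accuracy: {np.mean(np.sign(pooled(E, docs) @ w) == y):.2f}")
```

In a two-stage pipeline, E would be frozen after unsupervised pretraining and only w would be fit; here both are updated jointly under the constraint, which is the distinction the abstract draws between embedding learning and two-stage methods.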

Similar Articles

1
Embedding Learning.
J Am Stat Assoc. 2022;117(537):307-319. doi: 10.1080/01621459.2020.1775614. Epub 2020 Jul 20.
3
Interactive Dual Attention Network for Text Sentiment Classification.
Comput Intell Neurosci. 2020 Nov 3;2020:8858717. doi: 10.1155/2020/8858717. eCollection 2020.
7
Unsupervised Graph Embedding via Adaptive Graph Learning.
IEEE Trans Pattern Anal Mach Intell. 2023 Apr;45(4):5329-5336. doi: 10.1109/TPAMI.2022.3202158. Epub 2023 Mar 7.
9
Learning Flexible Graph-Based Semi-Supervised Embedding.
IEEE Trans Cybern. 2016 Jan;46(1):206-18. doi: 10.1109/TCYB.2015.2399456. Epub 2015 Feb 26.

Cited By

3
Coupled generation.
J Am Stat Assoc. 2022;117(539):1243-1253. doi: 10.1080/01621459.2020.1844719. Epub 2021 Jan 4.

References

1
Error bounds for approximations with deep ReLU networks.
Neural Netw. 2017 Oct;94:103-114. doi: 10.1016/j.neunet.2017.07.002. Epub 2017 Jul 13.
4
The BioGRID Interaction Database: 2011 update.
Nucleic Acids Res. 2011 Jan;39(Database issue):D698-704. doi: 10.1093/nar/gkq1116. Epub 2010 Nov 11.
5
Nonlinear dimensionality reduction by locally linear embedding.
Science. 2000 Dec 22;290(5500):2323-6. doi: 10.1126/science.290.5500.2323.
