Dai Ben, Shen Xiaotong, Wang Junhui
School of Statistics, University of Minnesota, Minneapolis, MN.
School of Data Science, City University of Hong Kong, Kowloon, Hong Kong.
J Am Stat Assoc. 2022;117(537):307-319. doi: 10.1080/01621459.2020.1775614. Epub 2020 Jul 20.
Numerical embedding has become a standard technique for processing and analyzing unstructured data that cannot be expressed in a predefined fashion. It stores the main characteristics of the data by mapping them onto a numerical vector. An embedding is often unsupervised and constructed by transfer learning from large-scale unannotated data. Given an embedding, a downstream learning method, referred to as a two-stage method, is applicable to unstructured data. In this article, we introduce a novel framework of embedding learning that delivers higher learning accuracy than the two-stage method while identifying an optimal learning-adaptive embedding. In particular, we propose the concept of ε-minimal sufficient learning-adaptive embeddings, based on which we seek an optimal embedding that maximizes the learning accuracy subject to an embedding constraint. Moreover, when specializing the general framework to classification, we derive a graph embedding classifier based on a hyperlink tensor representing multiple hypergraphs, directed or undirected, that characterize multi-way relations of unstructured data. Numerically, we design algorithms based on blockwise coordinate descent and projected gradient descent to implement linear and feed-forward neural network classifiers, respectively. Theoretically, we establish a learning theory to quantify the generalization error of the proposed method. Moreover, we show, in linear regression, that the one-hot encoder is preferable among two-stage methods, yet its dimension restriction hinders its predictive performance. For the graph embedding classifier, the generalization error matches the standard fast rate or the parametric rate for linear or nonlinear classification. Finally, we demonstrate the utility of the classifiers on two benchmarks in grammatical classification and sentiment analysis. Supplementary materials for this article are available online.
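To make the contrast in the abstract concrete, the following is a minimal, hypothetical sketch of a two-stage method (a fixed, pretrained embedding followed by a downstream classifier) versus a learning-adaptive setup in which the embedding is trained jointly with the classifier by gradient descent. It is not the authors' implementation; all names, dimensions, and the synthetic data are illustrative assumptions.

```python
# Sketch only: "two-stage" baseline (frozen embedding + downstream classifier)
# versus joint embedding learning (embedding updated with the classifier).
# Dimensions, data, and model choices are hypothetical, not from the paper.
import torch
import torch.nn as nn

vocab_size, embed_dim, seq_len, n_classes = 1000, 16, 8, 2

def make_model(freeze_embedding: bool) -> nn.Module:
    embedding = nn.Embedding(vocab_size, embed_dim)
    # Two-stage method: the embedding is fixed a priori (e.g., pretrained,
    # unsupervised) and only the downstream classifier is trained.
    embedding.weight.requires_grad = not freeze_embedding
    return nn.Sequential(
        embedding,                              # map token ids to numerical vectors
        nn.Flatten(),                           # concatenate token embeddings
        nn.Linear(embed_dim * seq_len, n_classes),  # downstream linear classifier
    )

two_stage = make_model(freeze_embedding=True)   # embedding held fixed
adaptive  = make_model(freeze_embedding=False)  # embedding learned with the task

# Synthetic data: 32 token sequences of length 8 with binary labels.
x = torch.randint(0, vocab_size, (32, seq_len))
y = torch.randint(0, n_classes, (32,))
loss_fn = nn.CrossEntropyLoss()

for model in (two_stage, adaptive):
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.SGD(params, lr=0.1)
    opt.zero_grad()
    loss = loss_fn(model(x), y)  # task loss drives the adaptive embedding
    loss.backward()
    opt.step()
```

In this sketch only the trainable parameters differ between the two models: the two-stage variant updates the classifier alone, whereas the adaptive variant also updates the embedding table, which is the distinction the proposed framework formalizes and optimizes over.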