

Unsupervised multi-sense language models for natural language processing tasks.

Author Information

Roh Jihyeon, Park Sungjin, Kim Bo-Kyeong, Oh Sang-Hoon, Lee Soo-Young

Affiliations

School of Electrical Engineering and Institute for Artificial Intelligence, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea.

Information & Electronics Research Institute, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea.

Publication Information

Neural Netw. 2021 Oct;142:397-409. doi: 10.1016/j.neunet.2021.05.023. Epub 2021 May 25.

Abstract

Existing language models (LMs) represent each word with only a single representation, which is unsuitable for processing words with multiple meanings. This issue is often compounded by the scarcity of large-scale data annotated with word meanings. In this paper, we propose a sense-aware framework that can process multi-sense word information without relying on annotated data. In contrast to existing multi-sense representation models, which handle information in a restricted context, our framework provides context representations encoded without discarding word-order information or long-term dependencies. The proposed framework consists of a context representation stage that encodes the variable-size context, a sense-labeling stage that uses unsupervised clustering to infer a probable sense for a word in each context, and a multi-sense LM (MSLM) learning stage that learns the multi-sense representations. In particular, for evaluating MSLMs with different vocabulary sizes, we propose a new metric, unigram-normalized perplexity (PPLu), which can also be understood as the negated mutual information between a word and its context. We also provide a theoretical verification of PPLu under changes in vocabulary size. In addition, we adopt a method for estimating the number of senses that does not require a further hyperparameter search for LM performance. For the LMs in our framework, we adopt both unidirectional and bidirectional architectures based on long short-term memory (LSTM) and Transformers. We conduct comprehensive experiments on three language modeling datasets to perform quantitative and qualitative comparisons of various LMs. Our MSLM outperforms single-sense LMs (SSLMs) with the same network architecture and parameters, and it also performs better on several downstream natural language processing tasks in the General Language Understanding Evaluation (GLUE) and SuperGLUE benchmarks.
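
As a rough illustration of the PPLu idea described in the abstract, the sketch below computes a unigram-normalized perplexity from per-token log-probabilities. It assumes that PPLu divides each conditional probability by the word's unigram probability, so that log PPLu equals the negated average pointwise mutual information between words and their contexts; the function name and exact normalization are assumptions for illustration, not the paper's definition.

```python
import math

def unigram_normalized_perplexity(logp_context, logp_unigram):
    """Sketch of a PPLu-style metric (assumed form, not the paper's exact definition).

    logp_context: list of log P(w_i | context_i) from the language model
    logp_unigram: list of log P(w_i) from a unigram model over the same tokens
    """
    assert len(logp_context) == len(logp_unigram)
    n = len(logp_context)
    # Average pointwise mutual information between each word and its context.
    avg_pmi = sum(c - u for c, u in zip(logp_context, logp_unigram)) / n
    # Exponentiating the negated average PMI gives a perplexity normalized
    # by the unigram baseline rather than by raw probabilities alone.
    return math.exp(-avg_pmi)

# Example: two tokens whose contextual probability exceeds the unigram baseline.
print(unigram_normalized_perplexity([-1.0, -2.0], [-3.0, -4.0]))  # < 1: the LM beats the unigram model
```

Under this reading, values below 1 indicate that the model's contextual predictions outperform the unigram baseline, which is what makes the metric comparable across different vocabulary sizes, as the abstract motivates.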
