
Fine-Tuning Word Embeddings for Hierarchical Representation of Data Using a Corpus and a Knowledge Base for Various Machine Learning Applications.

Affiliations

Department of Computer Science, College of Computer, Qassim University, Buraydah, Saudi Arabia.

Department of Computer Science, University of Liverpool, Liverpool, UK.

Publication Information

Comput Math Methods Med. 2021 Nov 16;2021:9761163. doi: 10.1155/2021/9761163. eCollection 2021.

Abstract

Word embedding models have recently shown some capability to encode hierarchical information that exists in textual data. However, such models do not explicitly encode the hierarchical structure that exists among words. In this work, we propose a method to learn hierarchical word embeddings (HWEs) in a specific order to encode the hierarchical information of a knowledge base (KB) in a vector space. To learn the word embeddings, our proposed method considers not only the hypernym relations that exist between words in a KB but also contextual information in a text corpus. The experimental results on various applications, such as supervised and unsupervised hypernymy detection, graded lexical entailment prediction, hierarchical path prediction, and word reconstruction tasks, show the ability of the proposed method to encode the hierarchy. Moreover, the proposed method outperforms previously proposed methods for learning nonspecialised, hypernym-specific, and hierarchical word embeddings on multiple benchmarks.
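The abstract describes a joint learning setup in which word vectors are fitted to both corpus co-occurrence and hypernym relations taken from a knowledge base. The sketch below is only a toy illustration of that general idea, not the authors' HWE objective: the vocabulary, training pairs, loss terms (skip-gram-style corpus updates plus a hypernym-attraction and norm-shrinkage term), and hyperparameters are all assumptions made for the example.

```python
# A minimal, illustrative sketch of the idea in the abstract: jointly fitting
# word vectors to (a) corpus co-occurrence pairs and (b) KB hypernym pairs.
# This is NOT the authors' HWE objective; the toy vocabulary, pair data,
# loss form, and hyperparameters are invented purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["animal", "mammal", "dog", "cat", "vehicle", "car"]
idx = {w: i for i, w in enumerate(vocab)}
dim = 16
E = rng.normal(scale=0.1, size=(len(vocab), dim))  # word embedding matrix

# Toy corpus (target, context) pairs and toy KB (hyponym, hypernym) pairs.
corpus_pairs = [("dog", "mammal"), ("cat", "mammal"), ("car", "vehicle")]
kb_pairs = [("dog", "animal"), ("cat", "animal"),
            ("mammal", "animal"), ("car", "vehicle")]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, lam = 0.1, 0.5  # step size and weight of the KB (hierarchy) term
for epoch in range(200):
    # Corpus term: skip-gram-with-negative-sampling style updates that pull
    # co-occurring words together and push one random negative sample away.
    for w, c in corpus_pairs:
        neg = int(rng.integers(len(vocab)))
        for tgt, sign in ((idx[c], 1.0), (neg, -1.0)):
            ew, et = E[idx[w]].copy(), E[tgt].copy()
            grad = sign * (1.0 - sigmoid(sign * (ew @ et)))
            E[idx[w]] += lr * grad * et
            E[tgt] += lr * grad * ew
    # KB term: pull hyponym/hypernym pairs together and mildly shrink hypernym
    # norms, a common proxy for "more general" words in a flat vector space.
    for hypo, hyper in kb_pairs:
        diff = E[idx[hypo]] - E[idx[hyper]]
        E[idx[hypo]] -= lr * lam * diff
        E[idx[hyper]] += lr * lam * 0.5 * diff
        E[idx[hyper]] *= 1.0 - lr * 0.01

# Inspect a crude hierarchy signal: vector norms of a hypernym vs. its hyponym.
print("||animal|| =", round(float(np.linalg.norm(E[idx["animal"]])), 3),
      " ||dog|| =", round(float(np.linalg.norm(E[idx["dog"]])), 3))
```

In the paper's actual setting, the hypernym pairs would come from a knowledge base such as WordNet and the contextual term from a large text corpus; the evaluation tasks listed in the abstract (hypernymy detection, graded lexical entailment, hierarchical path prediction, word reconstruction) then score word pairs using the learned vectors.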

Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1667/8610673/da43f950b989/CMMM2021-9761163.001.jpg
