School of Information and Communication, Guilin University of Electronic Technology, Guilin, Guangxi 541004, China.
School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin, Guangxi 541004, China.
Comput Intell Neurosci. 2019 Jul 14;2019:6789520. doi: 10.1155/2019/6789520. eCollection 2019.
Relation extraction is a critical task underlying textual understanding. However, existing methods have shortcomings in instance selection and lack background knowledge for entity recognition. In this paper, we propose a knowledge-based attention model that makes full use of supervised information from a knowledge base to select entities. We also design a dual convolutional neural network (CNN) method, motivated by the observation that the word embedding of each word is limited when it is produced by a single training tool. The proposed model combines a CNN with an attention mechanism: it feeds the word embeddings and the supervised information from the knowledge base into the CNN, performs convolution and pooling, and combines the knowledge base and CNN features in the fully connected layer. Through these steps, the model not only obtains better entity representations but also improves the performance of relation extraction with the help of rich background knowledge. The experimental results demonstrate that the proposed model achieves competitive performance.
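The pipeline described in the abstract (word embeddings into a CNN, convolution and pooling, knowledge-based attention, and combination with the knowledge base at the fully connected layer) can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation; all dimensions, the random inputs, and the use of dot-product similarity with a KB relation embedding as the attention score are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical dimensions (not taken from the paper)
seq_len, emb_dim, n_filters, win, n_rel = 10, 50, 64, 3, 5

# Assumed inputs: sentence word embeddings and a KB relation embedding
sent = rng.standard_normal((seq_len, emb_dim))
kb_rel = rng.standard_normal(n_filters)

# Convolution over sliding word windows (one filter bank, tanh activation)
W = rng.standard_normal((n_filters, win * emb_dim)) * 0.1
windows = np.stack([sent[i:i + win].ravel()
                    for i in range(seq_len - win + 1)])
conv = np.tanh(windows @ W.T)           # (seq_len - win + 1, n_filters)

# Knowledge-based attention: weight each window by its similarity
# to the KB relation embedding, then pool with those weights
alpha = softmax(conv @ kb_rel)          # attention weights, sum to 1
sent_vec = alpha @ conv                 # attention-pooled sentence feature

# Combine the CNN feature with the KB embedding at the fully connected layer
fc_in = np.concatenate([sent_vec, kb_rel])
W_fc = rng.standard_normal((n_rel, fc_in.size)) * 0.1
probs = softmax(W_fc @ fc_in)           # distribution over relation labels
```

The sketch shows only the forward pass; training the filters and the fully connected layer would proceed by standard backpropagation over a labeled relation-extraction corpus.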