

Incorporating representation learning and multihead attention to improve biomedical cross-sentence n-ary relation extraction.

Affiliation

School of Computer Science and Technology, Dalian University of Technology, Dalian, China.

Publication Information

BMC Bioinformatics. 2020 Jul 16;21(1):312. doi: 10.1186/s12859-020-03629-9.

Abstract

BACKGROUND

Most biomedical information extraction focuses on binary relations within single sentences. However, there is great demand for extracting n-ary relations that span multiple sentences. At present, mainstream methods for the cross-sentence n-ary relation extraction task not only rely heavily on syntactic parsing but also ignore prior knowledge.

RESULTS

In this paper, we propose a novel cross-sentence n-ary relation extraction method that utilizes multihead attention and knowledge representations learned from a knowledge graph. Our model is built on self-attention, which can directly capture the relation between any two words regardless of their syntactic connection. In addition, our method uses entity and relation information from the knowledge base to assist in predicting the relation. Experiments on n-ary relation extraction show that combining context and knowledge representations significantly improves performance, and our results are comparable to those of state-of-the-art methods.
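The core mechanism the abstract describes is multihead self-attention, which scores every word pair directly rather than via a parse tree. A minimal NumPy sketch of scaled dot-product multihead self-attention is below; the weight shapes, single-sequence interface, and function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multihead_self_attention(X, num_heads, Wq, Wk, Wv, Wo):
    """X: (seq_len, d_model); Wq/Wk/Wv/Wo: (d_model, d_model)."""
    seq_len, d_model = X.shape
    d_head = d_model // num_heads
    # Project and split into heads: (num_heads, seq_len, d_head).
    Q = (X @ Wq).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    K = (X @ Wk).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    V = (X @ Wv).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    # Every token attends to every token: no syntactic structure needed.
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_head)   # (heads, seq, seq)
    attn = softmax(scores, axis=-1)
    # Recombine heads and apply the output projection.
    out = (attn @ V).transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ Wo
```

Because the attention matrix is (seq_len, seq_len), a pair of entity mentions in different sentences of a concatenated passage can interact in a single step, which is what makes this attractive for cross-sentence extraction.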

CONCLUSIONS

We explored a novel method for cross-sentence n-ary relation extraction. Unlike previous approaches, our method operates directly on the sequence and learns how to model the internal structure of sentences. In addition, we introduce knowledge representations learned from the knowledge graph into cross-sentence n-ary relation extraction. Experiments on knowledge representation learning show that representations of entities and relations can be learned from the knowledge graph, and encoding this knowledge provides consistent benefits.
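The abstract does not name the knowledge representation learning method; a common translation-based choice for embedding knowledge-graph entities and relations is TransE, sketched below purely as an illustration (the scoring convention and margin loss are assumptions, not necessarily what the paper uses):

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility: a valid triple (h, r, t) satisfies h + r ≈ t,
    so a smaller translation residual yields a higher (less negative) score."""
    return -np.linalg.norm(h + r - t)

def margin_loss(pos_score, neg_score, margin=1.0):
    """Margin ranking loss: push correct triples to score at least
    `margin` above corrupted (negative-sampled) triples."""
    return max(0.0, margin - pos_score + neg_score)
```

Entity and relation vectors trained this way can then be concatenated with contextual representations before the relation classifier, which is the "combining context and knowledge representations" step the abstract reports as beneficial.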


Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/924e/7364499/bce2258f7786/12859_2020_3629_Fig1_HTML.jpg
