
Biomedical named entity recognition with the combined feature attention and fully-shared multi-task learning.

Affiliations

Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan.

Department of Computer Science and Information Engineering, Asia University, Taichung, Taiwan.

Publication information

BMC Bioinformatics. 2022 Nov 3;23(1):458. doi: 10.1186/s12859-022-04994-3.

Abstract

BACKGROUND

Biomedical named entity recognition (BioNER) is a fundamental task in biomedical text mining that aims to automatically recognize and classify biomedical entities. The performance of BioNER systems directly affects downstream applications. Recently, deep neural networks, especially pre-trained language models, have driven great progress in BioNER. However, because high-quality, large-scale annotated data and relevant external knowledge are scarce, the capability of BioNER systems remains limited.

RESULTS

In this paper, we propose a novel fully-shared multi-task learning model built on BioBERT, a pre-trained language model for the biomedical domain, with a new attention module that integrates automatically processed syntactic information into the BioNER task. We conducted extensive experiments on seven benchmark BioNER datasets. Compared with the single-task BioBERT model, our best multi-task model improves the F1 score by 1.03% on BC2GM, 0.91% on NCBI-disease, 0.81% on Linnaeus, 1.26% on JNLPBA, 0.82% on BC5CDR-Chemical, 0.87% on BC5CDR-Disease, and 1.10% on Species-800.
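
The following is a minimal sketch of the fully-shared setup described above, assuming PyTorch and the Hugging Face transformers library with the public dmis-lab/biobert-base-cased-v1.1 checkpoint. The FeatureAttention class is an illustrative stand-in for the paper's combined feature attention (which consumes auto-processed syntactic features), and num_labels is left as a parameter; neither reflects the paper's exact design.

import torch
import torch.nn as nn
from transformers import AutoModel

class FeatureAttention(nn.Module):
    # Hypothetical stand-in for the paper's combined feature attention:
    # gates each token representation with a learned scalar weight.
    def __init__(self, hidden_size):
        super().__init__()
        self.score = nn.Linear(hidden_size, 1)

    def forward(self, hidden):  # hidden: (batch, seq_len, hidden_size)
        weights = torch.sigmoid(self.score(hidden))
        return hidden * weights

class FullySharedBioNER(nn.Module):
    # Fully-shared multi-task learning: a single encoder, attention module,
    # and tagging head serve every dataset; training batches are mixed
    # across all corpora rather than routed to task-specific layers.
    def __init__(self, num_labels):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("dmis-lab/biobert-base-cased-v1.1")
        hidden_size = self.encoder.config.hidden_size
        self.attention = FeatureAttention(hidden_size)
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        return self.classifier(self.attention(hidden))  # per-token tag logits

Because every parameter, including the tagging head, is shared, mixed batches from all seven corpora update one model, which is how smaller datasets such as Linnaeus and Species-800 can benefit from the larger ones.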

CONCLUSION

The results demonstrate that our model outperforms previous studies on all datasets. Further analysis and case studies confirm the importance of the proposed attention module and the fully-shared multi-task learning method used in our model.

Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/93e7/9632084/b91dc37559b1/12859_2022_4994_Fig1_HTML.jpg
