Department of Computer Science and Engineering, Korea University, Seoul 02841, Korea.
Clova AI Research, Naver Corp, Seong-Nam 13561, Korea.
Bioinformatics. 2020 Feb 15;36(4):1234-1240. doi: 10.1093/bioinformatics/btz682.
Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in natural language processing (NLP), extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has boosted the development of effective biomedical text mining models. However, directly applying the advancements in NLP to biomedical text mining often yields unsatisfactory results due to a word distribution shift from general domain corpora to biomedical corpora. In this article, we investigate how the recently introduced pre-trained language model BERT can be adapted for biomedical corpora.
We introduce BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), which is a domain-specific language representation model pre-trained on large-scale biomedical corpora. With almost the same architecture across tasks, BioBERT largely outperforms BERT and previous state-of-the-art models in a variety of biomedical text mining tasks when pre-trained on biomedical corpora. While BERT obtains performance comparable to that of previous state-of-the-art models, BioBERT significantly outperforms them on the following three representative biomedical text mining tasks: biomedical named entity recognition (0.62% F1 score improvement), biomedical relation extraction (2.80% F1 score improvement) and biomedical question answering (12.24% MRR improvement). Our analysis results show that pre-training BERT on biomedical corpora helps it to understand complex biomedical texts.
We make the pre-trained weights of BioBERT freely available at https://github.com/naver/biobert-pretrained, and the source code for fine-tuning BioBERT available at https://github.com/dmis-lab/biobert.
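The released weights can be loaded with standard tooling. Below is a minimal sketch, not taken from the paper (whose reference implementation uses the original TensorFlow BERT code): it assumes the Hugging Face transformers port of the released BioBERT weights, with the model id and the BIO label set being illustrative assumptions, and shows how the same pre-trained encoder is reused with only a lightweight task-specific output layer for biomedical NER.

```python
# Minimal sketch: preparing BioBERT for biomedical NER fine-tuning.
# Assumes the Hugging Face `transformers` port of the released weights;
# the model id and label set below are assumptions, not from the paper.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL_ID = "dmis-lab/biobert-base-cased-v1.1"  # assumed Hub id for the released weights

# Illustrative BIO tagging scheme with a single entity type (Disease).
labels = ["O", "B-Disease", "I-Disease"]

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForTokenClassification.from_pretrained(
    MODEL_ID,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
)

# The same encoder is shared across tasks; only this classification head
# differs between the NER, relation extraction and QA fine-tuning setups.
sentence = "Familial hypercholesterolemia is caused by mutations in the LDLR gene."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, seq_len, num_labels)
predictions = logits.argmax(dim=-1)[0]
print([labels[i] for i in predictions])        # head is untrained here, so tags are random until fine-tuned
```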