Rohanian Omid, Nouriborji Mohammadmahdi, Jauncey Hannah, Kouchaki Samaneh, Nooralahzadeh Farhad, Clifton Lei, Merson Laura, Clifton David A
Department of Engineering Science, University of Oxford, Oxford, UK.
NLPie Research, Oxford, UK.
Nat Lang Eng. 2024 Sep;30(5):887-914. doi: 10.1017/S1351324923000542. Epub 2024 Jan 12.
Specialised pre-trained language models are becoming more common in Natural Language Processing (NLP), since they can potentially outperform models trained on generic texts. BioBERT (Lee et al. 2020) and BioClinicalBERT (Alsentzer et al. 2019) are two examples of such models that have shown promise in medical NLP tasks. Many of these models are overparametrised and resource-intensive, but thanks to techniques like knowledge distillation, it is possible to create smaller versions that perform almost as well as their larger counterparts. In this work, we specifically focus on the development of compact language models for processing clinical texts (e.g. progress notes and discharge summaries). Using knowledge distillation and continual learning, we developed a number of efficient lightweight clinical transformers, with the number of parameters ranging from million to million. These models performed comparably to larger models such as BioBERT and BioClinicalBERT and significantly outperformed other compact models trained on general or biomedical data. Our extensive evaluation was done across several standard datasets and covered a wide range of clinical text-mining tasks, including natural language inference, relation extraction, named entity recognition, and sequence classification. To our knowledge, this is the first comprehensive study focused specifically on creating efficient and compact transformers for clinical NLP tasks. The models and code used in this study can be found on our Hugging Face profile at https://huggingface.co/nlpie and GitHub page at https://github.com/nlpie-research/Lightweight-Clinical-Transformers, respectively, promoting the reproducibility of our results.
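The knowledge-distillation technique mentioned above trains a small student model to match the softened output distribution of a larger teacher. As a minimal illustration (following the standard logit-matching objective of Hinton et al., not the paper's exact training setup), the core loss is a temperature-scaled KL divergence between teacher and student predictions; the function names and temperature value below are illustrative assumptions:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: a higher temperature softens the
    # distribution, exposing the teacher's "dark knowledge" about
    # relative class similarities.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) over temperature-softened distributions,
    # scaled by T^2 so gradients keep a comparable magnitude as T varies.
    # In practice this term is combined with a cross-entropy loss on
    # the gold labels; that part is omitted here for brevity.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return kl * temperature ** 2
```

When the student's logits match the teacher's exactly, the loss is zero; any mismatch yields a positive penalty that pulls the student's distribution toward the teacher's.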