Yan An, McAuley Julian, Lu Xing, Du Jiang, Chang Eric Y, Gentili Amilcare, Hsu Chun-Nan
University of California, San Diego, 9500 Gilman Dr, La Jolla, CA 92093-0608 (A.Y., J.M., X.L., J.D., E.Y.C., A.G., C.N.H.); and Veterans Affairs San Diego Healthcare System, San Diego, Calif (E.Y.C., A.G.).
Radiol Artif Intell. 2022 Jun 15;4(4):e210258. doi: 10.1148/ryai.210258. eCollection 2022 Jul.
To investigate whether tailoring a transformer-based language model to radiology is beneficial for radiology natural language processing (NLP) applications.
This retrospective study presents RadBERT, a family of bidirectional encoder representations from transformers (BERT)-based language models adapted for radiology. Transformers were pretrained with either 2.16 or 4.42 million radiology reports from U.S. Department of Veterans Affairs health care systems nationwide, starting from four different initializations (BERT-base, Clinical-BERT, robustly optimized BERT pretraining approach [RoBERTa], and BioMed-RoBERTa), to create six RadBERT variants. Each variant was fine-tuned for three representative NLP tasks in radiology: (a) abnormal sentence classification, in which models classified sentences in radiology reports as reporting abnormal or normal findings; (b) report coding, in which models assigned a diagnostic code to a given radiology report under each of five coding systems; and (c) report summarization, in which, given the findings section of a radiology report, models selected key sentences that summarized the findings. Model performance was compared by bootstrap resampling against five intensively studied transformer language models as baselines: BERT-base, BioBERT, Clinical-BERT, BlueBERT, and BioMed-RoBERTa.
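As a rough illustration of the domain-adaptive pretraining step described above, the sketch below continues masked language model pretraining from a BERT-base checkpoint on raw report text using the Hugging Face Transformers library. The file name radiology_reports.txt, the hyperparameters, and the single-checkpoint setup are illustrative assumptions; the study's actual corpus and training configuration are not reproduced here.

```python
# Minimal sketch of domain-adaptive pretraining (masked language modeling)
# with Hugging Face Transformers. Assumes radiology report text is available
# as one report per line in "radiology_reports.txt" (a hypothetical file);
# hyperparameters are illustrative, not the authors' exact configuration.
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Tokenize raw report text, truncating to the model's maximum length.
dataset = load_dataset("text", data_files={"train": "radiology_reports.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Randomly mask 15% of tokens; the model learns to reconstruct them.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="radbert-mlm",
                           per_device_train_batch_size=16,
                           num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()  # continued pretraining on in-domain report text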
For abnormal sentence classification, all models performed well (accuracies above 97.5 and F1 scores above 95.0). RadBERT variants achieved significantly higher scores than corresponding baselines when given only 10% or less of the 12 458 annotated training sentences. For report coding, all variants significantly outperformed baselines for all five coding systems. For report summarization, the variant RadBERT-BioMed-RoBERTa performed best among all models, achieving a Recall-Oriented Understudy for Gisting Evaluation (ROUGE)-1 score of 16.18, compared with 15.27 for the corresponding baseline (BioMed-RoBERTa; P < .004).
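The bootstrap resampling comparison reported above can be sketched as a paired bootstrap test over per-example scores (for example, per-report ROUGE-1). The function below, with synthetic scores and a resample count chosen purely for illustration, is an assumption-laden sketch rather than the authors' evaluation code.

```python
# Minimal sketch of a paired bootstrap comparison between two models'
# per-example scores (e.g., per-report ROUGE-1 or per-sentence accuracy).
# Score arrays and the number of resamples are illustrative assumptions.
import numpy as np

def paired_bootstrap_p(scores_a, scores_b, n_resamples=10_000, seed=0):
    """Estimate how often model B matches or beats model A's mean score
    when the evaluation examples are resampled with replacement."""
    rng = np.random.default_rng(seed)
    scores_a = np.asarray(scores_a, dtype=float)
    scores_b = np.asarray(scores_b, dtype=float)
    n = len(scores_a)
    wins = 0
    for _ in range(n_resamples):
        idx = rng.integers(0, n, size=n)  # resample example indices
        if scores_b[idx].mean() >= scores_a[idx].mean():
            wins += 1
    return wins / n_resamples  # small value -> model A reliably better

# Usage with synthetic scores standing in for real per-report metrics:
rng = np.random.default_rng(1)
model_a = rng.normal(0.62, 0.10, size=500)  # hypothetical RadBERT variant
model_b = rng.normal(0.60, 0.10, size=500)  # hypothetical baseline
print(paired_bootstrap_p(model_a, model_b))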
Transformer-based language models tailored to radiology improved performance on radiology NLP tasks compared with baseline transformer language models.
Keywords: Translation, Unsupervised Learning, Transfer Learning, Neural Networks, Informatics
© RSNA, 2022. See also the commentary by Wiggins and Tejani in this issue.