Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 YongAn Road, Beijing, 100050, People's Republic of China.
School of Medical Technology, Beijing Institute of Technology, No. 5 Zhongguancun East Road, Beijing, 100050, People's Republic of China.
BMC Med Inform Decis Mak. 2022 Jul 30;22(1):200. doi: 10.1186/s12911-022-01946-y.
BACKGROUND: Given the increasing number of people suffering from tinnitus, accurately categorizing patients with actionable reports is attractive for assisting clinical decision making. However, this process requires experienced physicians and considerable human labor. Natural language processing (NLP) has shown great potential in big-data analytics of medical texts, yet its application to domain-specific analysis of radiology reports remains limited.

OBJECTIVE: The aim of this study is to propose a novel approach for classifying actionable radiology reports of tinnitus patients using Bidirectional Encoder Representations from Transformers (BERT)-based models, and to evaluate the benefits of in-domain pre-training (IDPT) together with a sequence adaptation strategy.

METHODS: A total of 5864 temporal bone computed tomography (CT) reports were labeled by two experienced radiologists as follows: (1) normal findings without notable lesions; (2) notable lesions but uncorrelated with tinnitus; and (3) at least one lesion considered a potential cause of tinnitus. We then constructed a framework consisting of deep learning (DL) neural networks and self-supervised BERT models. A tinnitus domain-specific corpus was used to pre-train the BERT model to further improve its embedding weights. In addition, we evaluated multiple max-sequence-length settings in BERT to reduce the computational cost. After a comprehensive comparison of all metrics, we identified the most promising approach by comparing F1-scores and AUC values.

RESULTS: In the first experiment, the fine-tuned BERT model achieved a more promising result (AUC 0.868, F1 0.760) on the validation data than the Word2Vec-based models (AUC 0.767, F1 0.733). In the second experiment, the BERT in-domain pre-training model (AUC 0.948, F1 0.841) performed significantly better than the base BERT model (AUC 0.868, F1 0.760). Additionally, among the BERT fine-tuning variants, Mengzi achieved the highest AUC of 0.878 (F1 0.764). Finally, we found that a BERT max sequence length of 128 tokens achieved an AUC of 0.866 (F1 0.736), almost equal to that of the 512-token maximum (AUC 0.868, F1 0.760).

CONCLUSION: We developed a reliable BERT-based framework for tinnitus diagnosis from Chinese radiology reports, along with a sequence adaptation strategy that reduces computational resources while maintaining accuracy. These findings could provide a reference for NLP development on Chinese radiology reports.
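To make the two-stage pipeline in METHODS concrete, below is a minimal Python sketch of IDPT followed by three-class fine-tuning with a shortened max sequence length, assuming the HuggingFace transformers library and the generic bert-base-chinese checkpoint. The checkpoint name, the toy report text, and all settings shown are illustrative assumptions, not the authors' exact configuration.

# Minimal sketch of the two-stage approach described above, assuming the
# HuggingFace transformers library; the checkpoint name, toy report, and
# settings are illustrative assumptions, not the authors' exact setup.
import torch
from transformers import (
    BertTokenizerFast,
    BertForMaskedLM,
    BertForSequenceClassification,
)

# Stage 1: in-domain pre-training (IDPT) -- continue masked-language-model
# training on an unlabeled tinnitus-domain corpus to adapt the embeddings.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
mlm_model = BertForMaskedLM.from_pretrained("bert-base-chinese")
# ... standard MLM training over the domain-specific corpus would run here ...

# Stage 2: fine-tune a classifier head for the three report classes:
# 0 = normal findings, 1 = lesions uncorrelated with tinnitus,
# 2 = at least one lesion that is a potential cause of tinnitus.
clf = BertForSequenceClassification.from_pretrained(
    "bert-base-chinese",  # in practice, the IDPT weights would be loaded here
    num_labels=3,
)
clf.eval()

report = "双侧颞骨CT平扫未见明显异常。"  # hypothetical toy report text
# Sequence adaptation strategy: truncate to 128 tokens instead of BERT's
# 512-token maximum, trading negligible AUC/F1 for much less computation.
inputs = tokenizer(
    report,
    truncation=True,
    padding="max_length",
    max_length=128,
    return_tensors="pt",
)
with torch.no_grad():
    logits = clf(**inputs).logits
pred = logits.argmax(dim=-1).item()  # predicted class index (0, 1, or 2)
print(pred)

Because self-attention cost grows quadratically with sequence length, truncating from 512 to 128 tokens cuts the attention computation by roughly a factor of 16, which is why the near-identical AUC at 128 tokens reported above translates into a substantial resource saving.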