Boschenriedter Christian, Rubbert Christian, Vach Marius, Caspers Julian
Department of Diagnostic and Interventional Radiology, Medical Faculty and University Hospital Düsseldorf, Heinrich-Heine-University Düsseldorf, Moorenstraße 5, 40225, Düsseldorf, Germany.
Clin Neuroradiol. 2025 Aug 18. doi: 10.1007/s00062-025-01554-z.
Selection of appropriate imaging sequence protocols for cranial magnetic resonance imaging (MRI) is crucial to address the medical question and adequately support patient care. Inappropriate protocol selection can compromise diagnostic accuracy, extend scan duration, and increase the risk of misdiagnosis. Typically, radiologists determine scanning protocols based on their expertise, a process that can be time-consuming and subject to variability. Language models offer the potential to streamline this process. This study investigates the capability of bidirectional encoder representations from transformers (BERT)-based models to suggest appropriate MRI protocols based on referral information.

A total of 410 anonymized electronic referrals for cranial MRI from a local order-entry system were categorized into nine protocol classes by an experienced neuroradiologist. Locally hosted instances of four different pre-trained BERT-based classifiers (BERT, ModernBERT, GottBERT, and medBERT.de) were trained to classify protocols based on referral entries, including preliminary diagnoses, prior treatment history, and clinical questions. Each model was additionally fine-tuned for local language on a large dataset of electronic referrals.

The model based on medBERT.de with local language fine-tuning performed best, correctly predicting 81% of all protocols and achieving a macro-F1 score of 0.71, with macro-precision and macro-recall values of 0.73 and 0.71, respectively. Moreover, local language fine-tuning led to performance improvements across all models.

These results demonstrate the potential of language models to predict MRI protocols, even with limited training data. This approach could accelerate and standardize radiological protocol selection, offering significant benefits for clinical workflows.
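The sketch below illustrates, in broad strokes, how a pre-trained BERT-based model can be fine-tuned as a nine-class referral-to-protocol classifier and evaluated with the macro-averaged metrics reported above. It is not the authors' code: the checkpoint name, hyperparameters, and placeholder data are assumptions, and the preceding local-language (domain) fine-tuning step is only noted in a comment.

```python
# Minimal sketch (assumptions, not the study's implementation) of fine-tuning a
# pre-trained German/medical BERT model to map referral text to one of nine
# cranial MRI protocol classes, using Hugging Face transformers.
import numpy as np
from datasets import Dataset
from sklearn.metrics import precision_recall_fscore_support
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

NUM_CLASSES = 9                        # nine protocol classes (per the abstract)
MODEL_NAME = "GerMedBERT/medbert-512"  # assumed checkpoint for medBERT.de
# In the study, each base model was additionally adapted to local referral
# language beforehand (e.g. continued masked-language-model training on a
# large unlabeled referral corpus); that step is omitted here.

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_CLASSES)

def tokenize(batch):
    # Referral text: preliminary diagnosis, prior treatment, clinical question
    return tokenizer(batch["text"], truncation=True, max_length=512)

# Hypothetical labelled referrals; the study used 410 anonymized referrals
# annotated by a neuroradiologist. Replace with the real, annotated export.
train_ds = Dataset.from_dict({"text": ["..."], "label": [0]}).map(tokenize, batched=True)
eval_ds = Dataset.from_dict({"text": ["..."], "label": [0]}).map(tokenize, batched=True)

def compute_metrics(eval_pred):
    # Accuracy plus macro-averaged precision, recall, and F1 over all classes
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    p, r, f1, _ = precision_recall_fscore_support(labels, preds, average="macro")
    return {"accuracy": float((preds == labels).mean()),
            "macro_precision": p, "macro_recall": r, "macro_f1": f1}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="protocol-clf",
                           num_train_epochs=5,
                           per_device_train_batch_size=16),
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,              # enables padding via the default collator
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())             # reports accuracy and macro metrics
```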