
A Large Language Model to Detect Negated Expressions in Radiology Reports.

Author Information

Su Yvonne, Babore Yonatan B, Kahn Charles E

Affiliations

Department of Radiology, Perelman School of Medicine, University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA.

Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA.

Publication Information

J Imaging Inform Med. 2025 Jun;38(3):1297-1303. doi: 10.1007/s10278-024-01274-9. Epub 2024 Sep 25.

Abstract

Natural language processing (NLP) is crucial to extract information accurately from unstructured text to provide insights for clinical decision-making, quality improvement, and medical research. This study compared the performance of a rule-based NLP system and a medical-domain transformer-based model to detect negated concepts in radiology reports. Using a corpus of 984 de-identified radiology reports from a large U.S.-based academic health system (1000 consecutive reports, excluding 16 duplicates), the investigators compared the rule-based medspaCy system and the Clinical Assertion and Negation Classification Bidirectional Encoder Representations from Transformers (CAN-BERT) system to detect negated expressions of terms from RadLex, the Unified Medical Language System Metathesaurus, and the Radiology Gamuts Ontology. Power analysis determined a sample size of 382 terms to achieve α = 0.05 and β = 0.8 for McNemar's test; based on an estimate of 15% negated terms, 2800 randomly selected terms were annotated manually as negated or not negated. Precision, recall, and F1 of the two models were compared using McNemar's test. Of the 2800 terms, 387 (13.8%) were negated. For negation detection, medspaCy attained a recall of 0.795, precision of 0.356, and F1 of 0.492. CAN-BERT achieved a recall of 0.785, precision of 0.768, and F1 of 0.777. Although recall was not significantly different, CAN-BERT had significantly better precision (χ2 = 304.64; p < 0.001). The transformer-based CAN-BERT model detected negated terms in radiology reports with high precision and recall; its precision significantly exceeded that of the rule-based medspaCy system. Use of this system will improve data extraction from textual reports to support information retrieval, AI model training, and discovery of causal relationships.
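The study's evaluation design, per-model precision, recall, and F1 over manually annotated terms, then McNemar's test on the two models' paired correctness, can be sketched as follows. All counts in this example are hypothetical and do not reproduce the paper's data:

```python
# Sketch of the evaluation design described in the abstract: compute
# precision/recall/F1 for each negation detector from confusion counts,
# then compare the two models with McNemar's test on paired predictions.
# Counts below are illustrative only, not the study's actual data.

def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Standard precision, recall, and F1 from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def mcnemar_chi2(b: int, c: int) -> float:
    """Continuity-corrected McNemar chi-square statistic.

    b: terms where only model A classified correctly
    c: terms where only model B classified correctly
    Concordant pairs (both right or both wrong) do not enter the statistic.
    """
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical counts for a low-precision rule-based model:
p, r, f1 = precision_recall_f1(tp=300, fp=540, fn=80)
print(f"precision={p:.3f} recall={r:.3f} f1={f1:.3f}")

# Hypothetical discordant-pair counts; compare the statistic against the
# chi-square critical value with 1 degree of freedom (3.84 at alpha=0.05).
print(f"McNemar chi2={mcnemar_chi2(b=25, c=310):.2f}")
```

A large imbalance between the discordant counts `b` and `c`, as when one model's false positives dwarf the other's, is what drives the large chi-square value reported in the study.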

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c126/12092861/f0f212d8d1ac/10278_2024_1274_Fig1_HTML.jpg

Similar Articles

1
Automatic detection of actionable radiology reports using bidirectional encoder representations from transformers.
BMC Med Inform Decis Mak. 2021 Sep 11;21(1):262. doi: 10.1186/s12911-021-01623-6.
2
Extracting Pulmonary Nodules and Nodule Characteristics from Radiology Reports of Lung Cancer Screening Patients Using Transformer Models.
J Healthc Inform Res. 2024 May 17;8(3):463-477. doi: 10.1007/s41666-024-00166-5. eCollection 2024 Sep.
3
Does BERT need domain adaptation for clinical negation detection?
J Am Med Inform Assoc. 2020 Apr 1;27(4):584-591. doi: 10.1093/jamia/ocaa001.

References Cited in This Article

1
Classification of Diagnostic Certainty in Radiology Reports with Deep Learning.
Stud Health Technol Inform. 2024 Jan 25;310:569-573. doi: 10.3233/SHTI231029.
2
Large language models in medicine.
Nat Med. 2023 Aug;29(8):1930-1940. doi: 10.1038/s41591-023-02448-8. Epub 2023 Jul 17.
3
Automated detection of causal relationships among diseases and imaging findings in textual radiology reports.
J Am Med Inform Assoc. 2023 Sep 25;30(10):1701-1706. doi: 10.1093/jamia/ocad119.
4
Clinical named entity recognition and relation extraction using natural language processing of medical free text: A systematic review.
Int J Med Inform. 2023 Sep;177:105122. doi: 10.1016/j.ijmedinf.2023.105122. Epub 2023 Jun 5.
6
BERT-based Transfer Learning in Sentence-level Anatomic Classification of Free-Text Radiology Reports.
Radiol Artif Intell. 2023 Feb 15;5(2):e220097. doi: 10.1148/ryai.220097. eCollection 2023 Mar.
7
Negation detection in Dutch clinical texts: an evaluation of rule-based and machine learning methods.
BMC Bioinformatics. 2023 Jan 9;24(1):10. doi: 10.1186/s12859-022-05130-x.
8
Information extraction from electronic medical documents: state of the art and future research directions.
Knowl Inf Syst. 2023;65(2):463-516. doi: 10.1007/s10115-022-01779-1. Epub 2022 Nov 8.
9
Deep Learning-based Assessment of Oncologic Outcomes from Natural Language Processing of Structured Radiology Reports.
Radiol Artif Intell. 2022 Jul 20;4(5):e220055. doi: 10.1148/ryai.220055. eCollection 2022 Sep.
10
Applications of natural language processing in radiology: A systematic review.
Int J Med Inform. 2022 Jul;163:104779. doi: 10.1016/j.ijmedinf.2022.104779. Epub 2022 Apr 26.
