Suppr 超能文献




Disambiguating Clinical Abbreviations Using a One-Fits-All Classifier Based on Deep Learning Techniques.

Affiliations

Applied Computing Department, Palestine Technical University - Kadoorie, Tulkarem, Palestine.

Department of Computer Science, Universidad Carlos III de Madrid, Leganés, Spain.

Publication

Methods Inf Med. 2022 Jun;61(S 01):e28-e34. doi: 10.1055/s-0042-1742388. Epub 2022 Feb 1.

DOI: 10.1055/s-0042-1742388
PMID: 35104909
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9246508/
Abstract

BACKGROUND

Abbreviations are an essential part of the clinical narrative: they save time and space, and they are sometimes used to conceal serious or incurable illnesses. Misinterpreting a clinical abbreviation can affect patients directly and can degrade other services such as clinical support systems. Because there is no community-wide convention for coining new abbreviations, they are often difficult to understand. Clinical abbreviation disambiguation aims to predict the exact meaning of an abbreviation from its context, a crucial step in understanding clinical notes.

OBJECTIVES

Disambiguating clinical abbreviations is an essential task in information extraction from medical texts. Deep contextualized representation models have shown promising results on most word sense disambiguation tasks. In this work, we propose a one-fits-all classifier that disambiguates clinical abbreviations using deep contextualized representations from pretrained language models such as Bidirectional Encoder Representations from Transformers (BERT).

METHODS

A set of experiments with different pretrained clinical BERT models was performed to investigate fine-tuning methods for disambiguating clinical abbreviations. One-fits-all classifiers were used to improve the disambiguation of rare clinical abbreviations.
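The one-fits-all framing described above can be sketched in miniature. This is an illustration only, not the paper's implementation: the paper fine-tunes pretrained clinical BERT models, whereas here a toy bag-of-words scorer stands in for the deep contextualized encoder, and the sense inventory and example sentences are invented. The point it shows is that a single shared scorer ranks every candidate expansion of any abbreviation against its context, so no per-abbreviation classifier is needed.

```python
# Toy sketch of a one-fits-all abbreviation disambiguator (bag-of-words
# stand-in for a fine-tuned clinical BERT encoder; data is invented).
from collections import Counter

# Hypothetical sense inventory: each abbreviation maps to candidate expansions.
SENSES = {
    "ca": ["cancer", "calcium"],
    "pt": ["patient", "physical therapy"],
}

def encode(text):
    """Stand-in for a deep contextualized encoder: bag-of-words counts."""
    return Counter(text.lower().split())

def score(context_vec, expansion):
    """One shared scorer for ALL abbreviations: token overlap between
    the context and the candidate expansion."""
    return sum(context_vec[tok] for tok in expansion.split())

def disambiguate(abbrev, context):
    """Return the candidate expansion the shared scorer ranks highest."""
    vec = encode(context)
    return max(SENSES[abbrev], key=lambda exp: score(vec, exp))

print(disambiguate("ca", "serum ca and phosphate levels suggest calcium imbalance"))
# → calcium
```

Because the scorer takes (context, candidate expansion) pairs rather than being trained per abbreviation, it can in principle score expansions of abbreviations it rarely or never saw in training, which is the property the paper exploits for rare abbreviations.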

RESULTS

One-fits-all classifiers with deep contextualized representations from the Bioclinical, BlueBERT, and MS_BERT pretrained models improved accuracy on the University of Minnesota data set, achieving 98.99%, 98.75%, and 99.13%, respectively. All three models outperform the previous state of the art of roughly 98.39%, with MS_BERT yielding the best accuracy.

CONCLUSION

Deep contextualized representations obtained by fine-tuning pretrained language models proved effective for disambiguating clinical abbreviations; the approach is robust for rare and unseen abbreviations and avoids building a separate classifier for each abbreviation. Transfer learning can thus support the development of practical abbreviation disambiguation systems.


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5c59/9246508/2404b0a20cf1/10-1055-s-0042-1742388-i21010102-1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5c59/9246508/8780bd4b8018/10-1055-s-0042-1742388-i21010102-2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5c59/9246508/9b34580336cf/10-1055-s-0042-1742388-i21010102-3.jpg

Similar Articles

1
Disambiguating Clinical Abbreviations Using a One-Fits-All Classifier Based on Deep Learning Techniques.
Methods Inf Med. 2022 Jun;61(S 01):e28-e34. doi: 10.1055/s-0042-1742388. Epub 2022 Feb 1.
2
Disambiguating Clinical Abbreviations by One-to-All Classification: Algorithm Development and Validation Study.
JMIR Med Inform. 2024 Oct 1;12:e56955. doi: 10.2196/56955.
3
Extracting comprehensive clinical information for breast cancer using deep learning methods.
Int J Med Inform. 2019 Dec;132:103985. doi: 10.1016/j.ijmedinf.2019.103985. Epub 2019 Oct 2.
4
Use of BERT (Bidirectional Encoder Representations from Transformers)-Based Deep Learning Method for Extracting Evidences in Chinese Radiology Reports: Development of a Computer-Aided Liver Cancer Diagnosis Framework.
J Med Internet Res. 2021 Jan 12;23(1):e19689. doi: 10.2196/19689.
5
Improving clinical abbreviation sense disambiguation using attention-based Bi-LSTM and hybrid balancing techniques in imbalanced datasets.
J Eval Clin Pract. 2024 Oct;30(7):1327-1336. doi: 10.1111/jep.14041. Epub 2024 Jun 21.
6
A long journey to short abbreviations: developing an open-source framework for clinical abbreviation recognition and disambiguation (CARD).
J Am Med Inform Assoc. 2017 Apr 1;24(e1):e79-e86. doi: 10.1093/jamia/ocw109.
7
Clinical Abbreviation Disambiguation Using Deep Contextualized Representation.
Stud Health Technol Inform. 2020 Jun 16;270:88-92. doi: 10.3233/SHTI200128.
8
A Fine-Tuned Bidirectional Encoder Representations From Transformers Model for Food Named-Entity Recognition: Algorithm Development and Validation.
J Med Internet Res. 2021 Aug 9;23(8):e28229. doi: 10.2196/28229.
9
Relation Classification for Bleeding Events From Electronic Health Records Using Deep Learning Systems: An Empirical Study.
JMIR Med Inform. 2021 Jul 2;9(7):e27527. doi: 10.2196/27527.
10
Leveraging Large Language Models for Clinical Abbreviation Disambiguation.
J Med Syst. 2024 Feb 27;48(1):27. doi: 10.1007/s10916-024-02049-z.

Cited By

1
Deciphering Abbreviations in Malaysian Clinical Notes Using Machine Learning.
Methods Inf Med. 2024 Dec;63(5-06):195-202. doi: 10.1055/a-2521-4372. Epub 2025 Jan 22.
2
Clinical entity augmented retrieval for clinical information extraction.
NPJ Digit Med. 2025 Jan 19;8(1):45. doi: 10.1038/s41746-024-01377-1.
3
Health Care Language Models and Their Fine-Tuning for Information Extraction: Scoping Review.
JMIR Med Inform. 2024 Oct 21;12:e60164. doi: 10.2196/60164.
4
Processing of Short-Form Content in Clinical Narratives: Systematic Scoping Review.
J Med Internet Res. 2024 Sep 26;26:e57852. doi: 10.2196/57852.
5
Clinical Note Structural Knowledge Improves Word Sense Disambiguation.
AMIA Jt Summits Transl Sci Proc. 2024 May 31;2024:515-524. eCollection 2024.
6
Leveraging Large Language Models for Clinical Abbreviation Disambiguation.
J Med Syst. 2024 Feb 27;48(1):27. doi: 10.1007/s10916-024-02049-z.
7
O2 supplementation disambiguation in clinical narratives to support retrospective COVID-19 studies.
BMC Med Inform Decis Mak. 2024 Jan 31;24(1):29. doi: 10.1186/s12911-024-02425-2.
8
Sequence Labeling for Disambiguating Medical Abbreviations.
J Healthc Inform Res. 2023 Sep 14;7(4):501-526. doi: 10.1007/s41666-023-00146-1. eCollection 2023 Dec.
9
Deciphering clinical abbreviations with a privacy protecting machine learning system.
Nat Commun. 2022 Dec 2;13(1):7456. doi: 10.1038/s41467-022-35007-9.

References

1
Zero-Shot Clinical Acronym Expansion via Latent Meaning Cells.
Proc Mach Learn Res. 2020 Dec;136:12-40.
2
The CLASSE GATOR (CLinical Acronym SenSE disambiGuATOR): A Method for predicting acronym sense from neonatal clinical notes.
Int J Med Inform. 2020 May;137:104101. doi: 10.1016/j.ijmedinf.2020.104101. Epub 2020 Feb 14.
3
BioBERT: a pre-trained biomedical language representation model for biomedical text mining.
Bioinformatics. 2020 Feb 15;36(4):1234-1240. doi: 10.1093/bioinformatics/btz682.
4
Ambiguous medical abbreviation study: challenges and opportunities.
Intern Med J. 2020 Sep;50(9):1073-1078. doi: 10.1111/imj.14442.
5
A method for harmonization of clinical abbreviation and acronym sense inventories.
J Biomed Inform. 2018 Dec;88:62-69. doi: 10.1016/j.jbi.2018.11.004. Epub 2018 Nov 7.
6
A convolutional route to abbreviation disambiguation in clinical text.
J Biomed Inform. 2018 Oct;86:71-78. doi: 10.1016/j.jbi.2018.07.025. Epub 2018 Aug 15.
7
Towards Comprehensive Clinical Abbreviation Disambiguation Using Machine-Labeled Training Data.
AMIA Annu Symp Proc. 2017 Feb 10;2016:560-569. eCollection 2016.
8
MIMIC-III, a freely accessible critical care database.
Sci Data. 2016 May 24;3:160035. doi: 10.1038/sdata.2016.35.
9
Natural Language Processing in Oncology: A Review.
JAMA Oncol. 2016 Jun 1;2(6):797-804. doi: 10.1001/jamaoncol.2016.0213.
10
Deep learning in neural networks: an overview.
Neural Netw. 2015 Jan;61:85-117. doi: 10.1016/j.neunet.2014.09.003. Epub 2014 Oct 13.