
A Hybrid Deep Learning Approach for Spatial Trigger Extraction from Radiology Reports

Authors

Datta Surabhi, Roberts Kirk

Affiliations

School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, USA.

Publication

Proc Conf Empir Methods Nat Lang Process. 2020 Nov;2020:50-55. doi: 10.18653/v1/2020.splu-1.6.

DOI: 10.18653/v1/2020.splu-1.6
PMID: 33336212
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7744270/
Abstract

Radiology reports contain important clinical information about patients, which is often tied together through spatial expressions. Spatial expressions (or triggers) are mainly used to describe the positioning of radiographic findings or medical devices with respect to anatomical structures. Because these expressions arise from the radiologist's mental visualization of their interpretations, they are varied and complex. The focus of this work is to automatically identify spatial expression terms in three different radiology sub-domains. We propose a hybrid deep learning-based NLP method that 1) generates a set of candidate spatial triggers by exact match against known trigger terms from the training data, 2) applies domain-specific constraints to filter the candidate triggers, and 3) uses a BERT-based classifier to predict whether each remaining candidate is a true spatial trigger. The results are promising, with an improvement of 24 points in average F1 over a standard BERT-based sequence labeler.
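The three-step pipeline described above can be sketched in a few lines. This is a minimal illustration only: the toy trigger lexicon, the single sentence-initial filtering rule, and the stubbed-out classifier are hypothetical placeholders, not the lexicon, constraints, or BERT model used in the paper.

```python
import re

# Hypothetical toy lexicon of spatial trigger terms, standing in for the
# trigger terms collected from the training data.
TRIGGER_LEXICON = {"in", "at", "within", "along", "overlying", "adjacent to"}

def generate_candidates(report, lexicon=TRIGGER_LEXICON):
    """Step 1: generate candidate triggers by exact match against known terms."""
    lowered = report.lower()
    candidates = []
    for term in lexicon:
        for m in re.finditer(r"\b" + re.escape(term) + r"\b", lowered):
            candidates.append((m.start(), m.end(), term))
    return sorted(candidates)

def filter_candidates(candidates, report):
    """Step 2: apply domain-specific constraints. Illustrative placeholder
    rule: drop sentence-initial matches, where prepositions like 'in' are
    unlikely to act as spatial triggers."""
    kept = []
    for start, end, term in candidates:
        prefix = report[:start].rstrip()
        if prefix == "" or prefix.endswith("."):
            continue  # sentence-initial -> discard
        kept.append((start, end, term))
    return kept

def classify(candidate, report):
    """Step 3: stand-in for the BERT-based binary classifier. A real
    implementation would encode the report with the candidate span marked
    and predict spatial vs. non-spatial; this stub accepts everything."""
    return True

def extract_spatial_triggers(report):
    candidates = filter_candidates(generate_candidates(report), report)
    return [report[s:e] for (s, e, t) in candidates if classify((s, e, t), report)]
```

With this sketch, a report such as "There is an opacity in the right lower lobe. In comparison, no change." yields only the intra-sentence "in"; the sentence-initial "In" is removed by the constraint step before classification.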
