Similar Articles

1. quEHRy: a question answering system to query electronic health records.
J Am Med Inform Assoc. 2023 May 19;30(6):1091-1102. doi: 10.1093/jamia/ocad050.
2. A Semantic Parsing Method for Mapping Clinical Questions to Logical Forms.
AMIA Annu Symp Proc. 2018 Apr 16;2017:1478-1487. eCollection 2017.
3. Toward a Neural Semantic Parsing System for EHR Question Answering.
AMIA Annu Symp Proc. 2023 Apr 29;2022:1002-1011. eCollection 2022.
4. Using FHIR to Construct a Corpus of Clinical Questions Annotated with Logical Forms and Answers.
AMIA Annu Symp Proc. 2020 Mar 4;2019:1207-1215. eCollection 2019.
5. SemBioNLQA: A semantic biomedical question answering system for retrieving exact and ideal answers to natural language questions.
Artif Intell Med. 2020 Jan;102:101767. doi: 10.1016/j.artmed.2019.101767. Epub 2019 Nov 28.
6. Ambiguity in medical concept normalization: An analysis of types and coverage in electronic health record datasets.
J Am Med Inform Assoc. 2021 Mar 1;28(3):516-532. doi: 10.1093/jamia/ocaa269.
7. Answering medical questions in Chinese using automatically mined knowledge and deep neural networks: an end-to-end solution.
BMC Bioinformatics. 2022 Apr 15;23(1):136. doi: 10.1186/s12859-022-04658-2.
8. A framework for ontology-based question answering with application to parasite immunology.
J Biomed Semantics. 2015 Jul 17;6:31. doi: 10.1186/s13326-015-0029-x. eCollection 2015.
9. Development and empirical user-centered evaluation of semantically-based query recommendation for an electronic health record search engine.
J Biomed Inform. 2017 Mar;67:1-10. doi: 10.1016/j.jbi.2017.01.013. Epub 2017 Jan 25.
10. Annotating Logical Forms for EHR Questions.
LREC Int Conf Lang Resour Eval. 2016 May;2016:3772-3778.

Cited By

1. SQL on FHIR - Tabular views of FHIR data using FHIRPath.
NPJ Digit Med. 2025 Jun 9;8(1):342. doi: 10.1038/s41746-025-01708-w.
2. Clinical insights: A comprehensive review of language models in medicine.
PLOS Digit Health. 2025 May 8;4(5):e0000800. doi: 10.1371/journal.pdig.0000800. eCollection 2025 May.
3. Question Answering for Electronic Health Records: Scoping Review of Datasets and Models.
J Med Internet Res. 2024 Oct 30;26:e53636. doi: 10.2196/53636.

References

1. Toward a Neural Semantic Parsing System for EHR Question Answering.
AMIA Annu Symp Proc. 2023 Apr 29;2022:1002-1011. eCollection 2022.
2. A BERT-Based Generation Model to Transform Medical Texts to SQL Queries for Electronic Medical Records: Model Development and Validation.
JMIR Med Inform. 2021 Dec 8;9(12):e32698. doi: 10.2196/32698.
3. Fine-grained spatial information extraction in radiology as two-turn question answering.
Int J Med Inform. 2021 Nov 6;158:104628. doi: 10.1016/j.ijmedinf.2021.104628.
4. Evaluation of patient-level retrieval from electronic health record data for a cohort discovery task.
JAMIA Open. 2020 Jul 26;3(3):395-404. doi: 10.1093/jamiaopen/ooaa026. eCollection 2020 Oct.
5. Question-driven summarization of answers to consumer health questions.
Sci Data. 2020 Oct 2;7(1):322. doi: 10.1038/s41597-020-00667-z.
6. Clinical concept extraction: A methodology review.
J Biomed Inform. 2020 Sep;109:103526. doi: 10.1016/j.jbi.2020.103526. Epub 2020 Aug 6.
7. The Impact of Specialized Corpora for Word Embeddings in Natural Langage Understanding.
Stud Health Technol Inform. 2020 Jun 16;270:432-436. doi: 10.3233/SHTI200197.
8. Association of Electronic Health Record Use With Physician Fatigue and Efficiency.
JAMA Netw Open. 2020 Jun 1;3(6):e207385. doi: 10.1001/jamanetworkopen.2020.7385.
9. Using FHIR to Construct a Corpus of Clinical Questions Annotated with Logical Forms and Answers.
AMIA Annu Symp Proc. 2020 Mar 4;2019:1207-1215. eCollection 2019.
10. Consumer health information and question answering: helping consumers find answers to their health-related information needs.
J Am Med Inform Assoc. 2020 Feb 1;27(2):194-201. doi: 10.1093/jamia/ocz152.

quEHRy: a question answering system to query electronic health records.

Affiliations

School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, Texas, USA.

Publication Information

J Am Med Inform Assoc. 2023 May 19;30(6):1091-1102. doi: 10.1093/jamia/ocad050.

DOI: 10.1093/jamia/ocad050
PMID: 37087111
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10198534/
Abstract

OBJECTIVE

We propose a system, quEHRy, to retrieve precise, interpretable answers to natural language questions from structured data in electronic health records (EHRs).

MATERIALS AND METHODS

We develop/synthesize the main components of quEHRy: concept normalization (MetaMap), time frame classification (new), semantic parsing (existing), visualization with question understanding (new), and query module for FHIR mapping/processing (new). We evaluate quEHRy on 2 clinical question answering (QA) datasets. We evaluate each component separately as well as holistically to gain deeper insights. We also conduct a thorough error analysis for a crucial subcomponent, medical concept normalization.
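The component chain above (concept normalization, time frame classification, semantic parsing, FHIR query construction) can be sketched as a simple pipeline. This is an illustrative toy, not the authors' implementation: the function names, the two-entry lexicon standing in for MetaMap, the keyword heuristic standing in for the time frame classifier, and the CUIs are all assumptions for demonstration.

```python
# Toy sketch of a quEHRy-style pipeline. Each stage is a stand-in for the
# real component named in the paper (MetaMap, classifier, semantic parser,
# FHIR query module); names and CUIs here are illustrative only.

def normalize_concepts(question: str) -> list:
    """Stand-in for MetaMap: map surface terms to concept identifiers."""
    lexicon = {"diabetes": "C0011849", "hemoglobin a1c": "C0019016"}
    q = question.lower()
    return [cui for term, cui in lexicon.items() if term in q]

def classify_time_frame(question: str) -> str:
    """Stand-in time frame classifier: keyword heuristic."""
    q = question.lower()
    return "latest" if ("last" in q or "latest" in q) else "history"

def parse_semantics(concepts: list, time_frame: str) -> str:
    """Stand-in semantic parser: emit a logical form over the concepts."""
    return f"lambda x. has_observation(x, {concepts}) @ {time_frame}"

def build_fhir_query(concepts: list, time_frame: str) -> str:
    """Stand-in FHIR mapping: a search over the Observation resource."""
    code = ",".join(concepts)
    sort = "&_sort=-date&_count=1" if time_frame == "latest" else ""
    return f"/Observation?code={code}{sort}"

def answer(question: str) -> str:
    concepts = normalize_concepts(question)
    time_frame = classify_time_frame(question)
    parse_semantics(concepts, time_frame)  # kept for interpretability
    return build_fhir_query(concepts, time_frame)

print(answer("What was the last hemoglobin A1c value?"))
```

The point of the sketch is the data flow: errors in the first stage (concept normalization) propagate into every later stage, which is exactly the failure mode the error analysis below examines.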

RESULTS

Using gold concepts, the precision of quEHRy was 98.33% and 90.91% on the 2 datasets, while the overall accuracy was 97.41% and 87.75%. Precision remained 94.03% and 87.79% even after employing an automated medical concept extraction system (MetaMap). Most incorrectly predicted medical concepts were broader than the gold-annotated concepts (representative of those present in EHRs), e.g., Diabetes versus Diabetes Mellitus, Non-Insulin-Dependent.
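Precision and overall accuracy differ here because a QA system can return a wrong or empty answer on some questions: precision is computed over the answers given, accuracy over all questions. A minimal numeric illustration (the counts are invented, not from the paper):

```python
# Toy illustration of precision vs. overall accuracy for a QA system.
# Counts below are invented for demonstration only.

def precision(correct: int, answered: int) -> float:
    """Fraction of returned answers that are correct."""
    return correct / answered

def accuracy(correct: int, total: int) -> float:
    """Fraction of all questions answered correctly."""
    return correct / total

correct, answered, total = 90, 95, 100
print(round(precision(correct, answered), 4))  # 0.9474
print(round(accuracy(correct, total), 4))      # 0.9
```

A high-precision, lower-accuracy profile (as reported above) means the system rarely gives a wrong answer when it answers at all, which the authors argue is the right trade-off for clinical use.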

DISCUSSION

The primary performance barrier to deployment of the system is due to errors in medical concept extraction (a component not studied in this article), which affects the downstream generation of correct logical structures. This indicates the need to build QA-specific clinical concept normalizers that understand EHR context to extract the "relevant" medical concepts from questions.
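The broader-versus-specific error mode can be made concrete with a tiny lookup example. The CUIs are used here purely for illustration and should not be taken as the paper's identifiers:

```python
# Toy illustration of the error mode above: the extractor returns the
# broader concept "Diabetes" while the EHR codes the specific
# "Diabetes Mellitus, Non-Insulin-Dependent", so an exact-match lookup
# against the record finds nothing. CUIs are illustrative placeholders.

ehr_coded = {"C0011860"}   # Diabetes Mellitus, Non-Insulin-Dependent (specific)
extracted = "C0011847"     # Diabetes (broader parent concept)

exact_hit = extracted in ehr_coded
print(exact_hit)  # False: the broader concept misses the specific EHR code
```

A QA-specific normalizer of the kind the authors call for would need to resolve the question term to the specific concept actually coded in the patient's record, not its nearest broad match.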

CONCLUSION

We present an end-to-end QA system that allows information access from EHRs using natural language and returns an exact, verifiable answer. Our proposed system is high-precision and interpretable, checking off the requirements for clinical use.
