Evaluating Expert-Layperson Agreement in Identifying Jargon Terms in Electronic Health Record Notes: Observational Study.

Affiliations

Department of Information Technology, Analytics, and Operations, Mendoza College of Business, University of Notre Dame, Notre Dame, IN, United States.

Center for Biomedical and Health Research in Data Sciences, Miner School of Computer and Information Sciences, University of Massachusetts Lowell, Lowell, MA, United States.

Publication Information

J Med Internet Res. 2024 Oct 15;26:e49704. doi: 10.2196/49704.

Abstract

BACKGROUND

Studies have shown that patients have difficulty understanding medical jargon in electronic health record (EHR) notes, particularly patients with low health literacy. In creating the NoteAid dictionary of medical jargon for patients, a panel of medical experts selected terms they perceived as needing definitions for patients.

OBJECTIVE

This study aims to determine whether experts and laypeople agree on what constitutes medical jargon.

METHODS

Using an observational study design, we compared the ability of medical experts and laypeople to identify medical jargon in EHR notes. The laypeople were recruited from Amazon Mechanical Turk. Participants were shown 20 sentences from EHR notes, which contained 325 potential jargon terms as identified by the medical experts. We collected demographic information about the laypeople's age, sex, race or ethnicity, education, native language, and health literacy. Health literacy was measured with the Single Item Literacy Screener. Our evaluation metrics were the proportion of terms rated as jargon, sensitivity, specificity, Fleiss κ for agreement among medical experts and among laypeople, and the Kendall rank correlation statistic between the medical experts and laypeople. We performed subgroup analyses by layperson characteristics. We fit a beta regression model with a logit link to examine the association between layperson characteristics and whether a term was classified as jargon.
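The Methods name several standard agreement and accuracy statistics. Below is a minimal Python sketch of how such metrics can be computed on data of this shape, using synthetic 0/1 ratings rather than the study's actual data. The rater counts are inferred from the denominators in the Results (1950 = 325 × 6 experts; 87,750 = 325 × 270 laypeople), and the majority-vote gold standard is an assumed operationalization, not necessarily the paper's exact procedure.

```python
# A minimal sketch, not the study's analysis code: synthetic 0/1 ratings
# stand in for the real data, which are not reproduced here.
import numpy as np
from scipy.stats import kendalltau
from statsmodels.stats.inter_rater import fleiss_kappa

rng = np.random.default_rng(0)

# rows = 325 candidate terms, columns = raters; 1 = "jargon", 0 = "not jargon"
expert_votes = rng.integers(0, 2, size=(325, 6))   # 6 experts (inferred count)
lay_votes = rng.integers(0, 2, size=(325, 270))    # 270 laypeople (inferred count)

def fleiss_table(votes):
    """Convert a terms-by-raters 0/1 matrix into the per-category count
    table that statsmodels' fleiss_kappa expects."""
    ones = votes.sum(axis=1)
    return np.column_stack([votes.shape[1] - ones, ones])

print("expert Fleiss kappa:", fleiss_kappa(fleiss_table(expert_votes)))
print("lay Fleiss kappa:   ", fleiss_kappa(fleiss_table(lay_votes)))

# Kendall rank correlation between the two groups' per-term jargon rates.
tau, _ = kendalltau(expert_votes.mean(axis=1), lay_votes.mean(axis=1))
print("Kendall tau:", tau)

# Expert sensitivity/specificity against a layperson majority-vote gold
# standard (an assumed reading of "laypeople's identification as gold standard").
gold = lay_votes.mean(axis=1) >= 0.5
pred = expert_votes.mean(axis=1) >= 0.5
sensitivity = (pred & gold).sum() / gold.sum()
specificity = (~pred & ~gold).sum() / (~gold).sum()
print(f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")
```

On random synthetic votes the κ values will be near zero; the point of the sketch is the metric pipeline, not the reported numbers.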

RESULTS

The average proportion of terms identified as jargon by the medical experts was 59% (1150/1950, 95% CI 56.1%-61.8%), and the average proportion of terms identified as jargon by the laypeople overall was 25.6% (22,480/87,750, 95% CI 25%-26.2%). There was good agreement among medical experts (Fleiss κ=0.781, 95% CI 0.753-0.809) and fair agreement among laypeople (Fleiss κ=0.590, 95% CI 0.589-0.591). The beta regression model had a pseudo-R² of 0.071, indicating that demographic characteristics explained very little of the variability in the proportion of terms identified as jargon by laypeople. Using laypeople's identification of jargon as the gold standard, the medical experts had high sensitivity (91.7%, 95% CI 90.1%-93.3%) and specificity (88.2%, 95% CI 86%-90.5%) in identifying jargon terms.
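For readers unfamiliar with the pseudo-R² reported above, the following sketch fits a beta regression with a logit mean link using statsmodels' BetaModel and computes the Ferrari-Cribari-Neto pseudo-R². The data, the covariates, the per-person framing of the response, and the choice of pseudo-R² variant are all illustrative assumptions; the abstract does not specify the study's exact model inputs.

```python
# A hedged sketch of a beta regression with a logit link on synthetic
# data; this is not the study's model specification.
import numpy as np
from statsmodels.othermod.betareg import BetaModel

rng = np.random.default_rng(1)
n = 270  # layperson count inferred from 87,750 / 325

# Hypothetical covariates and per-person proportions of terms flagged as
# jargon; beta regression requires responses strictly inside (0, 1).
age = rng.integers(18, 75, size=n)
low_literacy = rng.integers(0, 2, size=n)
y = np.clip(rng.beta(2, 6, size=n), 0.01, 0.99)

X = np.column_stack([np.ones(n), age, low_literacy])
res = BetaModel(y, X).fit()  # the logit mean link is the default

# Ferrari & Cribari-Neto pseudo-R²: squared correlation between the
# linear predictor and the logit-transformed response (one common
# variant; the abstract does not say which one was used).
eta = X @ res.params[: X.shape[1]]  # trailing params model the precision
pseudo_r2 = np.corrcoef(eta, np.log(y / (1 - y)))[0, 1] ** 2
print(f"pseudo-R2 = {pseudo_r2:.3f}")
```

A pseudo-R² near 0.07, as reported, would mean the demographic covariates carry little of the between-person variation in jargon identification.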

CONCLUSIONS

To ensure coverage of possible jargon terms, the medical experts were deliberately liberal in selecting terms for inclusion. The fair agreement among laypeople shows that this breadth is needed, as laypeople hold a variety of opinions about what counts as jargon. We showed that medical experts could accurately identify jargon terms for annotation that would be useful to laypeople.

