Institute for Systems Biology, Seattle, WA, United States.
Providence Health & Services, Renton, WA, United States.
J Med Internet Res. 2024 Nov 19;26:e63445. doi: 10.2196/63445.
BACKGROUND: Social determinants of health (SDoH) such as housing insecurity are known to be intricately linked to patients' health status. More efficient methods for abstracting structured data on SDoH can help accelerate the inclusion of exposome variables in biomedical research and support health care systems in identifying patients who could benefit from proactive outreach. Large language models (LLMs) developed from Generative Pre-trained Transformers (GPTs) have shown potential for performing complex abstraction tasks on unstructured clinical notes.

OBJECTIVE: Here, we assess the performance of GPTs at identifying temporal aspects of housing insecurity and compare results between original and deidentified notes.

METHODS: We compared the ability of GPT-3.5 and GPT-4 to identify instances of both current and past housing instability, as well as general housing status, in 25,217 notes from 795 pregnant women. Results were compared with manual abstraction, a named entity recognition model, and regular expressions.

RESULTS: Compared with GPT-3.5 and the named entity recognition model, GPT-4 had the highest performance; it had much higher recall (0.924) than human abstractors (0.702) in identifying patients experiencing current or past housing instability, although its precision was lower (0.850 vs 0.971 for human abstractors). On deidentified versions of the same notes, GPT-4's precision improved slightly (0.936 original, 0.939 deidentified), while recall dropped (0.781 original, 0.704 deidentified).

CONCLUSIONS: This work demonstrates that while manual abstraction is likely to yield slightly more accurate results overall, LLMs can provide a scalable, cost-effective solution with the advantage of greater recall. This could support semiautomated abstraction, but given the potential risk for harm, human review would be essential before using results for any patient engagement or care decisions. Furthermore, recall was lower when notes were deidentified prior to LLM abstraction.
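The precision and recall figures reported above follow the standard definitions over true positives, false positives, and false negatives. A minimal sketch of those definitions, with the function name and counts being illustrative rather than taken from the study:

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Compute precision and recall from confusion-matrix counts.

    precision = TP / (TP + FP): of the patients an abstractor flagged
    as housing insecure, the fraction that truly were.
    recall = TP / (TP + FN): of the truly housing-insecure patients,
    the fraction the abstractor flagged.
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall


# Illustrative counts only (not the study's actual confusion matrix):
p, r = precision_recall(tp=90, fp=16, fn=7)
```

Under these definitions, GPT-4's higher recall but lower precision relative to human abstractors means it misses fewer truly housing-insecure patients at the cost of more false flags, which is consistent with the authors' recommendation of human review before any care decision.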