From the Clinic for Diagnostic and Interventional Radiology (M.A.F., A.B., M.M., J.K., L.D., C.P.H., H.U.K., T.F.W.) and Department of Radiation Oncology (C.A.F.), University Hospital Heidelberg, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; Translational Lung Research Center Heidelberg, Member of the German Center for Lung Research, Heidelberg, Germany (M.A.F., A.B., L.D., C.P.H., H.U.K., T.F.W.); and Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Heidelberg Thoracic Clinic, University of Heidelberg, Heidelberg, Germany (C.P.H.).
Radiology. 2023 Sep;308(3):e231362. doi: 10.1148/radiol.231362.
Background The latest large language models (LLMs) solve unseen problems via user-defined text prompts without the need for retraining, offering potentially more efficient information extraction from free-text medical records than manual annotation. Purpose To compare the performance of the LLMs ChatGPT and GPT-4 in data mining and labeling oncologic phenotypes from free-text CT reports on lung cancer by using user-defined prompts. Materials and Methods This retrospective study included patients who underwent lung cancer follow-up CT between September 2021 and March 2023. A subset of 25 reports was reserved for prompt engineering to instruct the LLMs in extracting lesion diameters, labeling metastatic disease, and assessing oncologic progression. This output was fed into a rule-based natural language processing pipeline to match ground truth annotations from four radiologists and derive performance metrics. The oncologic reasoning of the LLMs was rated on a five-point Likert scale for factual correctness and accuracy. The occurrence of confabulations was recorded. Statistical analyses included the Wilcoxon signed-rank and McNemar tests. Results On 424 CT reports from 424 patients (mean age, 65 years ± 11 [SD]; 265 male), GPT-4 outperformed ChatGPT in extracting lesion parameters (98.6% vs 84.0%, P < .001), resulting in 96% correctly mined reports (vs 67% for ChatGPT, P < .001). GPT-4 achieved higher accuracy in identifying metastatic disease (98.1% [95% CI: 97.7, 98.5] vs 90.3% [95% CI: 89.4, 91.0]) and higher performance in generating correct labels for oncologic progression (F1 score, 0.96 [95% CI: 0.94, 0.98] vs 0.91 [95% CI: 0.89, 0.94]) (both P < .001). In oncologic reasoning, GPT-4 had higher Likert scale scores for factual correctness (4.3 vs 3.9) and accuracy (4.4 vs 3.3), with a lower rate of confabulation (1.7% vs 13.7%) than ChatGPT (all P < .001). Conclusion When using user-defined prompts, GPT-4 outperformed ChatGPT in extracting oncologic phenotypes from free-text CT reports on lung cancer and demonstrated better oncologic reasoning with fewer confabulations. © RSNA, 2023 See also the editorial by Hafezi-Nejad and Trivedi in this issue.
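The abstract does not publish the study's prompts or evaluation code, but the described workflow (prompt-based structured extraction, comparison against radiologist ground truth, and a McNemar test on paired labels) can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the prompt wording, the placeholder query_llm function, and the toy labels are hypothetical and do not reproduce the authors' pipeline.

```python
"""Minimal sketch (not the authors' code): prompt-based extraction of oncologic
phenotypes from a free-text CT report, plus toy performance metrics and a
McNemar test comparing two models. All names and data here are illustrative."""
import json
from sklearn.metrics import accuracy_score, f1_score
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical user-defined prompt asking the LLM to return structured JSON.
PROMPT_TEMPLATE = (
    "Read the following lung cancer follow-up CT report. Return JSON with keys: "
    "lesion_diameters_mm (list of numbers), metastatic_disease (true/false), "
    "progression (true/false).\n\nReport:\n{report}"
)

def query_llm(report: str) -> dict:
    """Placeholder for an LLM API call. Returns a canned parsed response so the
    sketch runs offline; a real pipeline would send PROMPT_TEMPLATE to the model."""
    _ = PROMPT_TEMPLATE.format(report=report)
    return {"lesion_diameters_mm": [12.0], "metastatic_disease": False, "progression": False}

example_report = ("Follow-up chest CT: left upper lobe nodule measures 12 mm, "
                  "unchanged. No new pulmonary or osseous metastases.")
print("Extracted:", json.dumps(query_llm(example_report)))

# Toy paired data: radiologist ground truth and per-model progression labels.
ground_truth = [1, 0, 1, 1, 0, 1, 0, 0]
pred_model_a = [1, 0, 1, 1, 0, 1, 1, 0]  # e.g., GPT-4-style output
pred_model_b = [1, 0, 0, 1, 1, 1, 1, 0]  # e.g., ChatGPT-style output

print("Model A accuracy:", accuracy_score(ground_truth, pred_model_a),
      "F1:", f1_score(ground_truth, pred_model_a))
print("Model B accuracy:", accuracy_score(ground_truth, pred_model_b),
      "F1:", f1_score(ground_truth, pred_model_b))

# McNemar test on paired per-report correctness of the two models.
a_correct = [int(p == y) for p, y in zip(pred_model_a, ground_truth)]
b_correct = [int(p == y) for p, y in zip(pred_model_b, ground_truth)]
table = [[0, 0], [0, 0]]
for a, b in zip(a_correct, b_correct):
    table[a][b] += 1
print("McNemar p-value:", mcnemar(table, exact=True).pvalue)
```

In the study itself, the LLM output was additionally passed through a rule-based natural language processing step before being matched to the radiologist annotations; that step is omitted from this sketch.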