Evaluating ChatGPT's efficacy in assessing the safety of non-prescription medications and supplements in patients with kidney disease.

Authors

Sheikh Mohammad S, Barreto Erin F, Miao Jing, Thongprayoon Charat, Gregoire James R, Dreesman Benjamin, Erickson Stephen B, Craici Iasmina M, Cheungpasitporn Wisit

Affiliations

Department of Nephrology, Mayo Clinic Minnesota, Rochester, MN, USA.

Department of Pharmacy, Mayo Clinic Minnesota, Rochester, MN, USA.

Publication Information

Digit Health. 2024 Apr 17;10:20552076241248082. doi: 10.1177/20552076241248082. eCollection 2024 Jan-Dec.

Abstract

BACKGROUND

This study investigated the efficacy of ChatGPT-3.5 and ChatGPT-4 in assessing drug safety for patients with kidney disease, comparing their performance with Micromedex, a well-established drug information resource. Although non-prescription medications and supplements are often perceived as safe, they carry risks, particularly for patients with impaired kidney function. The study's goal was to evaluate both ChatGPT versions for their potential to support clinical decision-making for patients with kidney disease.

METHOD

The research involved analyzing 124 common non-prescription medications and supplements using ChatGPT-3.5 and ChatGPT-4 with queries about their safety for people with kidney disease. The AI responses were categorized as "generally safe," "potentially harmful," or "unknown toxicity." Simultaneously, these medications and supplements were assessed in Micromedex using similar categories, allowing for a comparison of the concordance between the two resources.
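At its core, the comparison described above is a label-matching exercise: each item receives a safety category from two sources, and concordance is the fraction of items on which the categories match. A minimal Python sketch of that computation follows; the function name and the toy labels are illustrative assumptions, not the study's actual data.

```python
# Hypothetical sketch of the concordance analysis: each medication or
# supplement gets a safety label from two sources, and agreement is the
# fraction of shared items on which the labels match.

CATEGORIES = {"generally safe", "potentially harmful", "unknown toxicity"}

def agreement_rate(labels_a, labels_b):
    """Fraction of overlapping items with identical labels in both sources."""
    items = labels_a.keys() & labels_b.keys()
    if not items:
        raise ValueError("no overlapping items to compare")
    matches = sum(labels_a[i] == labels_b[i] for i in items)
    return matches / len(items)

# Toy example (invented labels, not the study data):
micromedex = {"ibuprofen": "potentially harmful",
              "acetaminophen": "generally safe",
              "st_johns_wort": "unknown toxicity"}
chatgpt = {"ibuprofen": "potentially harmful",
           "acetaminophen": "generally safe",
           "st_johns_wort": "generally safe"}

print(round(agreement_rate(micromedex, chatgpt), 3))  # 2 of 3 labels agree
```

The same function applied per subgroup (medications vs. supplements) would yield the stratified concordance rates reported in the Results.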

RESULTS

Micromedex identified 85 (68.5%) medications as generally safe, 35 (28.2%) as potentially harmful, and 4 (3.2%) of unknown toxicity. ChatGPT-3.5 identified 89 (71.8%) as generally safe, 11 (8.9%) as potentially harmful, and 24 (19.3%) of unknown toxicity. ChatGPT-4 identified 82 (66.1%) as generally safe, 29 (23.4%) as potentially harmful, and 13 (10.5%) of unknown toxicity. Overall agreement with Micromedex was 64.5% for ChatGPT-3.5, while ChatGPT-4 demonstrated higher agreement at 81.4%. Notably, ChatGPT-3.5's suboptimal performance was driven primarily by a lower concordance rate for supplements (60.3%). This discrepancy may reflect limited data on supplements within ChatGPT-3.5, with supplements constituting 80% of the items it classified as of unknown toxicity.
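As a quick arithmetic check, the percentages above follow from the raw counts over the 124 items (one cell, 24/124 = 19.35%, rounds to 19.4% rather than the abstract's 19.3%, a within-0.1-point rounding difference):

```python
# Reproducing the reported category shares from the raw counts (n = 124).
counts = {
    "Micromedex":  {"generally safe": 85, "potentially harmful": 35, "unknown toxicity": 4},
    "ChatGPT-3.5": {"generally safe": 89, "potentially harmful": 11, "unknown toxicity": 24},
    "ChatGPT-4":   {"generally safe": 82, "potentially harmful": 29, "unknown toxicity": 13},
}

for source, dist in counts.items():
    total = sum(dist.values())  # each source classified all 124 items
    shares = {cat: round(100 * n / total, 1) for cat, n in dist.items()}
    print(source, shares)
```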

CONCLUSION

ChatGPT's capabilities in evaluating the safety of non-prescription drugs and supplements for patients with kidney disease are modest compared with established drug information resources. Neither ChatGPT-3.5 nor ChatGPT-4 can currently be recommended as a reliable drug information source for this population. The results highlight the need for further improvements in the models' accuracy and reliability in the medical domain.

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/df5b/11025428/8cde175f371f/10.1177_20552076241248082-fig1.jpg

Similar Articles

1
Performance and exploration of ChatGPT in medical examination, records and education in Chinese: Pave the way for medical AI.
Int J Med Inform. 2023 Sep;177:105173. doi: 10.1016/j.ijmedinf.2023.105173. Epub 2023 Aug 4.
2
ChatGPT's performance in German OB/GYN exams - paving the way for AI-enhanced medical education and clinical practice.
Front Med (Lausanne). 2023 Dec 13;10:1296615. doi: 10.3389/fmed.2023.1296615. eCollection 2023.
3
ChatGPT versus NASS clinical guidelines for degenerative spondylolisthesis: a comparative analysis.
Eur Spine J. 2024 Nov;33(11):4182-4203. doi: 10.1007/s00586-024-08198-6. Epub 2024 Mar 15.
4
Enhanced Artificial Intelligence Strategies in Renal Oncology: Iterative Optimization and Comparative Analysis of GPT 3.5 Versus 4.0.
Ann Surg Oncol. 2024 Jun;31(6):3887-3893. doi: 10.1245/s10434-024-15107-0. Epub 2024 Mar 12.
5
Are Different Versions of ChatGPT's Ability Comparable to the Clinical Diagnosis Presented in Case Reports? A Descriptive Study.
J Multidiscip Healthc. 2023 Dec 6;16:3825-3831. doi: 10.2147/JMDH.S441790. eCollection 2023.

Cited By

1
Navigating the potential and pitfalls of large language models in patient-centered medication guidance and self-decision support.
Front Med (Lausanne). 2025 Jan 23;12:1527864. doi: 10.3389/fmed.2025.1527864. eCollection 2025.
