Consulting the Digital Doctor: Efficacy of ChatGPT-3.5 in Answering Questions Related to Diabetic Foot Ulcer Care.

Author Information

Rohrich Rachel N, Li Karen R, Lava Christian X, Snee Isabel, Alahmadi Sami, Youn Richard C, Steinberg John S, Atves Jayson M, Attinger Christopher E, Evans Karen K

Affiliations

Department of Plastic and Reconstructive Surgery, MedStar Georgetown University Hospital, Washington DC.

Georgetown University School of Medicine, Washington DC.

Publication Information

Adv Skin Wound Care. 2025 Oct 1;38(9):E74-E80. doi: 10.1097/ASW.0000000000000317. Epub 2025 Jun 18.

Abstract

BACKGROUND

Diabetic foot ulcer (DFU) care is a challenge in reconstructive surgery. Artificial intelligence (AI) tools represent a new resource through which patients with DFUs can seek information.

OBJECTIVE

To evaluate the efficacy of ChatGPT-3.5 in responding to frequently asked questions related to DFU care.

METHODS

Researchers posed 11 DFU care questions to ChatGPT-3.5 in December 2023. Questions were divided into topic categories of wound care, concerning symptoms, and surgical management. Four plastic surgeons in the authors' wound care department evaluated responses on a 10-point Likert-type scale for accuracy, comprehensiveness, and danger, in addition to providing qualitative feedback. Readability was assessed using 10 readability indexes.
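The abstract does not name the 10 readability indexes used, but the grade-level result reported below is consistent with standard formulas such as the Flesch-Kincaid Grade Level. As a sketch only (the heuristics here, including the vowel-group syllable counter, are simplifications and are not the authors' actual instrument), the computation looks like this:

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentence) + 11.8 * (syllables/word) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Short, simple sentences score at an early grade level;
# long sentences with polysyllabic clinical vocabulary score much higher.
simple = "Keep your wound clean. Change the dressing daily."
print(round(flesch_kincaid_grade(simple), 1))
```

A score near 12, as reported in the results, corresponds to a high-school senior reading level, well above the roughly 8th-grade average reading level in the United States.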

RESULTS

ChatGPT-3.5 answered questions with a mean accuracy of 8.7±0.3, comprehensiveness of 8.0±0.7, and danger of 2.2±0.6. ChatGPT-3.5 answered at the mean grade level of 11.9±1.8. Physician reviewers complimented the simplicity of the responses (n=11/11) and the AI's ability to provide general information (n=4/11). Three responses presented incorrect information, and the majority of responses (n=10/11) left out key information, such as deep vein thrombosis symptoms and comorbid conditions impacting limb salvage.

CONCLUSIONS

The researchers observed that ChatGPT-3.5 provided misinformation, omitted crucial details, and responded nearly 4 grade levels above the American average reading level. However, ChatGPT-3.5 adequately provided general information, which may enable patients with DFUs to make more informed decisions and better engage in their care. Physicians must proactively address the potential benefits and limitations of AI.

