

ChatGPT vs. neurologists: a cross-sectional study investigating preference, satisfaction ratings and perceived empathy in responses among people living with multiple sclerosis.

Affiliations

Department of Advanced Medical and Surgical Sciences, University of Campania "Luigi Vanvitelli", Via Pansini 5, 80131, Naples, Italy.

Department of Molecular Medicine and Medical Biotechnology, Federico II University of Naples, Naples, Italy.

Publication information

J Neurol. 2024 Jul;271(7):4057-4066. doi: 10.1007/s00415-024-12328-x. Epub 2024 Apr 3.

Abstract

BACKGROUND

ChatGPT is an AI-based natural language processing chatbot that replies to users' queries. We conducted a cross-sectional study to assess the preferences, satisfaction, and perceived empathy of people living with Multiple Sclerosis (PwMS) toward two alternate responses to four frequently asked questions, one authored by a group of neurologists and the other by ChatGPT.

METHODS

An online form was distributed through digital communication platforms. PwMS were blinded to the author of each response and were asked to express their preference between the two alternate responses to each of the four questions. Overall satisfaction was assessed on a 1-5 Likert scale; perceived empathy was assessed with the Consultation and Relational Empathy (CARE) scale.
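
The abstract names the instruments but not how they were scored; the sketch below is a minimal, hypothetical illustration in Python, assuming the CARE measure's usual 10 items rated 1-5 with a small number of missing or "does not apply" items imputed from the respondent's mean. The helper care_total and all values are illustrative, not taken from the study.

```python
# Hedged illustration, not the authors' scoring code. The CARE measure is
# commonly scored as the sum of 10 items rated 1 ("poor") to 5 ("excellent");
# a few missing / "does not apply" items are often imputed from the mean of
# the answered items. Overall satisfaction is a single 1-5 Likert rating.
from statistics import mean

def care_total(items, max_missing=2):
    """Total CARE score (10-50). Items are 1-5 ints, or None for 'does not apply'."""
    answered = [r for r in items if r is not None]
    missing = len(items) - len(answered)
    if len(items) != 10 or missing > max_missing:
        return None  # not scorable: wrong item count or too many missing items
    return round(sum(answered) + missing * mean(answered))

print(care_total([4, 5, 4, None, 3, 5, 4, 4, 5, 4]))  # -> 42 (one item imputed)
```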

RESULTS

We included 1133 PwMS (age, 45.26 ± 11.50 years; females, 68.49%). ChatGPT's responses showed significantly higher empathy scores than neurologists' responses (Coeff = 1.38; 95% CI = 0.65, 2.11; p < 0.01). No association was found between ChatGPT's responses and mean satisfaction (Coeff = 0.03; 95% CI = -0.01, 0.07; p = 0.157). College graduates, compared with responders with a high-school education, were significantly less likely to prefer ChatGPT's responses (IRR = 0.87; 95% CI = 0.79, 0.95; p < 0.01).
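
The abstract reports a coefficient with a 95% CI for empathy and satisfaction and an incidence rate ratio (IRR) for preference, but it does not specify the statistical software or exact model specification. The sketch below is a minimal, hypothetical setup in Python with statsmodels, assuming a linear model for the empathy score with response author as the predictor (satisfaction could be handled the same way) and a Poisson model for the number of ChatGPT responses a participant preferred, so that exponentiated coefficients are IRRs. All column names and values are toy assumptions, and the real analysis would likely also account for repeated responses per participant.

```python
# Hypothetical analysis sketch; not the authors' code or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Collapsed toy table: a real dataset would keep response-level records
# (empathy, satisfaction per response) separate from participant-level ones.
df = pd.DataFrame({
    "empathy":     [42.0, 31.0, 45.0, 29.0, 44.0, 33.0],  # CARE-based score
    "chatgpt":     [1, 0, 1, 0, 1, 0],                     # 1 = ChatGPT-authored
    "n_preferred": [3, 1, 4, 2, 3, 1],                     # ChatGPT picks out of 4 questions
    "education":   ["college", "college", "high_school",
                    "high_school", "college", "high_school"],
})

# Empathy difference between ChatGPT- and neurologist-authored responses
# (the abstract reports Coeff = 1.38; 95% CI = 0.65, 2.11 for this contrast).
empathy_fit = smf.ols("empathy ~ chatgpt", data=df).fit()
print(empathy_fit.params["chatgpt"], empathy_fit.conf_int().loc["chatgpt"].tolist())

# Preference modeled as a count; exp(coefficient) is the IRR for college
# vs. high-school education (the abstract reports IRR = 0.87).
pref_fit = smf.poisson(
    "n_preferred ~ C(education, Treatment(reference='high_school'))", data=df
).fit(disp=False)
print(np.exp(pref_fit.params).round(2))
```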

CONCLUSIONS

ChatGPT-authored responses were rated as more empathetic than neurologists' responses. Although AI holds potential, physicians should prepare to interact with increasingly digitized patients and guide them in responsible AI use. Future development should consider tailoring AI responses to individual characteristics. Given the progressive digitalization of the population, ChatGPT could emerge as a helpful support in healthcare management rather than an alternative to it.


