

AI-generated and doctors' answers to health-related questions.

Authors

Mork Tiril Egset, Mjøs Håkon Garnes, Nilsen Harald Giskegjerde, Kjelsrud Sindre, Lundervold Alexander Selvikvåg, Lundervold Arvid, Jammer Ib

Affiliations

Det medisinske fakultet, Universitetet i Bergen.

Høgskulen på Vestlandet.

Publication

Tidsskr Nor Laegeforen. 2025 Feb 10;145(2). doi: 10.4045/tidsskr.24.0402. Print 2025 Feb 11.

DOI: 10.4045/tidsskr.24.0402
PMID: 39932080
Abstract

BACKGROUND

Several studies have investigated how large language models answer health-related questions. In a study from 2023, responses to health-related questions in English generated by the language model GPT-3.5 were perceived as more empathetic and informative than responses from doctors. We wanted to apply the newer language model GPT-4 in Norwegian to investigate how respondents with a healthcare background rated responses to health-related questions from doctors and those generated by the language model.

MATERIAL AND METHOD

A total of 192 health-related questions with corresponding answers from doctors were sourced from the website Studenterspør.no. The language model GPT-4 was used to generate a new set of answers to the same questions. Both sets of answers were evaluated by 344 respondents with a background in health care. The respondents, who were blinded to whether the answer was generated by a doctor or the language model, were asked to rate the empathy, quality of information and helpfulness of the answers.

RESULTS

The survey consisted of 344 respondents and 192 questions. The average number of evaluations per answer was 5.7. There was a significant difference between doctors' answers and those generated by GPT-4 in terms of perceived empathy (p < 0.001), quality of information (p < 0.001) and helpfulness (p < 0.001).

INTERPRETATION

The answers generated by GPT-4 were rated as more empathetic, informative and helpful than the answers from doctors. This suggests that AI could serve as an aid to healthcare personnel by drafting good responses to health-related questions.
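The abstract reports significant rating differences (p < 0.001) but does not state which statistical test was used. As a purely illustrative sketch of how such a group comparison can be made, the following runs a two-sided permutation test on hypothetical 1–5 ratings (the data and the choice of test are assumptions, not the study's actual data or method):

```python
import random
import statistics

def permutation_test(a, b, n_iter=10_000, seed=0):
    """Approximate two-sided permutation test on the difference in means.

    Returns the fraction of random label shufflings whose mean
    difference is at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = statistics.mean(a) - statistics.mean(b)
    pooled = a + b  # copy; originals are untouched
    n_a = len(a)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:])
        if abs(diff) >= abs(observed):
            hits += 1
    return hits / n_iter

# Hypothetical empathy ratings on a 1-5 scale (illustrative only).
gpt4_ratings = [5, 4, 5, 4, 5, 5, 4, 5, 4, 5]
doctor_ratings = [3, 2, 4, 3, 3, 2, 3, 4, 3, 3]
p = permutation_test(gpt4_ratings, doctor_ratings)
print(f"mean difference: "
      f"{statistics.mean(gpt4_ratings) - statistics.mean(doctor_ratings):.2f}")
print(f"approximate p-value: {p:.4f}")
```

With ratings this cleanly separated the approximate p-value falls well below 0.05; a permutation test is one reasonable choice here because ordinal rating scales do not satisfy the normality assumptions of a t-test.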


Similar Articles

1. AI-generated and doctors' answers to health-related questions.
   Tidsskr Nor Laegeforen. 2025 Feb 10;145(2). doi: 10.4045/tidsskr.24.0402. Print 2025 Feb 11.
2. Quality of Answers of Generative Large Language Models Versus Peer Users for Interpreting Laboratory Test Results for Lay Patients: Evaluation Study.
   J Med Internet Res. 2024 Apr 17;26:e56655. doi: 10.2196/56655.
3. Assessing the Role of the Generative Pretrained Transformer (GPT) in Alzheimer's Disease Management: Comparative Study of Neurologist- and Artificial Intelligence-Generated Responses.
   J Med Internet Res. 2024 Oct 31;26:e51095. doi: 10.2196/51095.
4. Is the information provided by large language models valid in educating patients about adolescent idiopathic scoliosis? An evaluation of content, clarity, and empathy: The perspective of the European Spine Study Group.
   Spine Deform. 2025 Mar;13(2):361-372. doi: 10.1007/s43390-024-00955-3. Epub 2024 Nov 4.
5. Quality of Answers of Generative Large Language Models vs Peer Patients for Interpreting Lab Test Results for Lay Patients: Evaluation Study.
   ArXiv. 2024 Jan 23:arXiv:2402.01693v1.
6. Artificial Intelligence in Childcare: Assessing the Performance and Acceptance of ChatGPT Responses.
   Cureus. 2023 Aug 31;15(8):e44484. doi: 10.7759/cureus.44484. eCollection 2023 Aug.
7. Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum.
   JAMA Intern Med. 2023 Jun 1;183(6):589-596. doi: 10.1001/jamainternmed.2023.1838.
8. "Doctor ChatGPT, Can You Help Me?" The Patient's Perspective: Cross-Sectional Study.
   J Med Internet Res. 2024 Oct 1;26:e58831. doi: 10.2196/58831.
9. Comparison of Ophthalmologist and Large Language Model Chatbot Responses to Online Patient Eye Care Questions.
   JAMA Netw Open. 2023 Aug 1;6(8):e2330320. doi: 10.1001/jamanetworkopen.2023.30320.
10. Physician and Artificial Intelligence Chatbot Responses to Cancer Questions From Social Media.
   JAMA Oncol. 2024 Jul 1;10(7):956-960. doi: 10.1001/jamaoncol.2024.0836.

Cited By

1. GPT-4's capabilities for formative and summative assessments in Norwegian medicine exams - an intrinsic case study in the early phase of intervention.
   Front Med (Lausanne). 2025 Apr 10;12:1441747. doi: 10.3389/fmed.2025.1441747. eCollection 2025.