Validity of the large language model ChatGPT (GPT4) as a patient information source in otolaryngology by a variety of doctors in a tertiary otorhinolaryngology department.

Affiliation

Department of Otorhinolaryngology-Head and Neck Surgery and Audiology, Copenhagen University Hospital, Rigshospitalet, Copenhagen, Denmark.

Publication

Acta Otolaryngol. 2023 Sep;143(9):779-782. doi: 10.1080/00016489.2023.2254809. Epub 2023 Sep 11.

DOI: 10.1080/00016489.2023.2254809
PMID: 37694729
Abstract

BACKGROUND

A large number of patients seek health information online, and large language models (LLMs) may come to produce an increasing share of it.

AIM

This study evaluates the quality of health information provided by ChatGPT, an LLM developed by OpenAI, focusing on its utility as a source of otolaryngology-related patient information.

MATERIAL AND METHOD

A variety of doctors from a tertiary otorhinolaryngology department used a Likert scale to rate the chatbot's responses for accuracy, relevance, and depth. The responses were also evaluated by ChatGPT itself.

RESULTS

The composite mean of the three categories was 3.41, with the highest performance noted in the relevance category (mean = 3.71) when evaluated by the respondents. The accuracy and depth categories yielded mean scores of 3.51 and 3.00, respectively. All the categories were rated as 5 when evaluated by ChatGPT.
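As a quick sanity check (assuming the composite score is simply the unweighted mean of the three category means, which the abstract does not state explicitly), the reported figures are internally consistent:

```python
# Sanity check on the abstract's reported scores. Assumption: the composite
# is the unweighted mean of the three category means; the paper may have
# computed it differently (e.g. averaging over individual ratings).
category_means = {"accuracy": 3.51, "relevance": 3.71, "depth": 3.00}

composite = sum(category_means.values()) / len(category_means)
print(round(composite, 2))  # 3.41, matching the reported composite mean
```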

CONCLUSION AND SIGNIFICANCE

Despite its potential to provide relevant and accurate medical information, the chatbot's responses lacked depth and may perpetuate biases stemming from its training on publicly available text. In conclusion, while LLMs show promise in healthcare, further refinement is necessary to enhance response depth and mitigate potential biases.


Similar Articles

1. Validity of the large language model ChatGPT (GPT4) as a patient information source in otolaryngology by a variety of doctors in a tertiary otorhinolaryngology department.
Acta Otolaryngol. 2023 Sep;143(9):779-782. doi: 10.1080/00016489.2023.2254809. Epub 2023 Sep 11.
2. A Novel Evaluation Model for Assessing ChatGPT on Otolaryngology-Head and Neck Surgery Certification Examinations: Performance Study.
JMIR Med Educ. 2024 Jan 16;10:e49970. doi: 10.2196/49970.
3. Triage Performance Across Large Language Models, ChatGPT, and Untrained Doctors in Emergency Medicine: Comparative Study.
J Med Internet Res. 2024 Jun 14;26:e53297. doi: 10.2196/53297.
4. ChatGPT Versus Consultants: Blinded Evaluation on Answering Otorhinolaryngology Case-Based Questions.
JMIR Med Educ. 2023 Dec 5;9:e49183. doi: 10.2196/49183.
5. Benchmarking large language models' performances for myopia care: a comparative analysis of ChatGPT-3.5, ChatGPT-4.0, and Google Bard.
EBioMedicine. 2023 Sep;95:104770. doi: 10.1016/j.ebiom.2023.104770. Epub 2023 Aug 23.
6. Physician Versus Large Language Model Chatbot Responses to Web-Based Questions From Autistic Patients in Chinese: Cross-Sectional Comparative Analysis.
J Med Internet Res. 2024 Apr 30;26:e54706. doi: 10.2196/54706.
7. Assessing unknown potential-quality and limitations of different large language models in the field of otorhinolaryngology.
Acta Otolaryngol. 2024 Mar;144(3):237-242. doi: 10.1080/00016489.2024.2352843. Epub 2024 May 23.
8. Comparative Performance of ChatGPT 3.5 and GPT4 on Rhinology Standardized Board Examination Questions.
OTO Open. 2024 Jun 27;8(2):e164. doi: 10.1002/oto2.164. eCollection 2024 Apr-Jun.
9. Can large language models pass official high-grade exams of the European Society of Neuroradiology courses? A direct comparison between OpenAI chatGPT 3.5, OpenAI GPT4 and Google Bard.
Neuroradiology. 2024 Aug;66(8):1245-1250. doi: 10.1007/s00234-024-03371-6. Epub 2024 May 6.
10. The Comparative Diagnostic Capability of Large Language Models in Otolaryngology.
Laryngoscope. 2024 Sep;134(9):3997-4002. doi: 10.1002/lary.31434. Epub 2024 Apr 2.

Cited By

1. Assessing ChatGPT's Educational Potential in Lung Cancer Radiotherapy From Clinician and Patient Perspectives: Content Quality and Readability Analysis.
JMIR Cancer. 2025 Aug 13;11:e69783. doi: 10.2196/69783.
2. Generative AI/LLMs for Plain Language Medical Information for Patients, Caregivers and General Public: Opportunities, Risks and Ethics.
Patient Prefer Adherence. 2025 Jul 31;19:2227-2249. doi: 10.2147/PPA.S527922. eCollection 2025.
3. Applications of Natural Language Processing in Otolaryngology: A Scoping Review.
Laryngoscope. 2025 Sep;135(9):3049-3063. doi: 10.1002/lary.32198. Epub 2025 May 1.
4. Enhancing patient-centered information on implant dentistry through prompt engineering: a comparison of four large language models.
Front Oral Health. 2025 Apr 7;6:1566221. doi: 10.3389/froh.2025.1566221. eCollection 2025.
5. Performance of ChatGPT in Pediatric Audiology as Rated by Students and Experts.
J Clin Med. 2025 Jan 28;14(3):875. doi: 10.3390/jcm14030875.
6. Large Language Models for Chatbot Health Advice Studies: A Systematic Review.
JAMA Netw Open. 2025 Feb 3;8(2):e2457879. doi: 10.1001/jamanetworkopen.2024.57879.
7. Current applications and challenges in large language models for patient care: a systematic review.
Commun Med (Lond). 2025 Jan 21;5(1):26. doi: 10.1038/s43856-024-00717-2.
8. Patient- and clinician-based evaluation of large language models for patient education in prostate cancer radiotherapy.
Strahlenther Onkol. 2025 Mar;201(3):333-342. doi: 10.1007/s00066-024-02342-3. Epub 2025 Jan 10.
9. Enhancing Multilingual Patient Education: ChatGPT's Accuracy and Readability for SSNHL Queries in English and Spanish.
OTO Open. 2024 Dec 11;8(4):e70048. doi: 10.1002/oto2.70048. eCollection 2024 Oct-Dec.
10. Assessing the accuracy and reproducibility of ChatGPT for responding to patient inquiries about otosclerosis.
Eur Arch Otorhinolaryngol. 2025 Mar;282(3):1567-1575. doi: 10.1007/s00405-024-09039-4. Epub 2024 Oct 26.