Performance of ChatGPT in Pediatric Audiology as Rated by Students and Experts.

Authors

Ratuszniak Anna, Gos Elzbieta, Lorens Artur, Skarzynski Piotr Henryk, Skarzynski Henryk, Jedrzejczak W Wiktor

Affiliations

Institute of Physiology and Pathology of Hearing, Mochnackiego 10 Street, 02-042 Warsaw, Poland.

World Hearing Center, Mokra 17 Street, 05-830 Kajetany, Poland.

Publication

J Clin Med. 2025 Jan 28;14(3):875. doi: 10.3390/jcm14030875.

DOI: 10.3390/jcm14030875
PMID: 39941547
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11818674/
Abstract

Background: Despite the growing popularity of artificial intelligence (AI)-based systems such as ChatGPT, there is still little evidence of their effectiveness in audiology, particularly in pediatric audiology. The present study aimed to verify the performance of ChatGPT in this field, as assessed by both students and professionals, and to compare its Polish and English versions. Methods: ChatGPT was presented with 20 questions, which were posed twice, first in Polish and then in English. A group of 20 students and 16 professionals in the field of audiology and otolaryngology rated the answers on a Likert scale of 1 to 5 in terms of correctness, relevance, completeness, and linguistic accuracy. Both groups were also asked to assess the usefulness of ChatGPT as a source of information for patients, in educational settings for students, and in professional work. Results: Both students and professionals generally rated ChatGPT's responses to be satisfactory. For most of the questions, ChatGPT's responses were rated somewhat higher by the students than the professionals, although statistically significant differences were only evident for completeness and linguistic accuracy. Those who rated ChatGPT's responses more highly also rated its usefulness more highly. Conclusions: ChatGPT can possibly be used for quick information retrieval, especially by non-experts, but it lacks the depth and reliability required by professionals. The different ratings given by students and professionals, and its language dependency, indicate it works best as a supplementary tool, not as a replacement for verifiable sources, particularly in a healthcare setting.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2ae5/11818674/dbee2d6ef30d/jcm-14-00875-g001.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2ae5/11818674/50ab4a4455ec/jcm-14-00875-g002.jpg

Similar Articles

1. Performance of ChatGPT in Pediatric Audiology as Rated by Students and Experts.
J Clin Med. 2025 Jan 28;14(3):875. doi: 10.3390/jcm14030875.
2. ChatGPT for Tinnitus Information and Support: Response Accuracy and Retest after Three and Six Months.
Brain Sci. 2024 May 7;14(5):465. doi: 10.3390/brainsci14050465.
3. ChatGPT's performance in German OB/GYN exams - paving the way for AI-enhanced medical education and clinical practice.
Front Med (Lausanne). 2023 Dec 13;10:1296615. doi: 10.3389/fmed.2023.1296615. eCollection 2023.
4. Performance of ChatGPT on the Chinese Postgraduate Examination for Clinical Medicine: Survey Study.
JMIR Med Educ. 2024 Feb 9;10:e48514. doi: 10.2196/48514.
5. Screening/diagnosis of pediatric endocrine disorders through the artificial intelligence model in different language settings.
Eur J Pediatr. 2024 Jun;183(6):2655-2661. doi: 10.1007/s00431-024-05527-1. Epub 2024 Mar 19.
6. Assessing question characteristic influences on ChatGPT's performance and response-explanation consistency: Insights from Taiwan's Nursing Licensing Exam.
Int J Nurs Stud. 2024 May;153:104717. doi: 10.1016/j.ijnurstu.2024.104717. Epub 2024 Feb 8.
7. "Doctor ChatGPT, Can You Help Me?" The Patient's Perspective: Cross-Sectional Study.
J Med Internet Res. 2024 Oct 1;26:e58831. doi: 10.2196/58831.
8. Artificial intelligence large language model ChatGPT: is it a trustworthy and reliable source of information for sarcoma patients?
Front Public Health. 2024 Mar 22;12:1303319. doi: 10.3389/fpubh.2024.1303319. eCollection 2024.
9. ChatGPT Versus Consultants: Blinded Evaluation on Answering Otorhinolaryngology Case-Based Questions.
JMIR Med Educ. 2023 Dec 5;9:e49183. doi: 10.2196/49183.
10. An assessment of ChatGPT's responses to frequently asked questions about cervical and breast cancer.
BMC Womens Health. 2024 Sep 2;24(1):482. doi: 10.1186/s12905-024-03320-8.
