

Can ChatGPT provide quality information about fever in children?

Authors

Kacer Emine Ozdemir, Ipekten Funda

Affiliations

Department of Pediatrics, Aksaray University, Faculty of Medicine, Aksaray, Turkey.

Department of Biostatistics, Erciyes University, Faculty of Medicine, Kayseri, Turkey.

Publication

J Paediatr Child Health. 2025 Jan;61(1):60-65. doi: 10.1111/jpc.16710. Epub 2024 Oct 29.

DOI:10.1111/jpc.16710
PMID:39470298
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11701195/
Abstract

BACKGROUND

Artificial intelligence (AI) systems hold great promise in improving medical care and health problems.

AIM

We aimed to evaluate ChatGPT's answers to the most frequently asked questions about the prediction and treatment of fever, a major problem in children.

METHODS

We identified the 50 most frequently asked questions about fever in children and posed them to ChatGPT, then evaluated the responses using quality and readability scales.

RESULTS

ChatGPT's responses demonstrated good quality, and they also scored well on the readability scales and the Patient Education Materials Assessment Tool (PEMAT) used for materials appearing online. Among the scales used to evaluate ChatGPT's responses, a weak positive correlation was found between Gunning Fog (GFOG) and Simple Measure of Gobbledygook (SMOG) scores (r = 0.379), and a strong positive correlation was found between Flesch-Kincaid Grade Level (FGL) and SMOG scores (r = 0.899).

CONCLUSION

This study sheds light on the quality and readability of information provided by AI tools such as ChatGPT regarding fever, a common complaint in children. We found that the answers to the most frequently asked questions about fever were high-quality, reliable, easy to read, and understandable.
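The readability indices named above (Gunning Fog, SMOG, Flesch-Kincaid Grade Level) are simple formulas over sentence, word, and syllable counts. As an illustrative sketch only (not the study's own scoring pipeline), the standard published formulas can be computed like this; the syllable counter is a naive vowel-run heuristic, so results will differ somewhat from dictionary-based tools:

```python
import math
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count runs of vowels; real tools use dictionaries.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability_scores(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    n_sent, n_words = len(sentences), len(words)
    syllables = [count_syllables(w) for w in words]
    # Words of 3+ syllables count as "complex"/polysyllabic in both indices.
    poly = sum(1 for s in syllables if s >= 3)

    gfog = 0.4 * (n_words / n_sent + 100 * poly / n_words)
    smog = 1.0430 * math.sqrt(poly * (30 / n_sent)) + 3.1291
    fgl = 0.39 * (n_words / n_sent) + 11.8 * (sum(syllables) / n_words) - 15.59
    return {"GFOG": round(gfog, 2), "SMOG": round(smog, 2), "FGL": round(fgl, 2)}
```

Higher scores on all three indices mean the text demands more years of schooling to read, which is why studies like this one report them alongside quality scales.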


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0078/11701195/bfcc44669d8d/JPC-61-60-g001.jpg


