Using Artificial Intelligence ChatGPT to Access Medical Information About Chemical Eye Injuries: Comparative Study.

Authors

Alharbi Layan Yousef, Alrashoud Rema Rashed, Alotaibi Bader Shabib, Al Dera Abdulaziz Meshal, Alajlan Raghad Saleh, AlHuthail Reem Rashed, Alessa Dalal Ibrahim

Affiliations

College of Medicine, Imam Mohammad Ibn Saud Islamic University (IMSIU), Prince Mohammed Ibn Salman Ibn Abdulaziz Road, Riyadh, 13318, Saudi Arabia, 966 532816087.

Department of Ophthalmology, College of Medicine, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia.

Publication

JMIR Form Res. 2025 Aug 13;9:e73642. doi: 10.2196/73642.


DOI: 10.2196/73642
PMID: 40802972
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12349736/
Abstract

BACKGROUND: Chemical ocular injuries are a major public health issue. They cause eye damage from harmful chemicals and can lead to severe vision loss or blindness if not treated promptly and effectively. Although medical knowledge has advanced, accessing reliable and understandable information on these injuries remains a challenge. This is due to unverified web-based content and complex terminology. Artificial intelligence tools like ChatGPT provide a promising solution by simplifying medical information and making it more accessible to the general public.

OBJECTIVE: This study aims to assess the use of ChatGPT in providing reliable, accurate, and accessible medical information on chemical ocular injuries. It evaluates the correctness, thematic accuracy, and coherence of ChatGPT's responses compared with established medical guidelines and explores its potential for patient education.

METHODS: A total of 9 questions were entered into ChatGPT regarding various aspects of chemical ocular injuries. These included the definition, prevalence, etiology, prevention, symptoms, diagnosis, treatment, follow-up, and complications. The responses provided by ChatGPT were compared with the International Classification of Diseases-9 and International Classification of Diseases-10 guidelines for chemical (alkali and acid) injuries of the conjunctiva and cornea. The evaluation focused on criteria such as correctness, thematic accuracy, and coherence to assess the accuracy of ChatGPT's responses. The inputs were categorized into 3 distinct groups, and statistical analyses, including Flesch-Kincaid readability tests, ANOVA, and trend analysis, were conducted to assess their readability, complexity, and trends.

RESULTS: The results showed that ChatGPT provided accurate and coherent responses for most questions about chemical ocular injuries, demonstrating thematic relevance. However, the responses sometimes overlooked critical clinical details or guideline-specific elements, such as emphasizing the urgency of care, using precise classification systems, and addressing detailed diagnostic or management protocols. While the answers were generally valid, they occasionally included less relevant or overly generalized information. This reduced their consistency with established medical guidelines. The average Flesch Reading Ease Score was 33.84 (SD 2.97), indicating a fairly challenging reading level, while the Flesch-Kincaid Grade Level averaged 14.21 (SD 0.97), suitable for readers with college-level proficiency. The passive voice was used in 7.22% (SD 5.60%) of sentences, indicating moderate reliance. Statistical analysis showed no significant differences in the Flesch Reading Ease Score (P=.38), Flesch-Kincaid Grade Level (P=.55), or passive sentence use (P=.60) across categories, as determined by one-way ANOVA. Readability remained relatively constant across the 3 categories, as determined by trend analysis.

CONCLUSIONS: ChatGPT shows strong potential in providing accurate and relevant information about chemical ocular injuries. However, its language complexity may prevent accessibility for individuals with lower health literacy and sometimes miss critical aspects. Future improvements should focus on enhancing readability, increasing context-specific accuracy, and tailoring responses to a person's needs and literacy levels.
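
The readability analysis described in the METHODS and RESULTS can be reproduced in outline. The following is a minimal sketch, not the study's actual code: it assumes each ChatGPT response is available as a plain-text string, computes the Flesch Reading Ease score and Flesch-Kincaid Grade Level with the textstat package, and compares the three question categories with a one-way ANOVA from scipy. The category names and answer texts below are illustrative placeholders, not data from the study.

# Minimal sketch (not the authors' code): per-response readability metrics and a
# one-way ANOVA across question categories, as described in the abstract.
# Flesch Reading Ease  = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
# Flesch-Kincaid Grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
import textstat                   # pip install textstat
from scipy.stats import f_oneway  # pip install scipy

# Hypothetical grouping of ChatGPT answers into 3 categories; the labels and
# texts are placeholders standing in for the study's 9 question responses.
responses_by_category = {
    "overview": [
        "Chemical ocular injuries occur when acidic or alkaline agents contact the eye.",
        "Alkali agents tend to penetrate ocular tissue more deeply than acids.",
    ],
    "diagnosis": [
        "Symptoms include pain, redness, tearing, and decreased vision.",
        "Assessment typically involves pH testing and slit-lamp examination.",
    ],
    "management": [
        "Immediate, copious irrigation is the most important first-aid step.",
        "Follow-up monitors for complications such as corneal scarring.",
    ],
}

# Per-response scores, grouped by category.
fre = {cat: [textstat.flesch_reading_ease(t) for t in texts]
       for cat, texts in responses_by_category.items()}
fkgl = {cat: [textstat.flesch_kincaid_grade(t) for t in texts]
        for cat, texts in responses_by_category.items()}

# One-way ANOVA across the 3 categories (the paper reports P=.38 for FRE, P=.55 for FKGL).
f_fre, p_fre = f_oneway(*fre.values())
f_fkgl, p_fkgl = f_oneway(*fkgl.values())
print("Flesch Reading Ease by category:", fre)
print(f"ANOVA, Flesch Reading Ease: F={f_fre:.2f}, P={p_fre:.2f}")
print(f"ANOVA, Flesch-Kincaid Grade: F={f_fkgl:.2f}, P={p_fkgl:.2f}")

With the study's actual responses substituted for the placeholders, this kind of comparison corresponds to the reported means of 33.84 (SD 2.97) for Flesch Reading Ease and 14.21 (SD 0.97) for Flesch-Kincaid Grade Level, with no significant differences between categories.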


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6a70/12349736/0e896be8d208/formative-v9-e73642-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6a70/12349736/5e6df0b9d86c/formative-v9-e73642-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6a70/12349736/b279c22f6512/formative-v9-e73642-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6a70/12349736/402e39092172/formative-v9-e73642-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6a70/12349736/1c450850162e/formative-v9-e73642-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6a70/12349736/8441fdde9b48/formative-v9-e73642-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6a70/12349736/d749815f3b17/formative-v9-e73642-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6a70/12349736/0eb09d76fcb1/formative-v9-e73642-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6a70/12349736/3097c75c3239/formative-v9-e73642-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6a70/12349736/ab75ec3740db/formative-v9-e73642-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6a70/12349736/bb77c72b7015/formative-v9-e73642-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6a70/12349736/2214966c7557/formative-v9-e73642-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6a70/12349736/9deff446b176/formative-v9-e73642-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6a70/12349736/082bd8360c0c/formative-v9-e73642-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6a70/12349736/c0723551f8a4/formative-v9-e73642-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6a70/12349736/a4fa9a8ba207/formative-v9-e73642-g016.jpg

Similar articles

[1] Using Artificial Intelligence ChatGPT to Access Medical Information About Chemical Eye Injuries: Comparative Study. JMIR Form Res. 2025-8-13
[2] Prescription of Controlled Substances: Benefits and Risks. 2025-1
[3] Evaluating ChatGPT's Utility in Biologic Therapy for Systemic Lupus Erythematosus: Comparative Study of ChatGPT and Google Web Search. JMIR Form Res. 2025-8-28
[4] Assessing ChatGPT's Educational Potential in Lung Cancer Radiotherapy From Clinician and Patient Perspectives: Content Quality and Readability Analysis. JMIR Cancer. 2025-8-13
[5] Evaluation of ChatGPT-4 as an Online Outpatient Assistant in Puerperal Mastitis Management: Content Analysis of an Observational Study. JMIR Med Inform. 2025-7-24
[6] Artificial Intelligence in Peripheral Artery Disease Education: A Battle Between ChatGPT and Google Gemini. Cureus. 2025-6-1
[7] American Academy of Orthopaedic Surgeons OrthoInfo provides more readable information regarding rotator cuff injury than ChatGPT. J ISAKOS. 2025-2-12
[8] Can Artificial Intelligence Improve the Readability of Patient Education Materials? Clin Orthop Relat Res. 2023-11-1
[9] Is Information About Musculoskeletal Malignancies From Large Language Models or Web Resources at a Suitable Reading Level for Patients? Clin Orthop Relat Res. 2025-2-1
[10] Evaluation of Information Provided by ChatGPT Versions on Traumatic Dental Injuries for Dental Students and Professionals. Dent Traumatol. 2025-8

