Gondode Prakash, Duggal Sakshi, Garg Neha, Lohakare Pooja, Jakhar Jubin, Bharti Swati, Dewangan Shraddha
Department of Anaesthesiology, Pain medicine and Critical Care, All India Institute of Medical Sciences (AIIMS), New Delhi, India.
Department of Microbiology, Mahatma Gandhi Institute of Medical Sciences (MGIMS), Wardha, India.
Br Ir Orthopt J. 2024 Aug 19;20(1):183-192. doi: 10.22599/bioj.377. eCollection 2024.
BACKGROUND AND AIM: Eye surgeries often evoke strong negative emotions in patients, including fear and anxiety. Patient education material plays a crucial role in informing and empowering individuals, but traditional sources of medical information may not effectively address individual patient concerns or cater to varying levels of understanding. This study conducts a comparative analysis of the accuracy, completeness, readability, tone, and understandability of patient education material generated by AI chatbots versus a traditional Patient Information Leaflet (PIL), focusing on local anaesthesia in eye surgery.

METHODS: Expert reviewers evaluated responses generated by two AI chatbots (ChatGPT and Google Gemini) and a traditional PIL (the Royal College of Anaesthetists' leaflet) for accuracy, completeness, readability, sentiment, and understandability. Statistical analyses, including ANOVA and Tukey HSD tests, were conducted to compare the performance of the sources.

RESULTS: Readability analysis showed variations in complexity among the sources: the AI chatbots offered simplified language, while the PIL maintained better overall readability and accessibility. Sentiment analysis revealed differences in emotional tone, with Google Gemini exhibiting the most positive sentiment. The AI chatbots demonstrated superior understandability and actionability, while the PIL excelled in completeness. ChatGPT scored slightly higher than Google Gemini on accuracy (mean ± standard deviation: 4.71 ± 0.50 vs 4.61 ± 0.62) and completeness (4.55 ± 0.58 vs 4.47 ± 0.58), but the PIL performed best on both accuracy (4.84 ± 0.37) and completeness (4.88 ± 0.33; p < 0.05 for completeness).

CONCLUSION: AI chatbots show promise as innovative tools for patient education, complementing traditional PILs. By leveraging the strengths of both AI-driven technologies and human expertise, healthcare providers can enhance patient education and empower individuals to make informed decisions about their health and medical care.
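For readers who want to reproduce this style of evaluation, the sketch below illustrates one way the analyses named in METHODS could be run in Python: a Flesch Reading Ease score for readability, VADER for sentiment, and one-way ANOVA with a Tukey HSD post-hoc test on expert ratings. The library choices (textstat, NLTK, SciPy, statsmodels) and all texts and scores are illustrative assumptions; the paper does not specify its tooling or release its data.

# Minimal sketch, assuming textstat, NLTK (VADER), SciPy, and statsmodels;
# the study does not name its analysis tools. Texts and reviewer scores
# below are placeholders for illustration, not study data.
import textstat
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

nltk.download("vader_lexicon", quiet=True)  # lexicon required by VADER

texts = {
    "ChatGPT": "Local anaesthesia numbs the eye so you stay awake but feel no pain.",
    "Gemini": "Your eye is gently numbed, and the team keeps you comfortable throughout.",
    "PIL": "A local anaesthetic blocks pain signals from the eye during surgery.",
}

# Readability: Flesch Reading Ease (higher = easier to read).
for name, text in texts.items():
    print(name, "readability:", textstat.flesch_reading_ease(text))

# Sentiment: VADER compound score in [-1, 1]; higher = more positive tone.
sia = SentimentIntensityAnalyzer()
for name, text in texts.items():
    print(name, "sentiment:", sia.polarity_scores(text)["compound"])

# Expert ratings (e.g., completeness), one score per reviewer per source.
chatgpt = [4.5, 5.0, 4.0, 4.5]
gemini = [4.0, 4.5, 4.5, 4.0]
pil = [5.0, 5.0, 4.5, 5.0]

# One-way ANOVA across the three sources, then Tukey HSD for pairwise contrasts.
print(f_oneway(chatgpt, gemini, pil))
print(pairwise_tukeyhsd(chatgpt + gemini + pil,
                        ["ChatGPT"] * 4 + ["Gemini"] * 4 + ["PIL"] * 4))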