Nasra Mohamed, Jaffri Rimsha, Pavlin-Premrl Davor, Kok Hong Kuan, Khabaza Ali, Barras Christen, Slater Lee-Anne, Yazdabadi Anousha, Moore Justin, Russell Jeremy, Smith Paul, Chandra Ronil V, Brooks Mark, Jhamb Ashu, Chong Winston, Maingard Julian, Asadi Hamed
Department of Medicine, Northern Health, Melbourne, Victoria, Australia.
Melbourne Medical School, The University of Melbourne, Melbourne, Victoria, Australia.
Intern Med J. 2025 Jan;55(1):20-34. doi: 10.1111/imj.16607. Epub 2024 Dec 25.
Enhancing patients' comprehension of their own health is crucial to improving health outcomes. The integration of artificial intelligence (AI) to distil medical information into a conversational, legible format can potentially enhance health literacy. This review aims to examine the accuracy, reliability, comprehensiveness and readability of medical patient education materials (PEMs) simplified by AI models. A systematic review was conducted, searching for articles that assessed outcomes of AI use in simplifying PEMs. Inclusion criteria were as follows: publication between January 2019 and June 2023, any modality of AI, English language, AI use in PEMs, and involvement of physicians and/or patients. An inductive thematic approach was used to code unifying topics, which were then qualitatively analysed. Twenty studies were included, and seven themes were identified (reproducibility; accessibility and ease of use; emotional support and user satisfaction; readability; data security; accuracy and reliability; and comprehensiveness). AI effectively simplified PEMs, with reproducibility rates of up to 90.7% in specific domains. User satisfaction exceeded 85% for AI-generated materials. AI models showed promising readability improvements, with ChatGPT achieving 100% post-simplification readability scores. AI's performance in accuracy and reliability was mixed, with occasional inaccuracies and lack of comprehensiveness, particularly when addressing complex medical topics. AI models accurately simplified basic tasks but lacked soft skills and personalisation. These limitations can be addressed with higher-calibre models combined with prompt engineering. In conclusion, the literature reveals scope for AI to enhance patient health literacy through medical PEMs. Further refinement is needed to improve AI's accuracy and reliability, especially when simplifying complex medical information.
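The abstract reports readability scores before and after AI simplification but does not state which formula the included studies applied; as a rough, hypothetical illustration only, the sketch below scores a sample patient-facing sentence with the Flesch Reading Ease and Flesch-Kincaid grade metrics via the textstat package (both the sample text and the choice of metrics are assumptions, not drawn from the paper).

```python
# Minimal sketch (not from the paper): comparing readability of a patient
# education snippet before and after simplification, assuming Flesch-based
# metrics as implemented in the `textstat` package.
import textstat

original_pem = (
    "Percutaneous coronary intervention is indicated for haemodynamically "
    "significant stenoses refractory to optimal medical therapy."
)
simplified_pem = (
    "A small tube can be used to open a blocked heart artery when "
    "medicines alone do not work well enough."
)

for label, text in [("original", original_pem), ("simplified", simplified_pem)]:
    ease = textstat.flesch_reading_ease(text)    # higher score = easier to read
    grade = textstat.flesch_kincaid_grade(text)  # approximate US school grade level
    print(f"{label}: reading ease {ease:.1f}, grade level {grade:.1f}")
```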