Abi-Rafeh Jana, Toscano-Rivero Diana, Mazer Bruce D
Translational Research in Respiratory Diseases Program, Meakins-Christie Laboratories, McGill University Health Centre Research Institute, Montréal, Québec, Canada; Division of Experimental Medicine, Faculty of Medicine and Health Sciences, McGill University, Montréal, Québec, Canada.
Translational Research in Respiratory Diseases Program, Meakins-Christie Laboratories, McGill University Health Centre Research Institute, Montréal, Québec, Canada; Division of Experimental Medicine, Faculty of Medicine and Health Sciences, McGill University, Montréal, Québec, Canada; Department of Pediatrics, Faculty of Medicine and Health Sciences, McGill University, Montréal, Québec, Canada.
Ann Allergy Asthma Immunol. 2025 Jul;135(1):87-90. doi: 10.1016/j.anai.2025.04.011. Epub 2025 Apr 24.
BACKGROUND: Oral immunotherapy (OIT) has exhibited great potential in the treatment of food allergy. However, there is no global consensus on best practices for OIT. Parents of allergic children often struggle with concerns regarding OIT methodology and safety, and with a lack of accessible educational resources. ChatGPT is a generative artificial intelligence chatbot from OpenAI recognized for its ability to formulate human-like conversations. Although applications of artificial intelligence in medical settings continue to be explored, the effectiveness of ChatGPT as an educational resource for OIT remains unknown. OBJECTIVE: To assess the accuracy of ChatGPT as a self-guided educational resource for parents of children undergoing OIT. METHODS: A total of 14 common questions from parents regarding OIT were entered into ChatGPT version 3.5, and answers were copied verbatim. These responses were categorized as basic, advanced, or medical and evaluated by Allergy-Immunology health care practitioners from North America and the United Kingdom using a 10-point Likert scale. Response readability, understandability, and reproducibility were assessed using the Flesch Reading Ease and Flesch-Kincaid Grade Level scores, the Patient Education Materials Assessment Tool, and natural language processing tools, respectively. RESULTS: The average median rankings by the practitioners per category were 8.6, 8.4, and 7.8 for basic, advanced, and medical questions, respectively. ChatGPT responses exhibited low readability scores, corresponding to a high-grade reading level. Understandability ranged from 73% to 84%, with scores reduced by response complexity. When assessing reproducibility, ChatGPT responses achieved rates between 83% and 93%. CONCLUSION: Our results revealed that ChatGPT provides intelligible and comprehensive responses to patient questions. Health care practitioners polled were generally positive but identified important limitations.
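The readability metrics named in METHODS are formula-based: Flesch Reading Ease and Flesch-Kincaid Grade Level are both computed from average sentence length (words per sentence) and average word length (syllables per word). As a rough illustration of how such scores are derived, the sketch below implements the standard published formulas with a naive vowel-group syllable heuristic; the `count_syllables` helper is an illustrative approximation, not the tool used in the study, and real readability software uses more careful syllabification.

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count contiguous vowel groups, dropping a trailing
    # silent 'e'. Real readability tools use dictionary-based syllabification.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith(("le", "ee")) and n > 1:
        n -= 1
    return max(n, 1)

def flesch_scores(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level) for `text`."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    # Standard Flesch formulas: higher ease = easier text;
    # grade approximates the US school grade needed to read it.
    ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade = 0.39 * wps + 11.8 * spw - 15.59
    return ease, grade
```

For example, a short monosyllabic sentence such as "The cat sat on the mat." scores above 100 on Reading Ease (very easy), whereas long multi-clause sentences with polysyllabic medical vocabulary, like typical ChatGPT responses in this study, drive the ease score down and the grade level up.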