Patel Akash, Ajumobi Adewale
Internal Medicine, Eisenhower Health, Rancho Mirage, USA.
Internal Medicine, Riverside Community Hospital, Riverside, USA.
Cureus. 2025 Jun 21;17(6):e86512. doi: 10.7759/cureus.86512. eCollection 2025 Jun.
The integration of artificial intelligence (AI) in healthcare is a growing area of interest. This study aims to evaluate the reliability of OpenAI's ChatGPT-4.0 in providing pre-colonoscopy patient guidance, a critical aspect of gastrointestinal care where patient misconceptions and non-compliance are common challenges.
The study employed a qualitative design to assess ChatGPT-4.0 against established clinical guidelines from various medical societies. Twenty-five patient-like queries encompassing dietary recommendations, bowel preparation, cardiovascular medications, antibiotic prophylaxis, and diabetes medication management were presented to ChatGPT-4.0. The AI's responses were independently evaluated and classified according to their alignment with the guidelines.
ChatGPT-4.0 demonstrated high accuracy, with responses to all 25 sample queries aligning with established clinical guidelines. It provided precise guidance on dietary restrictions, medication management, and bowel preparation in accordance with guidelines from the European Society of Gastrointestinal Endoscopy (ESGE), the U.S. Multi-Society Task Force on Colorectal Cancer (USMSTF), the American College of Gastroenterology-Canadian Association of Gastroenterology (ACG-CAG), the American College of Cardiology-American Heart Association (ACC-AHA), the American Society for Gastrointestinal Endoscopy (ASGE), and the Australian Diabetes Society (ADS).
The high degree of guideline adherence by ChatGPT-4.0 underscores its viability as a dependable resource for patient education. Despite its promising results, the study acknowledges limitations such as the structured nature of patient queries and the lack of real patient interactions. The findings suggest a potential role for AI in augmenting patient education and standardizing information dissemination in healthcare.