Yilmaz Muluk Selkin, Olcucu Nazli
Physical Medicine and Rehabilitation, Antalya City Hospital, Antalya, TUR.
Physical Medicine and Rehabilitation, Antalya Ataturk State Hospital, Antalya, TUR.
Cureus. 2024 Jul 1;16(7):e63580. doi: 10.7759/cureus.63580. eCollection 2024 Jul.
Low back pain (LBP) is a prevalent healthcare concern that frequently responds to conservative treatment. However, it can also stem from serious conditions signaled by 'red flags' (RF), such as malignancy, cauda equina syndrome, fractures, infections, spondyloarthropathies, and aneurysm rupture, for which physicians should remain vigilant. Given the increasing reliance on online health information, this study assessed the accuracy of ChatGPT-3.5 (OpenAI, San Francisco, CA, USA) and Google Bard (Google, Mountain View, CA, USA) in responding to RF-related LBP questions, as well as their capacity to convey the severity of the condition.
We created 70 questions on RF-related symptoms and diseases following LBP guidelines: 58 involved a single symptom (SS) and 12 involved multiple symptoms (MS) of LBP. The questions were posed to ChatGPT-3.5 and Google Bard, and two authors independently rated each response for accuracy, completeness, and relevance (ACR) on a 5-point rubric.
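For illustration, a minimal Python sketch of how such a two-rater, 5-point scoring layout could be organized and averaged by question group; the question IDs, score values, and data layout below are hypothetical, not taken from the study.

# Hypothetical layout: 70 questions (58 SS, 12 MS), each model response
# rated 1-5 by two raters. All values here are illustrative placeholders.
from statistics import mean

responses = [
    # (question_id, group, model, rater1_score, rater2_score)
    ("Q01", "SS", "ChatGPT-3.5", 4, 4),
    ("Q01", "SS", "Google Bard", 3, 4),
    ("Q59", "MS", "ChatGPT-3.5", 5, 4),
    ("Q59", "MS", "Google Bard", 4, 3),
    # ... remaining questions omitted
]

def subgroup_mean(group, model):
    """Mean of the two raters' averaged scores for one model within one question group."""
    scores = [(r1 + r2) / 2 for _, g, m, r1, r2 in responses if g == group and m == model]
    return mean(scores)

for group in ("SS", "MS"):
    for model in ("ChatGPT-3.5", "Google Bard"):
        print(group, model, round(subgroup_mean(group, model), 2))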
Cohen's kappa values (0.60-0.81) indicated substantial agreement between the two raters. Mean response scores ranged from 3.47 to 3.85 for ChatGPT-3.5 and from 3.36 to 3.76 for Google Bard on the 58 SS questions, and from 4.04 to 4.29 for ChatGPT-3.5 and from 3.50 to 3.71 for Google Bard on the 12 MS questions; these ratings correspond to 'good' to 'excellent'. Most SS responses effectively conveyed the severity of the situation (93.1% for ChatGPT-3.5, 94.8% for Google Bard), as did all MS responses. No statistically significant difference was found between ChatGPT-3.5 and Google Bard scores (p>0.05).
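The abstract reports Cohen's kappa for inter-rater agreement and p>0.05 for the model comparison, but does not name the comparison test in this excerpt; for ordinal rubric scores, a nonparametric test such as Mann-Whitney U is a common choice. A hedged sketch using scikit-learn and SciPy, with made-up score arrays:

# Hypothetical illustration of the reported statistics; all score values
# below are placeholders, not data from the study.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import mannwhitneyu

# Two raters' 1-5 ACR scores for the same responses (made up).
rater1 = [4, 3, 5, 4, 2, 4, 3, 5, 4, 3]
rater2 = [4, 3, 4, 4, 2, 5, 3, 5, 4, 4]

# Unweighted kappa; for ordinal scales a weighted variant (e.g.
# weights="quadratic") is also common -- the abstract does not say which
# was used. Values of 0.61-0.80 are 'substantial' per Landis & Koch.
kappa = cohen_kappa_score(rater1, rater2)
print(f"Cohen's kappa: {kappa:.2f}")

# Model comparison: the abstract reports p>0.05 but does not name the test;
# Mann-Whitney U is a common nonparametric choice for independent ordinal
# samples (a paired test such as Wilcoxon signed-rank would suit
# question-matched scores).
chatgpt_scores = [4, 4, 5, 3, 4, 4, 3, 5, 4, 4]
bard_scores = [3, 4, 4, 3, 4, 3, 3, 5, 4, 3]
stat, p = mannwhitneyu(chatgpt_scores, bard_scores, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")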
In an era of widespread online health information seeking, artificial intelligence (AI) systems can play a valuable role in delivering accurate medical information. If they continue to improve, these technologies hold promise for the field of health information.