Bitterman Jason, D'Angelo Alexander, Holachek Alexandra, Eubanks James E
Division of Physical Medicine and Rehabilitation, Hartford Healthcare Medical Group, Hartford, Connecticut, USA.
Nebraska Medicine Department of Physical Medicine and Rehabilitation, University of Nebraska Medical Center, Omaha, Nebraska, USA.
PM R. 2025 May 2. doi: 10.1002/pmrj.13386.
There have been significant advances in machine learning and artificial intelligence technology over the past few years, leading to the release of large language models (LLMs) such as ChatGPT. There are many potential applications for LLMs in health care, but it is critical to first determine how accurate LLMs are before putting them into practice. No studies have evaluated the accuracy and precision of LLMs in responding to questions related to the field of physical medicine and rehabilitation (PM&R).
To determine the accuracy and precision of two OpenAI LLMs (GPT-3.5, released in November 2022, and GPT-4o, released in May 2024) in answering questions related to PM&R knowledge.
Cross-sectional study. Both LLMs were tested on the same 744 PM&R knowledge questions covering all aspects of the field (general rehabilitation, stroke, traumatic brain injury, spinal cord injury, musculoskeletal medicine, pain medicine, electrodiagnostic medicine, pediatric rehabilitation, prosthetics and orthotics, rheumatology, and pharmacology). Each LLM was tested three times on the same question set to assess precision.
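As a rough illustration of how an evaluation like the one described above could be scripted, the following is a minimal sketch, not the authors' actual methodology or code. It assumes the OpenAI Python client, a hypothetical list of multiple-choice question records, and assumed model identifiers; the helper names and question format are illustrative only.

# Hypothetical sketch of the evaluation loop described above; not the authors' code.
# Assumes the OpenAI Python client (openai >= 1.0) and a hypothetical list of question
# dicts such as {"stem": "...", "options": {"A": "...", "B": "..."}, "correct": "B"}.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(model: str, question: dict) -> str:
    """Send one multiple-choice question and return the model's single-letter answer."""
    options = "\n".join(f"{k}. {v}" for k, v in question["options"].items())
    prompt = (
        f"{question['stem']}\n{options}\n"
        "Respond with only the letter of the best answer."
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()[:1].upper()


def run_once(model: str, questions: list[dict]) -> float:
    """Return the percentage of questions answered correctly in one pass."""
    correct = sum(ask(model, q) == q["correct"] for q in questions)
    return 100 * correct / len(questions)


# Three passes per model to gauge precision (run-to-run consistency), mirroring the study design.
# `questions` would be the 744-item set; loading it is out of scope for this sketch.
# for model in ("gpt-3.5-turbo", "gpt-4o"):
#     scores = [run_once(model, questions) for _ in range(3)]
#     print(model, [f"{s:.1f}%" for s in scores])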
Percentage of correctly answered questions.
For three runs of the 744-question set, GPT-3.5 answered 56.3%, 56.5%, and 56.9% of the questions correctly. For three runs of the same question set, GPT-4o answered 83.6%, 84.0%, and 84.1% of the questions correctly. GPT-4o outperformed GPT-3.5 in all subcategories of PM&R questions.
LLM technology is rapidly advancing, with the more recent GPT-4o model performing substantially better on PM&R knowledge questions than GPT-3.5. LLMs have potential to augment clinical practice, medical training, and patient education. However, the technology has limitations, and physicians should remain cautious about using it in practice at this time.