Clinical Biochemistry, Gloucestershire Hospitals NHS Foundation Trust, Cheltenham, UK.
Ann Clin Biochem. 2024 Mar;61(2):143-149. doi: 10.1177/00045632231203473. Epub 2023 Sep 20.
Public awareness of artificial intelligence (AI) is increasing, and this novel technology is being used for a range of everyday tasks as well as more specialist clinical applications. Against a background of lengthening waits for GP appointments and growing patient access to laboratory test results through the NHS App, this study aimed to assess the accuracy and safety of two AI tools, ChatGPT and Google Bard, in interpreting thyroid function test results when queries were posed as if from laboratory scientists or from patients.
Fifteen fictional cases were presented to a team of clinicians and clinical scientists to produce a consensus opinion. The cases were then presented to ChatGPT and Google Bard as though posed by healthcare providers and by patients. Responses were categorized as correct, partially correct or incorrect against the consensus opinion, and the advice was assessed for patient safety.
Of the 15 cases presented, ChatGPT and Google Bard correctly interpreted only 33.3% and 20.0% of cases, respectively. When queries were posed as a patient, 66.7% of ChatGPT responses were safe compared to 60.0% of Google Bard responses. Both AI tools were able to identify primary hypothyroidism and hyperthyroidism but failed to identify subclinical presentations, non-thyroidal illness or secondary hypothyroidism.
This study demonstrates that these AI tools do not currently have the capacity to generate consistently correct interpretations and safe advice for patients, and they should not be used as an alternative to consultation with a qualified medical professional. In its current form, available AI cannot replace human clinical knowledge in this scenario.