Dergaa Ismail, Fekih-Romdhane Feten, Hallit Souheil, Loch Alexandre Andrade, Glenn Jordan M, Fessi Mohamed Saifeddin, Ben Aissa Mohamed, Souissi Nizar, Guelmami Noomen, Swed Sarya, El Omri Abdelfatteh, Bragazzi Nicola Luigi, Ben Saad Helmi
Primary Health Care Corporation (PHCC), Doha, Qatar.
Research Unit Physical Activity, Sport, and Health, UR18JS01, National Observatory of Sport, Tunis, Tunisia.
Front Psychiatry. 2024 Jan 4;14:1277756. doi: 10.3389/fpsyt.2023.1277756. eCollection 2023.
Psychiatry is a specialized field of medicine that focuses on the diagnosis, treatment, and prevention of mental health disorders. With advancements in technology and the rise of artificial intelligence (AI), there has been a growing interest in exploring the potential of AI language models systems, such as Chat Generative Pre-training Transformer (ChatGPT), to assist in the field of psychiatry.
Our study aimed to evaluate the effectiveness, reliability, and safety of ChatGPT in assisting patients with mental health problems, and to assess its potential as a collaborative tool for mental health professionals through simulated interactions with three distinct imaginary patients.
Three imaginary patient scenarios (cases A, B, and C) were created, representing different mental health problems. All three patients presented with, and sought to eliminate, the same chief complaint (i.e., difficulty falling asleep and waking up frequently during the night over the last 2 weeks). ChatGPT was engaged as a virtual psychiatric assistant to provide responses and treatment recommendations.
In case A, the recommendations were relatively appropriate (albeit non-specific) and could potentially benefit both users and clinicians. However, as the complexity of the clinical cases increased (cases B and C), the information and recommendations generated by ChatGPT became inappropriate, even dangerous, and the limitations of the program became more glaring. The main strengths of ChatGPT lie in its ability to provide quick responses to user queries and to simulate empathy. One notable limitation is ChatGPT's inability to interact with users to collect further information relevant to the diagnosis and management of a patient's clinical condition. Another serious limitation is ChatGPT's inability to use critical thinking and clinical judgment to guide patient management.
As of July 2023, ChatGPT failed to provide even simple medical advice in certain clinical scenarios. This indicates that the quality of ChatGPT-generated content is still far from serving as a reliable guide for users and professionals seeking accurate mental health information. It therefore remains premature to draw conclusions about the usefulness and safety of ChatGPT in mental health practice.