Obradovich Nick, Khalsa Sahib S, Khan Waqas, Suh Jina, Perlis Roy H, Ajilore Olusola, Paulus Martin P
Laureate Institute for Brain Research, Tulsa, Oklahoma, USA.
Oxley College of Health and Natural Sciences, University of Tulsa, Tulsa, Oklahoma, USA.
NPP Digit Psychiatry Neurosci. 2024;2(1). doi: 10.1038/s44277-024-00010-z. Epub 2024 May 24.
The integration of Large Language Models (LLMs) into mental healthcare and research heralds a potentially transformative shift, one offering enhanced access to care, efficient data collection, and innovative therapeutic tools. This paper reviews the development, function, and burgeoning use of LLMs in psychiatry, highlighting their potential to enhance mental healthcare through improved diagnostic accuracy, personalized care, and streamlined administrative processes. We also acknowledge that LLMs introduce challenges related to computational demands, potential for misinterpretation, and ethical concerns, necessitating the development of pragmatic frameworks to ensure their safe deployment. We explore both the promise of LLMs in enriching psychiatric care and research, through examples such as predictive analytics and therapy chatbots, and the risks, including labor substitution, privacy concerns, and the need for responsible AI practices. We conclude by advocating for processes to develop responsible guardrails, including red teaming, multi-stakeholder-oriented safety, and ethical guidelines and frameworks, to mitigate risks and harness the full potential of LLMs for advancing mental health.