Malgaroli Matteo, Schultebraucks Katharina, Myrick Keris Jan, Andrade Loch Alexandre, Ospina-Pinillos Laura, Choudhury Tanzeem, Kotov Roman, De Choudhury Munmun, Torous John
Department of Psychiatry, New York University School of Medicine, New York, NY, USA.
Partnerships and Innovation, Inseparable, Los Angeles, CA, USA.
Lancet Digit Health. 2025 Apr;7(4):e282-e285. doi: 10.1016/S2589-7500(24)00255-3. Epub 2025 Jan 7.
Large language models (LLMs) offer promising applications in mental health care to address gaps in treatment and research. By leveraging clinical notes and transcripts as data, LLMs could improve diagnostics, monitoring, prevention, and treatment of mental health conditions. However, several challenges persist, including technical costs, literacy gaps, risk of biases, and inequalities in data representation. In this Viewpoint, we propose a sociocultural-technical approach to address these challenges. We highlight five key areas for development: (1) building a global clinical repository to support LLM training and testing, (2) designing ethical usage settings, (3) refining diagnostic categories, (4) integrating cultural considerations during development and deployment, and (5) promoting digital inclusivity to ensure equitable access. We emphasise the need for developing representative datasets, interpretable clinical decision support systems, and new roles such as digital navigators. Only through collaborative efforts across all stakeholders, unified by a sociocultural-technical framework, can we clinically deploy LLMs while ensuring equitable access and mitigating risks.