Department of Psychology and Institute for Mental Health Research, University of Texas at Austin.
Colliga Apps Corporation, Austin, Texas.
Perspect Psychol Sci. 2023 Sep;18(5):1062-1096. doi: 10.1177/17456916221134490. Epub 2022 Dec 9.
Advances in computer science and data-analytic methods are driving a new era in mental health research and application. Artificial intelligence (AI) technologies hold the potential to enhance the assessment, diagnosis, and treatment of people experiencing mental health problems and to increase the reach and impact of mental health care. However, AI applications will not mitigate mental health disparities if they are built from historical data that reflect underlying social biases and inequities. AI models biased against sensitive classes could reinforce and even perpetuate existing inequities if these models create legacies that differentially impact who is diagnosed and treated, and how effectively. The current article reviews the health-equity implications of applying AI to mental health problems, outlines state-of-the-art methods for assessing and mitigating algorithmic bias, and presents a call to action to guide the development of AI in psychological science.
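As a minimal illustration of what "assessing algorithmic bias" can mean in practice (this is a generic sketch, not a method described in the article), one common check is the demographic parity difference: the gap in a model's positive-prediction rate across groups defined by a sensitive attribute. The function and variable names below are hypothetical.

```python
# Hypothetical sketch: comparing a model's positive-prediction rates across
# sensitive groups (demographic parity difference). Not the paper's method.

from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Return (max gap in positive-prediction rate across groups, per-group rates).

    predictions: iterable of 0/1 model outputs (e.g., flagged for follow-up).
    groups: iterable of group labels (e.g., a self-reported demographic attribute).
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred)
        counts[group][1] += 1
    rates = {g: pos / total for g, (pos, total) in counts.items()}
    return max(rates.values()) - min(rates.values()), rates

# Toy example: a screening model that flags group "B" more often than group "A".
preds  = [1, 0, 0, 1, 1, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, per_group_rates = demographic_parity_difference(preds, groups)
print(per_group_rates)  # {'A': 0.5, 'B': 0.75}
print(gap)              # 0.25 -> a nonzero gap signals disparate flagging rates
```

A gap of zero does not guarantee fairness on other criteria (e.g., equalized odds or calibration within groups); in applied settings, multiple complementary metrics are typically examined before any mitigation step.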