School of Nursing, Columbia University, New York, NY, United States.
Department of Biomedical Informatics, Columbia University, New York, NY, United States.
JMIR Ment Health. 2024 Sep 18;11:e58462. doi: 10.2196/58462.
The application of artificial intelligence (AI) to health and health care is rapidly increasing. Several studies have assessed the attitudes of health professionals, but far fewer studies have explored the perspectives of patients or the general public. Studies investigating patient perspectives have focused on somatic issues, including those related to radiology, perinatal health, and general applications. Patient feedback has been elicited in the development of specific mental health care solutions, but broader perspectives toward AI for mental health care have been underexplored.
This study aims to understand public perceptions regarding potential benefits of AI, concerns about AI, comfort with AI accomplishing various tasks, and values related to AI, all pertaining to mental health care.
We conducted a 1-time cross-sectional survey of a nationally representative sample of 500 US-based adults. Participants provided structured responses on their perceived benefits, concerns, comfort, and values regarding AI for mental health care. They could also add free-text responses to elaborate on their concerns and values.
A plurality of participants (245/497, 49.3%) believed AI may be beneficial for mental health care, but this perspective differed based on sociodemographic variables (all P<.05). Specifically, Black participants (odds ratio [OR] 1.76, 95% CI 1.03-3.05) and those with lower health literacy (OR 2.16, 95% CI 1.29-3.78) perceived AI to be more beneficial, and women (OR 0.68, 95% CI 0.46-0.99) perceived AI to be less beneficial. Participants endorsed concerns about accuracy, possible unintended consequences such as misdiagnosis, the confidentiality of their information, and the loss of connection with their health professional when AI is used for mental health care. A majority of participants (402/500, 80.4%) valued being able to understand the individual factors driving their risk, confidentiality, and autonomy as it pertained to the use of AI for their mental health. When asked who was responsible for the misdiagnosis of mental health conditions using AI, 81.6% (408/500) of participants held the health professional responsible. Qualitative results revealed similar concerns related to the accuracy of AI and how its use may impact the confidentiality of patients' information.
Future work involving the use of AI for mental health care should investigate strategies for communicating AI's accuracy, the factors that drive patients' mental health risks, and how data are kept confidential, so that patients can determine with their health professionals when AI may be beneficial. It will also be important in a mental health care context to ensure the patient-health professional relationship is preserved when AI is used.