Yoo Dong Whi, Ernala Sindhu Kiranmai, Saket Bahador, Weir Domino, Arenare Elizabeth, Ali Asra F, Van Meter Anna R, Birnbaum Michael L, Abowd Gregory D, De Choudhury Munmun
School of Interactive Computing, Georgia Institute of Technology, Atlanta, GA, United States.
The Zucker Hillside Hospital, Northwell Health, Glen Oaks, NY, United States.
JMIR Ment Health. 2021 Nov 16;8(11):e25455. doi: 10.2196/25455.
Previous studies have suggested that social media data, along with machine learning algorithms, can be used to generate computational mental health insights. These computational insights have the potential to support clinician-patient communication during psychotherapy consultations. However, how clinicians perceive and envision using computational insights during consultations has been underexplored.
The aim of this study is to understand clinician perspectives regarding computational mental health insights from patients' social media activities. We focus on the opportunities and challenges of using these insights during psychotherapy consultations.
We developed a prototype that can analyze consented patients' Facebook data and visually represent these computational insights. We incorporated the insights into existing clinician-facing assessment tools, the Hamilton Depression Rating Scale and the Global Functioning: Social scale. The design intent is that a clinician verbally interviews a patient (eg, How was your mood in the past week?) while reviewing relevant insights from the patient's social media activities (eg, the number of depression-indicative posts). Using the prototype, we conducted interviews (n=15) and 3 focus groups (n=13) with mental health clinicians: psychiatrists, clinical psychologists, and licensed clinical social workers. The transcribed qualitative data were analyzed using thematic analysis.
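To illustrate the kind of insight the prototype surfaces, the sketch below shows how one metric, the number of depression-indicative posts in a one-week reporting window, might be derived from consented Facebook posts. This is a minimal, hypothetical example: the Post structure, the classify function, and the 0.5 threshold are assumptions for illustration and are not the study's actual implementation.

```python
# Minimal sketch (not the authors' implementation): counting
# "depression-indicative posts in the past week" from consented posts,
# assuming a hypothetical text classifier that returns a probability.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Callable, List


@dataclass
class Post:
    created_at: datetime
    text: str


def count_depression_indicative(
    posts: List[Post],
    classify: Callable[[str], float],  # hypothetical model: text -> probability
    window_days: int = 7,
    threshold: float = 0.5,
) -> int:
    """Count posts in the reporting window scored above the threshold."""
    cutoff = datetime.now() - timedelta(days=window_days)
    recent = [p for p in posts if p.created_at >= cutoff]
    return sum(1 for p in recent if classify(p.text) >= threshold)


if __name__ == "__main__":
    # Trivial keyword-based stand-in for a trained classifier.
    dummy_classifier = lambda text: 0.9 if "hopeless" in text.lower() else 0.1
    posts = [
        Post(datetime.now() - timedelta(days=2), "Feeling hopeless again."),
        Post(datetime.now() - timedelta(days=1), "Great dinner with friends!"),
    ]
    print(count_depression_indicative(posts, dummy_classifier))  # -> 1
```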
Clinicians reported that the prototype could support clinician-patient collaboration in agenda-setting, communicating symptoms, and navigating patients' verbal reports. They suggested potential use scenarios, such as reviewing the prototype before consultations and using it when patients missed their consultations. They also speculated about potential negative consequences: patients may feel that they are being monitored, which may have adverse effects, and the prototype may add to clinicians' already heavy workload. Finally, our participants expressed concerns about the prototype: they were unsure whether patients' social media accounts reflected their actual behaviors; they wanted to learn how and when the machine learning algorithm could fail to meet their expectations of trust; and they were worried about situations in which they could not respond appropriately to the insights, especially emergencies arising outside of clinical settings.
Our findings support the touted potential of computational mental health insights from patients' social media account data, especially in the context of psychotherapy consultations. However, sociotechnical issues, such as transparent algorithmic information and institutional support, should be addressed in future endeavors to design implementable and sustainable technology.