Celedonia Karen L, Corrales Compagnucci Marcelo, Minssen Timo, Lowery Wilson Michael
Injury Epidemiology and Prevention, Turku Brain Injury Centre, University of Turku and Turku University Hospital, Turku, Finland.
Center for Advanced Studies in Biomedical Innovation Law (CeBIL), University of Copenhagen, Copenhagen, Denmark.
J Law Biosci. 2021 Jul 15;8(1):lsab021. doi: 10.1093/jlb/lsab021. eCollection 2021 Jan-Jun.
Suicide remains a problem of public health importance worldwide. Cognizant of the emerging links between social media use and suicide, social media platforms, such as Facebook, have developed automated algorithms to detect suicidal behavior. While seemingly a well-intentioned adjunct to public health, this approach raises several ethical and legal concerns. For example, the role of consent to use individual data in this manner has only been given cursory attention. Social media users may not even be aware that their social media posts, movements, and Internet searches are being analyzed by non-health professionals, who have the decision-making ability to involve law enforcement upon suspicion of potential self-harm. Failure to obtain such consent presents privacy risks and can lead to exposure and wider potential harms. We argue that Facebook's practices in this area should be subject to well-established protocols. These should resemble those used in human subjects research, a field that upholds standardized, agreed-upon, and well-recognized ethical practices based on generations of precedent. Prior to collecting sensitive data from social media users, an ethical review process should be carried out. The fiduciary framework seems to resonate with the emergent roles and obligations of social media platforms to accept more responsibility for the content being shared.