Tweet Classification to Assist Human Moderation for Suicide Prevention.

Author Information

Sawhney Ramit, Joshi Harshit, Nobles Alicia, Shah Rajiv Ratn

Affiliations

Netaji Subhas Institute of Technology.

University of Delhi.

Publication Information

Proc Int AAAI Conf Weblogs Soc Media. 2021 Jun 4;15:609-620. Epub 2021 May 22.

Abstract

Social media platforms are already engaged in leveraging existing online socio-technical systems to employ just-in-time interventions for suicide prevention to the public. These efforts primarily rely on self-reports of potential self-harm content that is reviewed by moderators. Most recently, platforms have employed automated models to identify self-harm content, but acknowledge that these automated models still struggle to understand the nuance of human language (e.g., sarcasm). By explicitly focusing on Twitter posts that could easily be misidentified by a model as expressing suicidal intent (i.e., they contain similar phrases such as "wanting to die"), our work examines the temporal differences in historical expressions of general and emotional language prior to a clear expression of suicidal intent. Additionally, we analyze time-aware neural models that build on these language variants and factor in the historical, emotional spectrum of a user's tweeting activity. The strongest model achieves high (statistically significant) performance (macro F1=0.804, recall=0.813) in identifying social media content indicative of suicidal intent. Using three use cases of tweets with phrases common to suicidal intent, we qualitatively analyze and interpret how such models decided if suicidal intent was present and discuss how these analyses may be used to alleviate the burden on human moderators within the known constraints of how moderation is performed (e.g., no access to the user's timeline). Finally, we discuss the ethical implications of such data-driven models and inferences about suicidal intent from social media.
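The abstract describes the approach only at a high level: "time-aware neural models" that weight a user's historical, emotional tweeting activity before judging the tweet under review. As a rough illustration of that idea, the sketch below applies a learnable exponential time-decay gate to a sequence of historical tweet features and classifies the final tweet. PyTorch, precomputed per-tweet emotion features, and the decay gate are all illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch of a time-aware classifier over a user's tweet history.
# Assumptions (not taken from the abstract): PyTorch, precomputed per-tweet
# emotion features, and an exponential recency gate. This only illustrates
# the general idea of down-weighting older emotional signals by elapsed time.
import torch
import torch.nn as nn

class TimeAwareIntentClassifier(nn.Module):
    def __init__(self, feat_dim=64, hidden_dim=128, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.decay = nn.Parameter(torch.tensor(0.1))  # learnable decay rate
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, feats, delta_t):
        # feats:   (batch, seq_len, feat_dim) historical tweet features,
        #          ordered oldest to newest, ending at the tweet under review
        # delta_t: (batch, seq_len) hours elapsed before the assessed tweet
        weights = torch.exp(-torch.relu(self.decay) * delta_t)  # recency gate
        gated = feats * weights.unsqueeze(-1)
        _, (h_n, _) = self.lstm(gated)
        return self.head(h_n[-1])  # logits: suicidal intent vs. not

# Toy usage: 4 users, 10 historical tweets each, 64-dim emotion features.
model = TimeAwareIntentClassifier()
feats = torch.randn(4, 10, 64)
delta_t = torch.rand(4, 10) * 72.0  # up to 72 hours before the final tweet
logits = model(feats, delta_t)
print(logits.shape)  # torch.Size([4, 2])

# The reported macro F1 and recall could be computed with
# sklearn.metrics.f1_score / recall_score using average="macro";
# the tensors above are toy data, not the paper's evaluation.
```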


Article figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2021/8843106/0418ec87ff08/nihms-1774843-f0002.jpg


