Fan Lizhou, Li Lingyao, Hemphill Libby
School of Information, University of Michigan, Ann Arbor, MI, United States.
J Med Internet Res. 2024 Dec 12;26:e52997. doi: 10.2196/52997.
Toxicity on social media, encompassing behaviors such as harassment, bullying, hate speech, and the dissemination of misinformation, has become a pressing social concern in the digital age. Its prevalence intensifies during periods of social crises and unrest, eroding a sense of safety and community. Such toxic environments can adversely impact the mental well-being of those exposed and further deepen societal divisions and polarization. The 2022 mpox outbreak, initially called "monkeypox" but later renamed to reduce stigma and address societal concerns, provides a relevant context for this issue.
In this study, we conducted a comprehensive analysis of the toxic online discourse surrounding the 2022 mpox outbreak. We aimed to dissect its origins, characterize its nature and content, trace its dissemination patterns, and assess its broader societal implications, with the goal of providing insights that can inform strategies to mitigate such toxicity in future crises.
We collected >1.6 million unique tweets and analyzed them along 5 dimensions: context, extent, content, speaker, and intent. Using topic modeling based on bidirectional encoder representations from transformers (BERT) and social network community clustering, we delineated the toxic dynamics on Twitter.
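A minimal sketch of this kind of pipeline, assuming a BERTopic-style implementation with a sentence-transformers encoder (the abstract does not specify the exact model configuration or preprocessing; the file name and column name below are hypothetical placeholders):

```python
# Sketch: BERT-based topic modeling over a corpus of mpox-related tweets.
# Assumes the BERTopic library; "mpox_tweets.csv" and its "text" column
# are hypothetical placeholders, not artifacts from the study.
import pandas as pd
from bertopic import BERTopic

tweets = pd.read_csv("mpox_tweets.csv")          # one row per unique tweet
docs = tweets["text"].astype(str).tolist()

# Embed tweets with a BERT-family encoder, cluster them, and extract topics.
topic_model = BERTopic(embedding_model="all-MiniLM-L6-v2", min_topic_size=50)
topics, probs = topic_model.fit_transform(docs)

# Inspect the discovered topics and their top keywords.
print(topic_model.get_topic_info().head(10))
```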
By categorizing topics, we identified 5 high-level categories in the toxic online discourse on Twitter: disease (20,281/43,521, 46.6%), health policy and health care (8,400/43,521, 19.3%), homophobia (10,402/43,521, 23.9%), politics (2,611/43,521, 6.0%), and racism (1,784/43,521, 4.1%). Across these categories, users expressed negative or controversial views on the mpox outbreak, highlighting escalating political tensions and the weaponization of stigma during this infodemic. Through the toxicity diffusion networks of mentions (17,437 vertices with 3,628 clusters), retweets (59,749 vertices with 3,015 clusters), and the top users with the highest in-degree centrality, we found that retweets of toxic content were widespread, whereas influential users rarely engaged with or countered this toxicity through retweets.
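The diffusion statistics above can be reproduced in spirit with a standard graph toolkit. The sketch below uses networkx and a hypothetical edge list (retweeter to original author pairs extracted from toxic tweets) to count clusters and rank users by in-degree centrality; it is an illustration under those assumptions, not the study's exact procedure.

```python
# Sketch: build a retweet (or mention) diffusion network from toxic tweets
# and summarize it by cluster count and in-degree centrality.
# The edge list below is a hypothetical stand-in for the study's data.
import networkx as nx

retweet_edges = [
    ("user_a", "user_b"),   # user_a retweeted toxic content from user_b
    ("user_c", "user_b"),
    ("user_d", "user_e"),
]

G = nx.DiGraph()
G.add_edges_from(retweet_edges)

# Treat clusters as weakly connected components of the directed network.
clusters = list(nx.weakly_connected_components(G))
print(f"{G.number_of_nodes()} vertices with {len(clusters)} clusters")

# Users most frequently retweeted (highest in-degree centrality).
centrality = nx.in_degree_centrality(G)
top_users = sorted(centrality, key=centrality.get, reverse=True)[:10]
print("Top users by in-degree centrality:", top_users)
```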
Our study introduces a comprehensive workflow that combines topical and network analyses to decode emerging social issues during crises. By tracking topical dynamics, we can monitor the changing popularity of toxic content on the internet, providing a better understanding of societal challenges. Network dynamics highlight key social media influencers and their intentions, suggesting that engaging with these central figures in toxic discourse can improve crisis communication and guide policy making.
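One possible way to track such topical dynamics is BERTopic's topics-over-time view. The snippet below is a sketch that continues the earlier topic-modeling example and assumes `topic_model`, `docs`, and a per-tweet timestamp column are available; the column name is hypothetical.

```python
# Sketch: track how the prevalence of each toxic topic changes over time.
# Assumes `topic_model`, `docs`, and `tweets` from the earlier sketch;
# "created_at" is a hypothetical timestamp column.
timestamps = tweets["created_at"].tolist()

topics_over_time = topic_model.topics_over_time(docs, timestamps, nr_bins=20)

# Frequency of each topic within each time bin; peaks flag surges in toxicity.
print(topics_over_time.sort_values(["Timestamp", "Frequency"],
                                   ascending=[True, False]).head())
```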