Haime Zoë, Biddle Lucy
Population Health Sciences, University of Bristol, Bristol, United Kingdom.
NIHR Bristol Biomedical Research Centre, Bristol, United Kingdom.
JMIR Hum Factors. 2025 May 29;12:e69817. doi: 10.2196/69817.
Social networking site (SNS) users may experience mental health difficulties themselves or engage with mental health-related content on these platforms. While SNSs use moderation systems and user tools to limit harmful content availability, concerns persist regarding the implementation and effectiveness of these methods.
This study aimed to use an ethnographic walkthrough method to critically evaluate 4 SNSs: Instagram, TikTok, Tumblr, and Tellmi.
Walkthrough methods were used to identify and analyze the mental health content moderation practices and the safety and well-being resources of SNS platforms. We completed systematic checklists for each platform and then used thematic analysis to interpret the data.
Findings highlighted both successes and challenges in balancing user safety and content moderation across platforms. While varied mental health resources were available on platforms, several issues emerged, including redundancy of information, broken links, and a lack of non-US-centric resources. In addition, despite the presence of several self-moderation tool options, there was insufficient evidence of user education and testing around these features, potentially limiting their effectiveness. Platforms also faced difficulties addressing harmful mental health content due to unclear language around what was allowed or disallowed. This was especially evident in the management of mental health-related terminology: the emergence of "algospeak," in which users adopt alternative codewords or phrases to avoid having content removed or banned by moderation systems, highlighted how easily users can bypass platform censorship. Furthermore, platforms did not detail support for reporters or reportees of mental health-related content, leaving users vulnerable.
Our study produced preliminary recommendations for platforms regarding potential mental health content moderation and well-being procedures and tools. We also emphasize the need for more inclusive user-centered design, feedback, and research to improve SNS safety and moderation features.