

The use of trigger warnings on social media: a text analysis study of X.

Author Information

Abigail Paradise Vit, Rami Puzis

Affiliations

Department of Information Systems, Max Stern Yezreel Valley College, Israel.

Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Be'er Sheva, Israel.

Publication Information

PLoS One. 2025 Apr 30;20(4):e0322549. doi: 10.1371/journal.pone.0322549. eCollection 2025.

Abstract

Trigger warnings are placed at the beginning of potentially distressing content to provide individuals with the opportunity to avoid the content before exposure. Social media platforms use artificial intelligence to add automatic trigger warnings to certain images and videos, but such warnings are less commonly applied to textual content. This leaves the responsibility of adding trigger warnings to the authors, and a failure to do so may expose vulnerable users to sensitive or upsetting content. Because the topic has received limited research attention, little is known about what content social media users do or do not consider triggering. To address this gap, we examine the use of trigger warnings in tweets on X, previously known as Twitter. We used a large language model (LLM) for zero-shot learning to identify the types of trigger warnings (e.g., violence, abuse) used in tweets and their prevalence. Additionally, we employed sentiment and emotion analysis to examine each trigger warning category, aiming to identify prevalent emotions and overall sentiment. Two datasets were collected: 48,168 tweets with explicit trigger warnings and 4,980,466 tweets with potentially triggering content. The analysis of the smaller dataset indicates that users have applied trigger warnings more frequently over the years and are applying them to a broader range of content categories than they did in the past. These findings may reflect users' growing interest in creating a safe space and a supportive online community that is aware of diverse sensitivities among users. Despite these findings, our analysis of the larger dataset confirms a lack of trigger warnings in most potentially triggering content.
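
The abstract describes two computational steps: zero-shot labeling of tweets with trigger-warning categories using an LLM, and sentiment/emotion analysis per category. The abstract does not name the specific models or the category taxonomy, so the sketch below is only illustrative: the Hugging Face pipelines, the model names (facebook/bart-large-mnli, j-hartmann/emotion-english-distilroberta-base), and the category list are assumptions, not the authors' actual setup.

from transformers import pipeline

# Hypothetical trigger-warning categories; the paper's own taxonomy may differ.
CATEGORIES = ["violence", "abuse", "self-harm", "death", "sexual content", "substance use"]

# Zero-shot classifier: scores a tweet against the candidate categories
# without any task-specific fine-tuning (illustrative model choice).
zero_shot = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# Emotion classifier: predicts a basic emotion label (illustrative model choice).
emotion = pipeline("text-classification",
                   model="j-hartmann/emotion-english-distilroberta-base")

def analyze_tweet(text: str) -> dict:
    """Label a tweet with its most likely trigger-warning category and dominant emotion."""
    tw = zero_shot(text, candidate_labels=CATEGORIES, multi_label=True)
    emo = emotion(text)[0]  # single input -> [{'label': ..., 'score': ...}]
    return {
        "top_category": tw["labels"][0],
        "category_score": round(tw["scores"][0], 3),
        "dominant_emotion": emo["label"],
    }

print(analyze_tweet("tw // graphic violence. The footage from last night is hard to watch."))

At study scale, such pipelines would be run in batches over the collected tweets and the per-category results aggregated before summarizing prevalent emotions and overall sentiment.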


Fig 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7ca5/12043183/9f21c8dd01b5/pone.0322549.g001.jpg
