

How to detect propaganda from social media? Exploitation of semantic and fine-tuned language models.

Authors

Malik Muhammad Shahid Iqbal, Imran Tahir, Mona Mamdouh Jamjoom

Affiliations

Department of Computer Science, School of Data Analysis and Artificial Intelligence, Higher School of Economics, Moscow, Russia.

Department of Computer Science, Capital University of Science and Technology, Islamabad, Pakistan.

Publication

PeerJ Comput Sci. 2023 Feb 20;9:e1248. doi: 10.7717/peerj-cs.1248. eCollection 2023.

DOI: 10.7717/peerj-cs.1248
PMID: 37346552
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10280574/
Abstract

Online propaganda is a mechanism for influencing the opinions of social media users, and it is a growing menace to public health, democratic institutions, and civil society. The present study proposes a propaganda detection framework as a binary classification model built on a news repository. Several feature models are explored to develop a robust classifier: part-of-speech, LIWC, word uni-grams, Embeddings from Language Models (ELMo), FastText, word2vec, latent semantic analysis (LSA), and char tri-grams. Fine-tuning of BERT is also performed. Three oversampling methods are investigated to handle the class imbalance of the Qprop dataset; SMOTE with Edited Nearest Neighbors (SMOTE-ENN) gave the best results. The fine-tuning experiments showed that BERT with a sequence length of 320 is the best BERT variant. As a standalone model, char tri-grams showed superior performance compared to the other features. Robust performance is observed for the char tri-gram + BERT and char tri-gram + word2vec combinations, which outperform two state-of-the-art baselines. In contrast to prior approaches, adding feature selection further improves performance, achieving more than 97.60% recall, F1-score, and AUC on the dev and test parts of the dataset. The findings of the present study can be used to organize news articles for public news websites.
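The char tri-gram model, which the abstract reports as the strongest standalone feature, counts overlapping three-character windows over the text. A minimal sketch in plain Python (the paper's actual pipeline, tokenization, and any weighting scheme are not given here, so the function name and details are illustrative):

```python
from collections import Counter

def char_trigrams(text: str) -> Counter:
    """Count overlapping character tri-grams in a lowercased text."""
    text = text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

# "propaganda" yields 8 overlapping tri-grams: pro, rop, opa, pag, ...
features = char_trigrams("propaganda")
```

In a real classifier these counts would typically be assembled into a sparse document-term matrix (e.g. with a vectorizer restricted to character analyzers) before feeding a linear model or combining them with BERT or word2vec features, as the study does.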


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/391e/10280574/08432e6c6720/peerj-cs-09-1248-g001.jpg

Similar articles

1. How to detect propaganda from social media? Exploitation of semantic and fine-tuned language models.
   PeerJ Comput Sci. 2023 Feb 20;9:e1248. doi: 10.7717/peerj-cs.1248. eCollection 2023.
2. Pashto offensive language detection: a benchmark dataset and monolingual Pashto BERT.
   PeerJ Comput Sci. 2023 Oct 18;9:e1617. doi: 10.7717/peerj-cs.1617. eCollection 2023.
3. Identification of offensive language in Urdu using semantic and embedding models.
   PeerJ Comput Sci. 2022 Dec 12;8:e1169. doi: 10.7717/peerj-cs.1169. eCollection 2022.
4. Rumour identification on Twitter as a function of novel textual and language-context features.
   Multimed Tools Appl. 2023;82(5):7017-7038. doi: 10.1007/s11042-022-13595-4. Epub 2022 Aug 12.
5. Extracting comprehensive clinical information for breast cancer using deep learning methods.
   Int J Med Inform. 2019 Dec;132:103985. doi: 10.1016/j.ijmedinf.2019.103985. Epub 2019 Oct 2.
6. Fine-Tuning Large Language Models to Enhance Programmatic Assessment in Graduate Medical Education.
   J Educ Perioper Med. 2024 Sep 30;26(3):E729. doi: 10.46374/VolXXVI_Issue3_Moore. eCollection 2024 Jul-Sep.
7. Transfer Learning for Sentiment Analysis Using BERT Based Supervised Fine-Tuning.
   Sensors (Basel). 2022 May 30;22(11):4157. doi: 10.3390/s22114157.
8. Comparison of different feature extraction methods for applicable automated ICD coding.
   BMC Med Inform Decis Mak. 2022 Jan 12;22(1):11. doi: 10.1186/s12911-022-01753-5.
9. Categorization of tweets for damages: infrastructure and human damage assessment using fine-tuned BERT model.
   PeerJ Comput Sci. 2024 Feb 16;10:e1859. doi: 10.7717/peerj-cs.1859. eCollection 2024.
10. Multi-class sentiment analysis of urdu text using multilingual BERT.
   Sci Rep. 2022 Mar 31;12(1):5436. doi: 10.1038/s41598-022-09381-9.

Cited by

1. Novel approach for Arabic fake news classification using embedding from large language features with CNN-LSTM ensemble model and explainable AI.
   Sci Rep. 2024 Dec 16;14(1):30463. doi: 10.1038/s41598-024-82111-5.
2. Categorization of tweets for damages: infrastructure and human damage assessment using fine-tuned BERT model.
   PeerJ Comput Sci. 2024 Feb 16;10:e1859. doi: 10.7717/peerj-cs.1859. eCollection 2024.

References

1. The spread of low-credibility content by social bots.
   Nat Commun. 2018 Nov 20;9(1):4787. doi: 10.1038/s41467-018-06930-7.