Malik Muhammad Shahid Iqbal, Imran Tahir, Mona Mamdouh Jamjoom
Department of Computer Science, School of Data Analysis and Artificial Intelligence, Higher School of Economics, Moscow, Russia.
Department of Computer Science, Capital University of Science and Technology, Islamabad, Pakistan.
PeerJ Comput Sci. 2023 Feb 20;9:e1248. doi: 10.7717/peerj-cs.1248. eCollection 2023.
Online propaganda is a mechanism for influencing the opinions of social media users and a growing menace to public health, democratic institutions, and civil society. The present study proposes a propaganda detection framework as a binary classification model built on a news repository. Several feature models are explored to develop a robust model, including part-of-speech, LIWC, word uni-gram, Embeddings from Language Models (ELMo), FastText, word2vec, latent semantic analysis (LSA), and char tri-gram features. In addition, BERT is fine-tuned. Three oversampling methods are investigated to handle the class imbalance of the Qprop dataset, with SMOTE combined with Edited Nearest Neighbors (SMOTE-ENN) producing the best results. Fine-tuning experiments show that BERT with a sequence length of 320 is the best-performing BERT variant. As a standalone feature set, char tri-grams outperform all other features. Robust performance is observed for the char tri-gram + BERT and char tri-gram + word2vec combinations, both of which surpass two state-of-the-art baselines. In contrast to prior approaches, adding feature selection further improves performance, achieving more than 97.60% recall, F1-score, and AUC on the dev and test portions of the dataset. The findings of the present study can be used to organize news articles for public news websites.
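The abstract describes a pipeline of character tri-gram features, SMOTE-ENN resampling, feature selection, and a binary classifier. Below is a minimal sketch of how such a pipeline could be assembled with scikit-learn and imbalanced-learn; the classifier choice (LogisticRegression), the chi-squared feature-selection step, the value of k, and the placeholder variable names (train_texts, train_labels, dev_texts) are illustrative assumptions, not the paper's exact configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from imblearn.combine import SMOTEENN
from imblearn.pipeline import make_pipeline

# Character tri-gram features (the paper's strongest standalone feature set).
char_trigrams = TfidfVectorizer(analyzer="char", ngram_range=(3, 3))

# SMOTE-ENN resampling to counter class imbalance, an illustrative
# chi-squared feature-selection step (k is arbitrary here), and a
# placeholder binary classifier (propaganda vs. non-propaganda).
pipeline = make_pipeline(
    char_trigrams,
    SMOTEENN(random_state=42),
    SelectKBest(chi2, k=5000),
    LogisticRegression(max_iter=1000),
)

# train_texts / train_labels stand in for the Qprop-style article texts and
# their binary propaganda labels; dev_texts for the held-out dev split.
# pipeline.fit(train_texts, train_labels)
# dev_preds = pipeline.predict(dev_texts)
```

The sampler is placed inside an imbalanced-learn pipeline so that resampling is applied only during fitting, never to the dev or test data.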