Yang Pingping, Ma Jiachen, Liu Yong, Liu Meng
Heilongjiang University, Harbin 150000, China.
National University of Defense Technology, Changsha 410073, China.
Math Biosci Eng. 2023 Jul 6;20(8):14699-14717. doi: 10.3934/mbe.2023657.
Fake news has become a severe problem on social media, with substantially more detrimental impacts on society than previously thought. Research on multi-modal fake news detection has substantial practical significance, since online fake news that includes multimedia elements is more likely to mislead users and to propagate widely than text-only fake news. However, existing multi-modal fake news detection methods have the following problems: 1) They usually use traditional CNN models and their variants to extract image features, which cannot fully capture high-quality visual features. 2) They usually fuse inter-modal features by simple concatenation, leading to unsatisfactory detection results. 3) In most fake news, the feature similarity between images and text diverges markedly, yet existing models do not fully exploit this signal. Thus, we propose a novel model (TGA) based on transformers and multi-modal fusion to address the above problems. Specifically, we extract text and image features with different transformers and fuse the features with attention mechanisms. In addition, we utilize the degree of feature similarity between texts and images in the classifier to improve the performance of TGA. Experimental results on public datasets show the effectiveness of TGA*. * Our code is available at https://github.com/PPEXCEPED/TGA.
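To make the described pipeline concrete, below is a minimal sketch of a TGA-style architecture: separate transformer encoders for text and images, attention-based fusion of the two modalities, and a text-image similarity score fed to the classifier. The specific encoder checkpoints (BERT, ViT), dimensions, and layer choices here are illustrative assumptions, not the authors' exact configuration; see the released code at the URL above for the actual implementation.

```python
import torch
import torch.nn as nn
from transformers import BertModel, ViTModel

class TGASketch(nn.Module):
    """Illustrative sketch: transformer encoders + attention fusion + similarity feature."""

    def __init__(self, hidden=768, num_classes=2):
        super().__init__()
        # Assumed encoders; the paper only specifies "different transformers".
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        self.image_encoder = ViTModel.from_pretrained("google/vit-base-patch16-224")
        # Cross-modal attention: text tokens attend to image patches and vice versa.
        self.t2i_attn = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        self.i2t_attn = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        # Classifier takes the fused features plus a scalar text-image similarity.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden + 1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, input_ids, attention_mask, pixel_values):
        text = self.text_encoder(input_ids=input_ids,
                                 attention_mask=attention_mask).last_hidden_state
        image = self.image_encoder(pixel_values=pixel_values).last_hidden_state
        # Attention-based fusion instead of simple feature concatenation.
        text_fused, _ = self.t2i_attn(text, image, image)
        image_fused, _ = self.i2t_attn(image, text, text)
        text_vec = text_fused.mean(dim=1)
        image_vec = image_fused.mean(dim=1)
        # Degree of feature similarity between modalities as an extra classifier input.
        sim = nn.functional.cosine_similarity(text_vec, image_vec, dim=-1)
        fused = torch.cat([text_vec, image_vec, sim.unsqueeze(-1)], dim=-1)
        return self.classifier(fused)
```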