
Textual emotion detection utilizing a transfer learning approach.

Authors

Hadikhah Mozhdehi Mahsa, Eftekhari Moghadam AmirMasoud

Affiliation

Faculty of Computer and Information Technology, Islamic Azad University, Qazvin, Iran.

Publication

J Supercomput. 2023 Mar 22:1-15. doi: 10.1007/s11227-023-05168-5.

Abstract

Many attempts have been made to automate textual emotion detection using traditional deep learning models such as LSTM, GRU, and BiLSTM. However, these models require large datasets, substantial computing resources, and long training times; they are also prone to forgetting and perform poorly on small datasets. In this paper, we aim to demonstrate that transfer learning techniques can capture the contextual meaning of text more effectively, and thus detect the emotion expressed in it more accurately, even without large amounts of data or long training times. To this end, we conduct an experiment with a pre-trained model called EmotionalBERT, which is based on bidirectional encoder representations from transformers (BERT), and compare its performance to RNN-based models on two benchmark datasets, focusing on how the amount of training data affects each model's performance.
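The core idea of the transfer-learning setup the abstract describes — a frozen pre-trained encoder supplying features, with only a small classification head trained on limited labelled data — can be sketched in miniature. This is not the paper's EmotionalBERT; the hand-coded word features below are a hypothetical stand-in for pre-trained BERT embeddings, used only to illustrate why a frozen encoder lets a head learn from very few examples.

```python
import math

# Hypothetical "pretrained" lexicon standing in for BERT embeddings:
# maps words to (valence, arousal) features learned elsewhere.
PRETRAINED = {
    "happy": (0.9, 0.6), "joy": (0.8, 0.7), "love": (0.9, 0.5),
    "sad": (-0.8, -0.4), "angry": (-0.7, 0.8), "fear": (-0.6, 0.7),
}

def encode(text):
    """Frozen encoder: average the pretrained features of known words."""
    feats = [PRETRAINED[w] for w in text.lower().split() if w in PRETRAINED]
    if not feats:
        return (0.0, 0.0)
    return tuple(sum(f[i] for f in feats) / len(feats) for i in range(2))

def train_head(examples, epochs=200, lr=0.5):
    """Train only a logistic-regression head; the encoder stays frozen."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for text, label in examples:
            x = encode(text)
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - label  # gradient of the log loss w.r.t. z
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            b -= lr * g
    return w, b

def predict(w, b, text):
    """1 = positive emotion, 0 = negative emotion."""
    x = encode(text)
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# A deliberately tiny labelled set: with pretrained features,
# four examples are enough to fit the head.
train = [("happy joy", 1), ("sad fear", 0), ("love", 1), ("angry", 0)]
w, b = train_head(train)
```

Training an RNN from scratch on four examples would have to learn both the representation and the decision boundary; here only the two-weight head is learned, which is the data-efficiency argument the abstract makes for EmotionalBERT.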

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/80a5/10032627/3a851cf1f862/11227_2023_5168_Fig1_HTML.jpg
