Novel Efficient RNN and LSTM-Like Architectures: Recurrent and Gated Broad Learning Systems and Their Applications for Text Classification.

Publication information

IEEE Trans Cybern. 2021 Mar;51(3):1586-1597. doi: 10.1109/TCYB.2020.2969705. Epub 2021 Feb 17.

Abstract

High accuracy in text classification can be achieved by simultaneously learning multiple types of information, such as sequence information and word importance. In this article, a class of flat neural networks called the broad learning system (BLS) is employed to derive two novel learning methods for text classification: a recurrent BLS (R-BLS) and an LSTM-like gated BLS (G-BLS). The two proposed methods possess three advantages: 1) higher accuracy due to the simultaneous learning of multiple types of information, even compared to a deep LSTM, which extracts deeper but only a single type of information; 2) significantly faster training due to the noniterative learning in BLS, compared to LSTM; and 3) easy integration with other discriminant information for further improvement. The proposed methods have been evaluated on 13 real-world datasets spanning various text-classification tasks. Experimental results show that the proposed methods achieve higher accuracy than LSTM while taking significantly less training time on most evaluated datasets, especially when the LSTM uses a deep architecture. Compared to R-BLS, G-BLS has an extra forget gate that controls the flow of information (similar to LSTM) and further improves text-classification accuracy, so G-BLS is more effective while R-BLS is more efficient.
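The "noniterative learning" the abstract credits for BLS's speed can be illustrated with a minimal sketch: random feature and enhancement nodes are generated once and kept fixed, and only the output weights are learned, in closed form via ridge regression rather than by gradient descent. The sketch below is an illustrative toy, not the paper's R-BLS or G-BLS; all names (`fit_bls`, `predict_bls`) and hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_bls(X, Y, n_feature=40, n_enhance=60, reg=1e-2):
    """Toy broad-learning-style classifier (illustrative sketch only).

    Random feature nodes and enhancement nodes are fixed after
    initialization; only the output weights W are learned, in closed
    form, so training is noniterative.
    """
    d = X.shape[1]
    Wf = rng.standard_normal((d, n_feature))          # random feature mapping
    Z = np.tanh(X @ Wf)                               # feature nodes
    We = rng.standard_normal((n_feature, n_enhance))  # random enhancement mapping
    H = np.tanh(Z @ We)                               # enhancement nodes
    A = np.hstack([Z, H])                             # broad (flat) expansion
    # Closed-form ridge solution of A W ~= Y:
    #   W = (A^T A + reg * I)^{-1} A^T Y
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return Wf, We, W

def predict_bls(X, Wf, We, W):
    Z = np.tanh(X @ Wf)
    H = np.tanh(Z @ We)
    A = np.hstack([Z, H])
    return (A @ W).argmax(axis=1)

# Toy two-class demo on well-separated Gaussian blobs.
X0 = rng.standard_normal((100, 5)) - 2.0
X1 = rng.standard_normal((100, 5)) + 2.0
X = np.vstack([X0, X1])
Y = np.zeros((200, 2))
Y[:100, 0] = 1.0   # one-hot labels, class 0
Y[100:, 1] = 1.0   # one-hot labels, class 1
labels = np.r_[np.zeros(100), np.ones(100)]

Wf, We, W = fit_bls(X, Y)
acc = (predict_bls(X, Wf, We, W) == labels).mean()
```

The single linear solve replaces the many backpropagation epochs an LSTM would need, which is the source of the training-time advantage the abstract reports; the recurrent and gating structure of R-BLS/G-BLS is additional machinery on top of this closed-form core.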
