

Weighted Gate Layer Autoencoders.

Author Information

Heba El-Fiqi, Min Wang, Kathryn Kasmarik, Anastasios Bezerianos, Kay Chen Tan, Hussein A. Abbass

Publication Information

IEEE Trans Cybern. 2022 Aug;52(8):7242-7253. doi: 10.1109/TCYB.2021.3049583. Epub 2022 Jul 19.

Abstract

A single dataset can hide a significant number of relationships among its features. Learning these relationships simultaneously avoids the time complexity of running the learning algorithm for every possible relationship, and affords the learner the ability to recover missing data and substitute erroneous values using the available data. In our previous research, we introduced the gate-layer autoencoder (GLAE), an architecture that enables a single model to approximate multiple relationships simultaneously. GLAE controls what an autoencoder learns from a time series by switching certain input gates on and off, thereby allowing or blocking the flow of data through the network to increase the network's robustness. However, GLAE is limited to binary gates. In this article, we generalize the architecture to the weighted gate layer autoencoder (WGLAE) by adding a weight layer that updates the error according to which variables are more critical, encouraging the network to learn these variables. This new weight layer can also be used as an output gate, and it uses additional control parameters to give the network the ability to represent different models that learn through gating the inputs. We compare the architecture against similar architectures in the literature and demonstrate that it produces more robust autoencoders that can reconstruct both incomplete synthetic and real data with high accuracy.
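The abstract does not include reference code; the listing below is only a minimal PyTorch sketch of the gating idea as described here, not the authors' implementation. It assumes two illustrative pieces: a per-feature gate vector that masks inputs before encoding, and a per-feature weight vector that re-weights the reconstruction error so that critical (or gated-out) variables contribute more to the loss. All names (WGLAESketch, gate, error_weights) are hypothetical.

# Minimal sketch (assumption, not the paper's code) of input gating plus a
# weighted reconstruction error for an autoencoder.
import torch
import torch.nn as nn


class WGLAESketch(nn.Module):
    def __init__(self, n_features: int, n_hidden: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.Tanh())
        self.decoder = nn.Linear(n_hidden, n_features)

    def forward(self, x: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # gate holds per-feature values in [0, 1]; 0 blocks a variable,
        # 1 lets it flow through the network unchanged.
        return self.decoder(self.encoder(x * gate))


def weighted_reconstruction_loss(x_hat, x, error_weights):
    # error_weights emphasises the variables the network should learn to
    # reconstruct (e.g. the blocked ones); shape: (n_features,).
    return torch.mean(error_weights * (x_hat - x) ** 2)


if __name__ == "__main__":
    torch.manual_seed(0)
    model = WGLAESketch(n_features=4, n_hidden=2)
    x = torch.randn(8, 4)                         # toy batch of 8 samples
    gate = torch.tensor([1.0, 1.0, 0.0, 1.0])     # block the third variable
    weights = torch.tensor([1.0, 1.0, 4.0, 1.0])  # stress the blocked variable
    loss = weighted_reconstruction_loss(model(x, gate), x, weights)
    loss.backward()
    print(float(loss))

In this toy setup the third input is zeroed before encoding while its reconstruction error is up-weighted, which mirrors the abstract's description of gating inputs and steering learning toward the more critical variables.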

