
Learning feature representations with a cost-relevant sparse autoencoder.

Author Affiliation

School of Science and Technology, Applied Autonomous Sensor Systems, Örebro University, SE-701 82, Örebro, Sweden.

Publication Information

Int J Neural Syst. 2015 Feb;25(1):1450034. doi: 10.1142/S0129065714500348.

Abstract

There is increasing interest in the machine learning community in automatically learning feature representations directly from (unlabeled) data instead of using hand-designed features. The autoencoder is one method that can be used for this purpose. However, for data sets with a high degree of noise, a large amount of the autoencoder's representational capacity is used to minimize the reconstruction error for these noisy inputs. This paper proposes a method that improves the feature learning process by focusing on the task-relevant information in the data. This selective attention is achieved by weighting the reconstruction error and reducing the influence of noisy inputs during the learning process. The proposed model is trained on a number of publicly available image data sets, and the test error rate is compared to a standard sparse autoencoder and other methods, such as the denoising autoencoder and the contractive autoencoder.
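To make the weighted-reconstruction idea concrete, the following is a minimal sketch (not the authors' exact formulation, which is not given in the abstract) of a sparse autoencoder whose squared reconstruction error is weighted per input dimension, so that inputs judged noisy or task-irrelevant contribute less to the cost. The `relevance_weights` tensor and the L1 sparsity penalty are illustrative assumptions.

```python
# Hypothetical sketch of a cost-weighted sparse autoencoder in PyTorch.
# The weighting scheme (relevance_weights) is an assumption; the paper's
# own cost function is not specified in the abstract above.
import torch
import torch.nn as nn


class WeightedSparseAutoencoder(nn.Module):
    def __init__(self, n_inputs, n_hidden, sparsity_coeff=1e-3):
        super().__init__()
        self.encoder = nn.Linear(n_inputs, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_inputs)
        self.sparsity_coeff = sparsity_coeff

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))      # learned feature representation
        x_hat = torch.sigmoid(self.decoder(h))  # reconstruction of the input
        return x_hat, h

    def loss(self, x, x_hat, h, relevance_weights):
        # Weighted squared reconstruction error: noisy / task-irrelevant
        # inputs receive small weights and thus little influence.
        recon = (relevance_weights * (x - x_hat) ** 2).mean()
        # Simple L1 sparsity penalty on the hidden activations
        # (a stand-in for the sparse autoencoder objective).
        sparsity = self.sparsity_coeff * h.abs().mean()
        return recon + sparsity


# Usage sketch: the weights could come from, e.g., an estimate of per-pixel
# noise; here they are a placeholder of ones, which reduces the model to a
# standard sparse autoencoder.
model = WeightedSparseAutoencoder(n_inputs=784, n_hidden=196)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(32, 784)            # dummy mini-batch of flattened images
weights = torch.ones_like(x)       # hypothetical relevance weights

x_hat, h = model(x)
loss = model.loss(x, x_hat, h, weights)
loss.backward()
optimizer.step()
```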

