
Representation learning: a review and new perspectives.

Author information

Department of Computer Science and Operations Research, University of Montreal, Montreal, Quebec H3C 3J7, Canada.

Publication information

IEEE Trans Pattern Anal Mach Intell. 2013 Aug;35(8):1798-828. doi: 10.1109/TPAMI.2013.50.

Abstract

The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning.
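One of the model families the review covers is the autoencoder, which learns a representation by encoding the input into a code and reconstructing the input from it. As a hedged illustration only (not the paper's own algorithm or hyperparameters), the sketch below trains a tiny single-hidden-layer autoencoder with NumPy on synthetic data whose 10-D observations are generated from 3 underlying factors of variation, the kind of hidden explanatory structure the abstract alludes to:

```python
import numpy as np

# Minimal single-layer autoencoder: encode h = sigmoid(x W1 + b1),
# reconstruct x' = h W2 + b2, and minimize squared reconstruction error.
# All sizes, data, and hyperparameters here are illustrative assumptions.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 200 points in 10-D that actually lie near a 3-D subspace,
# i.e. a simple case of a few explanatory factors behind the data.
factors = rng.normal(size=(200, 3))
X = factors @ rng.normal(size=(3, 10)) + 0.01 * rng.normal(size=(200, 10))

n_in, n_hidden = 10, 3
W1 = 0.1 * rng.normal(size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.normal(size=(n_hidden, n_in)); b2 = np.zeros(n_in)

initial_loss = 0.5 * np.mean((sigmoid(X @ W1 + b1) @ W2 + b2 - X) ** 2)

lr = 0.1
for _ in range(500):
    H = sigmoid(X @ W1 + b1)            # encoder: the learned representation
    Xhat = H @ W2 + b2                  # decoder: reconstruction of the input
    err = Xhat - X                      # gradient of 0.5 * ||Xhat - X||^2
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = err @ W2.T * H * (1 - H)       # backprop through the sigmoid encoder
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1      # plain gradient-descent updates
    W2 -= lr * gW2; b2 -= lr * gb2

final_loss = 0.5 * np.mean((sigmoid(X @ W1 + b1) @ W2 + b2 - X) ** 2)
```

The reconstruction error drops as the 3-unit code learns to capture the data's low-dimensional structure; real representation-learning systems covered in the review (stacked autoencoders, RBMs, etc.) build on this basic encode-decode principle at much larger scale.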

