
Learning to see stuff.

Authors

Fleming Roland W, Storrs Katherine R

Affiliation

Justus-Liebig-Universität Giessen, Germany.

Publication

Curr Opin Behav Sci. 2019 Dec;30:100-108. doi: 10.1016/j.cobeha.2019.07.004.

Abstract

Materials with complex appearances, like textiles and foodstuffs, pose challenges for conventional theories of vision. But recent advances in unsupervised deep learning provide a framework for explaining how we learn to see them. We suggest that perception does not involve estimating physical quantities like reflectance or lighting. Instead, representations emerge from learning to encode and predict the visual input as efficiently and accurately as possible. Neural networks can be trained to compress natural images or to predict frames in movies without 'ground truth' data about the outside world. Yet, to succeed, such systems may automatically discover how to disentangle distal causal factors. Such 'statistical appearance models' potentially provide a coherent explanation of both failures and successes in perception.

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f709/6919301/7eb35e6a7f55/gr1.jpg
