Erik Hoel
Allen Discovery Center, Tufts University, Medford, MA, USA.
Patterns (N Y). 2021 May 14;2(5):100244. doi: 10.1016/j.patter.2021.100244.
Understanding of the evolved biological function of sleep has advanced considerably in the past decade. However, no equivalent understanding of dreams has emerged. Contemporary neuroscientific theories often view dreams as epiphenomena, and many of the proposals for their biological function are contradicted by the phenomenology of dreams themselves. Now, the recent advent of deep neural networks (DNNs) has finally provided the novel conceptual framework within which to understand the evolved function of dreams. Notably, all DNNs face the issue of overfitting as they learn, which is when performance on one dataset increases but the network's performance fails to generalize (often measured by the divergence of performance on training versus testing datasets). This ubiquitous problem in DNNs is often solved by modelers via "noise injections" in the form of noisy or corrupted inputs. The goal of this paper is to argue that the brain faces a similar challenge of overfitting and that nightly dreams evolved to combat the brain's overfitting during its daily learning. That is, dreams are a biological mechanism for increasing generalizability via the creation of corrupted sensory inputs from stochastic activity across the hierarchy of neural structures. Sleep loss, specifically dream loss, leads to an overfitted brain that can still memorize and learn but fails to generalize appropriately. Herein, this "overfitted brain hypothesis" is explicitly developed and then compared and contrasted with existing contemporary neuroscientific theories of dreams. Existing evidence for the hypothesis is surveyed within both neuroscience and deep learning, and a set of testable predictions is put forward that can be pursued both in vivo and in silico.
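For readers unfamiliar with the deep-learning technique the abstract invokes, the following is a minimal sketch of input noise injection as a regularizer. It assumes PyTorch, a synthetic sine-regression task, and an arbitrary noise level; none of these come from the paper itself. It shows the two ideas referenced above: overfitting diagnosed as a divergence between training and test loss, and corrupted inputs used to counteract it.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical toy task: regress y = sin(x) from a small training set,
# so the network can easily overfit without regularization.
x_train = torch.linspace(-3, 3, 64).unsqueeze(1)
y_train = torch.sin(x_train)
x_test = torch.linspace(-3, 3, 512).unsqueeze(1)
y_test = torch.sin(x_test)

model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

noise_std = 0.1  # assumed corruption strength; a free parameter in this sketch

for step in range(2000):
    model.train()
    # Noise injection: corrupt the training inputs with Gaussian noise each step,
    # so the network never fits exactly the same inputs twice.
    x_noisy = x_train + noise_std * torch.randn_like(x_train)
    loss = loss_fn(model(x_noisy), y_train)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if step % 500 == 0:
        model.eval()
        with torch.no_grad():
            test_loss = loss_fn(model(x_test), y_test)
        # A growing gap between train and test loss is the usual signature of overfitting.
        print(f"step {step:4d}  train {loss.item():.4f}  test {test_loss.item():.4f}")
```

Other standard noise injections (dropout, label smoothing, domain randomization) follow the same logic of deliberately corrupting the learning signal to improve generalization; the choice of Gaussian input noise here is illustrative only.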