Vitaly Vanchurin
Department of Physics, University of Minnesota Duluth, Duluth, MN 55812, USA.
Entropy (Basel). 2020 Oct 26;22(11):1210. doi: 10.3390/e22111210.
We discuss a possibility that the entire universe on its most fundamental level is a neural network. We identify two different types of dynamical degrees of freedom: "trainable" variables (e.g., bias vector or weight matrix) and "hidden" variables (e.g., state vector of neurons). We first consider stochastic evolution of the trainable variables to argue that near equilibrium their dynamics is well approximated by Madelung equations (with free energy representing the phase) and further away from equilibrium by Hamilton-Jacobi equations (with free energy representing Hamilton's principal function). This shows that the trainable variables can indeed exhibit classical and quantum behaviors, with the state vector of neurons representing the hidden variables. We then study stochastic evolution of the hidden variables by considering non-interacting subsystems with average state vectors $\bar{x}_1, \ldots, \bar{x}_D$ and an overall average state vector $\bar{x}_0$. In the limit when the weight matrix is a permutation matrix, the dynamics of $\bar{x}_\mu$ can be described in terms of relativistic strings in an emergent $D+1$ dimensional Minkowski space-time. If the subsystems are minimally interacting, with interactions described by a metric tensor, then the emergent space-time becomes curved. We argue that the entropy production in such a system is a local function of the metric tensor, which should be determined by the symmetries of the Onsager tensor. It turns out that a very simple and highly symmetric Onsager tensor leads to entropy production described by the Einstein-Hilbert term. This shows that the learning dynamics of a neural network can indeed exhibit approximate behaviors described by both quantum mechanics and general relativity. We also discuss a possibility that the two descriptions are holographic duals of each other.
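For reference, the Madelung equations invoked above take the following standard hydrodynamic form (standard textbook notation, not quoted from this paper: $\rho$ is a probability density and $S$ the phase, which the abstract identifies with the free energy):

```latex
\begin{aligned}
&\partial_t \rho + \nabla\!\cdot\!\left(\rho\,\frac{\nabla S}{m}\right) = 0,\\[4pt]
&\partial_t S + \frac{(\nabla S)^2}{2m} + V
  - \frac{\hbar^2}{2m}\,\frac{\nabla^2\sqrt{\rho}}{\sqrt{\rho}} = 0.
\end{aligned}
```

Dropping the last (quantum potential) term in the second equation recovers the classical Hamilton-Jacobi equation, which matches the abstract's claim that the two regimes differ only in how far the system is from equilibrium.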
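To make the two types of degrees of freedom concrete, here is a minimal toy sketch (a hypothetical illustration, not the paper's model): a small network whose trainable variables (weight matrix `w`, bias vector `b`) undergo noisy gradient descent, a standard Langevin-like stand-in for the stochastic learning dynamics the abstract describes, while the hidden variables (the neuron state vector `x`) are held fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8                         # number of neurons (arbitrary toy size)
w = rng.normal(size=(N, N))   # trainable: weight matrix
b = rng.normal(size=N)        # trainable: bias vector
x = rng.normal(size=N)        # hidden: state vector of neurons

def loss(w, b, x):
    # A simple quadratic loss: distance of the state from being a
    # fixed point of the activation dynamics x -> tanh(w @ x + b).
    return 0.5 * np.sum((x - np.tanh(w @ x + b)) ** 2)

loss0 = loss(w, b, x)         # loss before any learning

# Stochastic evolution of the trainable variables: gradient descent
# on the loss plus small Gaussian noise. For brevity we only train
# the bias vector, using finite-difference gradients.
eta, sigma, eps = 0.05, 1e-3, 1e-6
for step in range(500):
    g = np.zeros_like(b)
    for i in range(N):
        bp = b.copy()
        bp[i] += eps
        g[i] = (loss(w, bp, x) - loss(w, b, x)) / eps
    b -= eta * g + sigma * rng.normal(size=N)

loss1 = loss(w, b, x)         # loss after noisy training
print(loss0, loss1)
```

The point of the sketch is only the separation of roles: `w` and `b` evolve stochastically toward (a neighborhood of) equilibrium, while `x` plays the part of the hidden variables whose statistics the trainable dynamics responds to.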