Statistics Department, Florida State University, Tallahassee, FL 32306, USA.
Sensors (Basel). 2023 Apr 18;23(8):4072. doi: 10.3390/s23084072.
Neural networks are usually trained with variants of gradient descent-based optimization algorithms such as stochastic gradient descent or the Adam optimizer. Recent theoretical work has shown that the critical points (points where the gradient of the loss is zero) of two-layer ReLU networks with the square loss are not all local minima. However, in this work, we explore an algorithm for training two-layer neural networks with ReLU-like activation and the square loss that alternately finds the critical points of the loss function analytically for one layer while keeping the other layer and the neuron activation pattern fixed. Experiments indicate that this simple algorithm can find deeper optima than stochastic gradient descent or the Adam optimizer, obtaining significantly smaller training loss values on four out of the five real datasets evaluated. Moreover, the method is faster than the gradient descent methods and has virtually no tuning parameters.
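As a rough illustration of the alternating idea described above (a sketch, not the paper's implementation), the code below assumes a two-layer regression network f(x) = W2 · relu(W1 x + b1) + b2 trained with the square loss. Once the activation pattern and one layer are frozen, the loss is quadratic in the remaining layer's parameters, so its critical point is a least-squares solution. The function name fit_two_layer_alternating, the per-unit update order, and the small ridge term (a numerical safeguard only) are illustrative assumptions.

# Minimal sketch of alternating analytic updates for a two-layer ReLU network
# with the square loss. Illustrative only; assumptions are noted above.
import numpy as np


def relu(z):
    return np.maximum(z, 0.0)


def fit_two_layer_alternating(X, y, hidden=32, iters=20, ridge=1e-8, seed=0):
    n, d = X.shape
    rng = np.random.default_rng(seed)
    W1 = 0.5 * rng.standard_normal((d, hidden))
    b1 = np.zeros(hidden)

    for _ in range(iters):
        # Second-layer step: with the hidden features fixed, the output weights
        # solve an ordinary least-squares problem in closed form.
        H = relu(X @ W1 + b1)                         # n x hidden
        Hb = np.hstack([H, np.ones((n, 1))])          # append bias column
        w2 = np.linalg.solve(Hb.T @ Hb + ridge * np.eye(hidden + 1), Hb.T @ y)
        W2, b2 = w2[:-1], w2[-1]

        # First-layer step: freeze the activation pattern M (which units are
        # active on which samples); the network output is then linear in
        # (W1, b1), so each hidden unit's weights also have a closed-form
        # critical point.
        M = (X @ W1 + b1 > 0).astype(float)           # n x hidden, frozen pattern
        Xb = np.hstack([X, np.ones((n, 1))])          # inputs with bias column
        for j in range(hidden):
            pre = X @ W1 + b1                         # current pre-activations
            # Contribution of every unit except j under the frozen pattern.
            others = (M * pre) @ W2 - M[:, j] * pre[:, j] * W2[j]
            target = y - b2 - others                  # residual unit j must fit
            # Unit j's linearized output is M_ij * W2_j * (x_i . w + b).
            A = Xb * (M[:, [j]] * W2[j])
            wj = np.linalg.solve(A.T @ A + ridge * np.eye(d + 1), A.T @ target)
            W1[:, j], b1[j] = wj[:-1], wj[-1]

    return W1, b1, W2, b2

Calling fit_two_layer_alternating(X, y) on a standardized regression dataset exercises the alternation between closed-form layer updates; the paper's actual procedure, datasets, and stopping criteria are described in the full text.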