Department of Computer Science, Cukurova University, Adana, Turkey.
Scientific Networking Division, Lawrence Berkeley National Laboratory, Berkeley, CA, United States of America.
Neural Netw. 2021 Nov;143:564-571. doi: 10.1016/j.neunet.2021.07.010. Epub 2021 Jul 12.
Incorporating higher-order optimization functions, such as Levenberg-Marquardt (LM), has been shown to yield better generalizable solutions for deep learning problems. However, these higher-order optimization functions suffer from very long processing times and high training complexity, especially as training datasets become large, such as in multi-view classification problems, where finding global optima is very costly. To address this issue, we develop a solution for LM-enabled classification with, to the best of our knowledge, the first implementation of hinge loss for multi-view classification. Hinge loss allows the neural network to converge faster and perform better than other loss functions, such as logistic or squared loss. We validate our method on multiclass classification challenges of varying complexity and training data size. The empirical results report the training times and accuracy rates achieved, showing that our method outperforms the alternatives in all cases, especially when training time is limited. Our paper presents important results on the relationship between optimization and loss functions and how these choices impact deep learning problems.
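For context, the two ingredients named in the abstract take the following standard forms; this is a general sketch, not the paper's exact formulation, and the symbols (labels y in {-1, +1}, network output \hat{y}, weights w, residuals e, Jacobian J, damping factor \mu) are assumed notation:

% Hinge loss for a label y in {-1, +1} and network output \hat{y}
\ell(y, \hat{y}) = \max\bigl(0,\; 1 - y\,\hat{y}\bigr)

% Levenberg-Marquardt weight update, where J is the Jacobian of the
% residuals e with respect to the weights w and \mu is the damping factor
\Delta w = -\bigl(J^{\top} J + \mu I\bigr)^{-1} J^{\top} e

The damping factor \mu interpolates between Gauss-Newton behavior (small \mu) and gradient descent (large \mu), which is the usual source of LM's per-iteration cost on large training sets.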