Recursion Newton-Like Algorithm for l2,0-ReLU Deep Neural Networks.

Author Information

Zhang Hui, Yuan Zhengpeng, Xiu Naihua

Publication Information

IEEE Trans Neural Netw Learn Syst. 2023 Sep;34(9):5882-5896. doi: 10.1109/TNNLS.2021.3131406. Epub 2023 Sep 1.

Abstract

Rectified linear unit (ReLU) deep neural network (DNN) is a classical model in deep learning and has achieved great success in many applications. However, this model typically involves a very large number of parameters, which not only requires huge memory but also imposes an unbearable computational burden. The l2,0 regularization has become a useful technique to address this issue. In this article, we design a recursion Newton-like algorithm (RNLA) to simultaneously train and compress ReLU-DNNs with l2,0 regularization. First, we reformulate the multicomposite training model into a constrained optimization problem by explicitly introducing the network nodes as optimization variables. Based on a penalty function of this reformulation, we obtain two types of minimization subproblems. Second, we establish first-order optimality conditions for the P-stationary points of the two subproblems; these P-stationary points enable us to equivalently derive two sequences of stationary equations, which are piecewise linear matrix equations. We solve these equations with a column Newton-like method restricted to a group sparse subspace, at a reduced computational scale and cost. Finally, numerical experiments on real datasets demonstrate that the proposed RNLA is effective and applicable.
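The abstract omits the algorithmic details, but the key primitive behind the "P-stationary points" and the "group sparse subspace" it mentions is the proximal (group hard-thresholding) map of the l2,0 penalty, which zeroes whole parameter groups whose l2 norm is too small. The following is a minimal NumPy sketch of that operator under stated assumptions (rows of a weight matrix as the groups, a penalty weight lam, and the name group_hard_threshold are illustrative choices, not the paper's API); it shows the thresholding step only and is not the RNLA itself.

    import numpy as np

    def group_hard_threshold(W, lam):
        """Proximal operator of lam * ||W||_{2,0}, where ||W||_{2,0}
        counts the rows of W with nonzero l2 norm (a sketch: grouping
        by rows and this name are assumptions, not the paper's API).

        A row survives iff its l2 norm exceeds sqrt(2 * lam); all
        other rows are set to zero. Points fixed under this map,
        composed with a gradient or Newton step, are P-stationary.
        """
        row_norms = np.linalg.norm(W, axis=1)    # l2 norm of each group (row)
        keep = row_norms > np.sqrt(2.0 * lam)    # surviving group support
        W_out = np.zeros_like(W)
        W_out[keep] = W[keep]                    # kept rows pass through unchanged
        return W_out, np.flatnonzero(keep)

    # Toy usage: threshold a random weight matrix and inspect the support.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(6, 4))
    W_sparse, support = group_hard_threshold(W, lam=0.8)
    print("kept rows:", support)

Restricting the subsequent Newton-like solve to the kept rows is what keeps the per-iteration cost low: the linear systems to be solved are only as large as the current group support.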
