Department of Mathematics, National University of Singapore, Singapore.
Neural Netw. 2019 Nov;119:74-84. doi: 10.1016/j.neunet.2019.07.011. Epub 2019 Aug 2.
Given a function dictionary $\mathcal{D}$ and an approximation budget $N \in \mathbb{N}$, nonlinear approximation seeks the linear combination of the best $N$ terms $\sum_{n=1}^{N} g_n T_n$ with $g_n \in \mathbb{R}$ and $T_n \in \mathcal{D}$ to approximate a given function $f$ with the minimum approximation error $\varepsilon_{L,f} := \min_{\{g_n\} \subset \mathbb{R},\, \{T_n\} \subset \mathcal{D}} \big\| f - \sum_{n=1}^{N} g_n T_n \big\|$. Motivated by the recent success of deep learning, we propose dictionaries with functions in the form of compositions, i.e., $T = T_L \circ T_{L-1} \circ \cdots \circ T_1$ for all $T \in \mathcal{D}$, and implement $T$ using ReLU feed-forward neural networks (FNNs) with $L$ hidden layers. We further quantify the improvement of the best $N$-term approximation rate in terms of $N$ when $L$ is increased from 1 to 2 or 3, to show the power of compositions. In the case when $L > 3$, our analysis shows that increasing $L$ cannot improve the approximation rate in terms of $N$. In particular, for any function $f$ on $[0,1]$, regardless of its smoothness and even of its continuity, if $f$ can be approximated using a dictionary when $L = 1$ with the best $N$-term approximation rate $\varepsilon_{L,f} = O(N^{-\eta})$, we show that dictionaries with $L = 2$ can improve the best $N$-term approximation rate to $\varepsilon_{L,f} = O(N^{-2\eta})$. We also show that for Hölder continuous functions of order $\alpha$ on $[0,1]^d$, the application of a dictionary with $L = 3$ in nonlinear approximation can achieve an essentially tight best $N$-term approximation rate $\varepsilon_{L,f} = O(N^{-2\alpha/d})$. Finally, we show that dictionaries consisting of wide FNNs with a few hidden layers are more attractive in terms of computational efficiency than dictionaries with narrow and very deep FNNs for approximating Hölder continuous functions, provided that the number of computer cores available in parallel computing is larger than $N$.
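As a rough illustration of the best $N$-term approximation problem described above (and not the construction analyzed in the paper), the following Python sketch builds a toy finite dictionary of one-hidden-layer ($L = 1$) ReLU units on $[0,1]$ and greedily selects $N$ terms, refitting all coefficients by least squares. The grid sizes, the target function, and the greedy selection rule are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: a toy finite dictionary of one-hidden-layer (L = 1) ReLU
# units on [0, 1], with a greedy (matching-pursuit-style) selection of the
# best N terms. All parameters below are illustrative choices.

def relu(z):
    return np.maximum(z, 0.0)

x = np.linspace(0.0, 1.0, 512)            # sample points on [0, 1]
f = np.abs(np.sin(8 * np.pi * x)) ** 0.5  # an example Hoelder-type target

# Dictionary D: ReLU ridge units relu(w * x + b) for (w, b) on a small grid.
ws = np.linspace(-8.0, 8.0, 33)
bs = np.linspace(-8.0, 8.0, 33)
D = np.array([relu(w * x + b) for w in ws for b in bs])   # shape (|D|, 512)

def best_n_term(f, D, N):
    """Greedy N-term approximation: pick one term at a time by correlation
    with the residual, then refit all coefficients by least squares."""
    chosen, residual = [], f.copy()
    for _ in range(N):
        scores = np.abs(D @ residual) / (np.linalg.norm(D, axis=1) + 1e-12)
        chosen.append(int(np.argmax(scores)))
        A = D[chosen].T                                # (512, n) design matrix
        coef, *_ = np.linalg.lstsq(A, f, rcond=None)
        residual = f - A @ coef
    return chosen, coef, np.linalg.norm(residual) / np.sqrt(len(x))

for N in (4, 8, 16, 32):
    _, _, err = best_n_term(f, D, N)
    print(f"N = {N:3d}   approximation error ~ {err:.3e}")
```

The printed errors only show how a greedy $N$-term fit behaves for this particular one-layer dictionary; the paper's results concern the best possible $N$-term rates and how they improve when the dictionary elements are compositions ($L = 2$ or $3$).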