Analytical Bounds on the Local Lipschitz Constants of ReLU Networks
Trevor Avant, Kristi A. Morgansen
IEEE Trans Neural Netw Learn Syst. 2024 Oct;35(10):13902-13913. doi: 10.1109/TNNLS.2023.3273228. Epub 2024 Oct 7.
In this article, we determine analytical upper bounds on the local Lipschitz constants of feedforward neural networks with rectified linear unit (ReLU) activation functions. We do so by deriving Lipschitz constants and bounds for ReLU, affine-ReLU, and max pooling functions, and combining the results to determine a network-wide bound. Our method uses several insights to obtain tight bounds, such as keeping track of the zero elements of each layer, and analyzing the composition of affine and ReLU functions. Furthermore, we employ a careful computational approach which allows us to apply our method to large networks such as AlexNet and VGG-16. We present several examples using different networks, which show how our local Lipschitz bounds are tighter than the global Lipschitz bounds. We also show how our method can be applied to provide adversarial bounds for classification networks. These results show that our method produces the largest known bounds on minimum adversarial perturbations for large networks such as AlexNet and VGG-16.
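As context for the abstract's claim that local bounds are tighter than global ones, the following minimal NumPy sketch computes the standard global Lipschitz upper bound (the product of per-layer spectral norms) and a generic margin-based adversarial certificate. This is background under stated assumptions, not the authors' local method; the layer sizes, logits, and the sqrt(2) certificate constant are illustrative.

```python
import numpy as np

# Sketch of the standard *global* Lipschitz upper bound that the paper's
# local bounds improve on: ReLU and max pooling are 1-Lipschitz, so the
# product of the affine layers' spectral norms bounds the whole network.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((64, 32)),   # hypothetical layer shapes
           rng.standard_normal((10, 64))]

global_bound = 1.0
for W in weights:
    # The spectral norm (largest singular value) is the exact Lipschitz
    # constant of the affine map x -> W x + b under the 2-norm.
    global_bound *= np.linalg.norm(W, ord=2)
print(f"global Lipschitz upper bound: {global_bound:.3f}")

# Illustrative margin-based adversarial certificate (one common form from
# the Lipschitz-margin literature; the exact constant depends on norm
# conventions): if L bounds the Lipschitz constant of the logit map near x,
# no 2-norm perturbation smaller than margin / (sqrt(2) * L) can change
# the predicted class.
logits = np.array([3.2, 1.1, 0.4])           # hypothetical network output
margin = logits.max() - np.sort(logits)[-2]  # gap to the runner-up class
print(f"certified radius (sketch): {margin / (np.sqrt(2) * global_bound):.4f}")
```

Replacing the global constant with a tighter local Lipschitz bound, for example by tracking which ReLU inputs are zero over the region of interest as the paper does, directly enlarges the certified radius produced by the second step.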