
Reduced-Order Neural Network Synthesis With Robustness Guarantees.

Author information

Drummond Ross, Turner Matthew C, Duncan Stephen R

Publication information

IEEE Trans Neural Netw Learn Syst. 2022 Jun 23;PP. doi: 10.1109/TNNLS.2022.3182893.

Abstract

In the wake of the explosive growth in smartphones and cyber-physical systems, there has been an accelerating shift in how data are generated, away from centralized data toward on-device-generated data. In response, machine learning algorithms are being adapted to run locally on-board potentially hardware-limited devices to improve user privacy, reduce latency, and be more energy efficient. However, our understanding of how these device-oriented algorithms behave and should be trained is still fairly limited. To address this issue, a method is introduced to automatically synthesize reduced-order neural networks (having fewer neurons) that approximate the input-output mapping of a larger one. The reduced-order neural network's weights and biases are generated from a convex semidefinite program that minimizes the worst-case approximation error with respect to the larger network. Worst-case bounds for this approximation error are obtained, and the approach can be applied to a wide variety of neural network architectures. What differentiates the proposed approach from existing methods for generating small neural networks, e.g., pruning, is the inclusion of the worst-case approximation error directly within the training cost function, which should add robustness to out-of-sample data points. Numerical examples highlight the potential of the proposed approach. The overriding goal of this article is to generalize recent results in the robustness analysis of neural networks to a robust synthesis problem for their weights and biases.
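The core idea, approximating a large network's input-output map with a smaller one and tracking the worst-case error, can be illustrated with a minimal sketch. The paper synthesizes the reduced network's weights via a convex semidefinite program with an a priori worst-case bound; the toy below swaps that in for a data-driven surrogate (least-squares fitting of the reduced network's output weights over samples, with the empirical worst-case error simply measured afterward). All networks and dimensions here are hypothetical stand-ins, not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "large" network: 2 inputs, 64 ReLU neurons, scalar output
# (a stand-in for the full network whose input-output map we approximate).
W1, b1 = rng.normal(size=(64, 2)), rng.normal(size=64)
w2 = rng.normal(size=64)

def large_net(x):
    return np.maximum(W1 @ x + b1, 0.0) @ w2

# Reduced-order network: only 8 ReLU neurons. The hidden layer is fixed
# at random; only the output weights are synthesized here (least squares
# over samples, a crude surrogate for the paper's semidefinite program).
V1, c1 = rng.normal(size=(8, 2)), rng.normal(size=8)

X = rng.uniform(-1.0, 1.0, size=(500, 2))        # sample the input domain
H = np.maximum(X @ V1.T + c1, 0.0)               # reduced hidden features
y = np.array([large_net(x) for x in X])          # targets from large net
v2, *_ = np.linalg.lstsq(H, y, rcond=None)       # fit output weights

def reduced_net(x):
    return np.maximum(V1 @ x + c1, 0.0) @ v2

# Empirical worst-case approximation error over the samples. The paper's
# contribution is to bound this quantity for the whole input set a priori,
# via the SDP; here we only measure it on the drawn samples.
worst = np.max(np.abs(H @ v2 - y))
print(f"empirical worst-case error over samples: {worst:.3f}")
```

The gap between this sketch and the paper's method is the point of the abstract: least squares minimizes an average-case (sum-of-squares) error over samples, whereas the proposed semidefinite program targets the worst-case error directly and certifies a bound on it.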
