Reduced-Order Neural Network Synthesis With Robustness Guarantees

Cited by: 4
Authors
Drummond, Ross [1 ]
Turner, Matthew C. [2 ]
Duncan, Stephen R. [1 ]
Affiliations
[1] Univ Oxford, Dept Engn Sci, Oxford OX1 3PJ, England
[2] Univ Southampton, Dept Elect & Comp Sci, Southampton SO17 1BJ, Hants, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Biological neural networks; Approximation error; Neural networks; Neurons; Robustness; Artificial neural networks; Machine learning algorithms; Neural network compression; reduced order systems; robustness; SYSTEMS; NORM;
DOI
10.1109/TNNLS.2022.3182893
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In the wake of the explosive growth in smartphones and cyber-physical systems, there has been an accelerating shift in how data are generated, away from centralized repositories and toward data generated on the device itself. In response, machine learning algorithms are being adapted to run locally on-board, potentially hardware-limited, devices to improve user privacy, reduce latency, and be more energy efficient. However, our understanding of how these device-oriented algorithms behave and should be trained is still fairly limited. To address this issue, a method is introduced to automatically synthesize reduced-order neural networks (with fewer neurons) that approximate the input-output mapping of a larger one. The reduced-order neural network's weights and biases are generated from a convex semidefinite program that minimizes the worst-case approximation error with respect to the larger network, and worst-case bounds for this approximation error are obtained. The approach can be applied to a wide variety of neural network architectures. What differentiates the proposed approach from existing methods for generating small neural networks, e.g., pruning, is the inclusion of the worst-case approximation error directly within the training cost function, which should add robustness to out-of-sample data points. Numerical examples highlight the potential of the proposed approach. The overriding goal of this article is to generalize recent results on the robustness analysis of neural networks to a robust synthesis problem for their weights and biases.
Pages: 1182-1191
Page count: 10
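Since the abstract centers on minimizing the worst-case approximation error between a large network and a reduced-order one, the short Python sketch below illustrates what that error measures for a pair of single-hidden-layer ReLU networks. It is a minimal sketch only: the network sizes and weights are hypothetical placeholders, and the sampling-based estimate merely approximates the quantity that the paper instead bounds and minimizes through a convex semidefinite program.

```python
# Illustrative sketch only: the paper certifies a worst-case approximation
# error via a semidefinite program over the reduced network's weights.
# Here we just *estimate* that error empirically by sampling, to show the
# quantity being bounded. All sizes and weights below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# "Large" single-hidden-layer network: R^2 -> R with 64 neurons.
W1, b1 = rng.standard_normal((64, 2)), rng.standard_normal(64)
W2, b2 = rng.standard_normal((1, 64)), rng.standard_normal(1)

def f_large(x):
    return W2 @ relu(W1 @ x + b1) + b2

# Reduced-order network with 8 neurons (same input/output dimensions).
# In the paper these weights are decision variables of the SDP; here
# they are fixed random placeholders.
V1, c1 = rng.standard_normal((8, 2)), rng.standard_normal(8)
V2, c2 = rng.standard_normal((1, 8)), rng.standard_normal(1)

def f_small(x):
    return V2 @ relu(V1 @ x + c1) + c2

# Sampled surrogate for the worst-case error over the input box [-1, 1]^2.
xs = rng.uniform(-1.0, 1.0, size=(10_000, 2))
errs = [np.linalg.norm(f_large(x) - f_small(x)) for x in xs]
print(f"Empirical worst-case approximation error over samples: {max(errs):.3f}")
```

In the paper's setting, the reduced weights (V1, c1, V2, c2 above) would be synthesized by the semidefinite program rather than fixed in advance, and the sampled maximum would be replaced by a certified upper bound valid for all inputs in the set, not only those sampled.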