A Representer Theorem for Deep Neural Networks

Cited: 0
Author
Unser, Michael [1]
Affiliation
[1] Ecole Polytech Fed Lausanne, Biomed Imaging Grp, CH-1015 Lausanne, Switzerland
Funding
Swiss National Science Foundation
Keywords
splines; regularization; sparsity; learning; deep neural networks; activation functions; LINEAR INVERSE PROBLEMS; SPLINES; KERNELS
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Pages: 30
Abstract
We propose to optimize the activation functions of a deep neural network by adding a corresponding functional regularization to the cost function. We justify the use of a second-order total-variation criterion. This allows us to derive a general representer theorem for deep neural networks that makes a direct connection with splines and sparsity. Specifically, we show that the optimal network configuration can be achieved with activation functions that are nonuniform linear splines with adaptive knots. The bottom line is that the action of each neuron is encoded by a spline whose parameters (including the number of knots) are optimized during the training procedure. The scheme results in a computational structure that is compatible with existing deep-ReLU, parametric ReLU, APL (adaptive piecewise-linear), and MaxOut architectures. It also suggests novel optimization challenges and makes an explicit link with ℓ1 minimization and sparsity-promoting techniques.
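To make the result concrete, here is a minimal sketch of the regularized training problem and of the spline form the theorem yields; the notation (training pairs (x_m, y_m), network map f, error functional E, per-neuron activations sigma_{n,l}, regularization weight lambda) is assumed for illustration rather than quoted from the paper:

\min_{\{\mathbf{W}_l\},\,\{\sigma_{n,l}\}} \; \sum_{m=1}^{M} E\bigl(y_m, f(\mathbf{x}_m)\bigr) + \lambda \sum_{l,n} \mathrm{TV}^{(2)}(\sigma_{n,l}), \qquad \mathrm{TV}^{(2)}(\sigma) = \bigl\| \mathrm{D}^2 \sigma \bigr\|_{\mathcal{M}},

which admits an optimal solution in which every activation is a nonuniform linear spline with adaptive knots \tau_k,

\sigma_{n,l}(x) = b_1 + b_2\,x + \sum_{k=1}^{K} a_k\,(x - \tau_k)_+, \qquad \mathrm{TV}^{(2)}(\sigma_{n,l}) = \sum_{k=1}^{K} \lvert a_k \rvert.

Since \mathrm{D}^2 (x - \tau_k)_+ = \delta(\cdot - \tau_k), the regularizer reduces to an ℓ1 penalty on the jump amplitudes a_k, which is exactly the advertised link with sparsity-promoting techniques.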
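The compatibility with ReLU-type architectures can likewise be sketched in code. The following is a hypothetical PyTorch sketch (class and parameter names are illustrative, not taken from the paper) of a trainable linear-spline activation written as the ReLU expansion above, with the TV^(2) term computed as the ℓ1 norm of the amplitudes:

import torch
import torch.nn as nn

class SplineActivation(nn.Module):
    """sigma(x) = b1 + b2*x + sum_k a_k * relu(x - tau_k)."""

    def __init__(self, num_knots: int = 8):
        super().__init__()
        self.b1 = nn.Parameter(torch.zeros(1))   # constant term
        self.b2 = nn.Parameter(torch.ones(1))    # linear term
        self.a = nn.Parameter(torch.zeros(num_knots))                  # jump amplitudes a_k
        self.tau = nn.Parameter(torch.linspace(-1.0, 1.0, num_knots))  # knot locations tau_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Evaluate (x - tau_k)_+ for every knot, then weight by a_k.
        bumps = torch.relu(x.unsqueeze(-1) - self.tau)  # shape (..., num_knots)
        return self.b1 + self.b2 * x + bumps @ self.a

    def tv2(self) -> torch.Tensor:
        # TV^(2) of a linear spline is the l1 norm of its jump amplitudes.
        return self.a.abs().sum()

# Usage: add lambda * tv2() to the training loss so that superfluous
# knots are driven to zero, the sparsity effect described in the abstract.
act = SplineActivation(num_knots=8)
out = act(torch.randn(4, 16))
loss = out.pow(2).mean() + 1e-3 * act.tv2()
loss.backward()

With a single knot at tau = 0, amplitude a = 1, and b1 = b2 = 0, the module reduces to a plain ReLU, which illustrates why the scheme subsumes deep-ReLU, parametric-ReLU, and APL-style activations.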