On robustness of neural ODEs image classifiers

Cited: 10
Authors
Cui, Wenjun [1 ]
Zhang, Honglei [1 ]
Chu, Haoyu [1 ]
Hu, Pipi [2 ]
Li, Yidong [1 ]
Affiliations
[1] Beijing Jiaotong Univ, Sch Comp & Informat Technol, Beijing 100044, Peoples R China
[2] Microsoft Res AI4Sci, Beijing, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Neural ODEs; Activation functions; Dynamical behavior; Robustness;
DOI
10.1016/j.ins.2023.03.049
CLC Classification
TP [Automation & Computer Technology]
Subject Classification Code
0812
Abstract
Neural Ordinary Differential Equations (Neural ODEs), a family of novel deep models, elegantly link conventional neural networks and dynamical systems, bridging the gap between theory and practice. However, little attention has been paid to their activation functions, and ReLU is typically used by default. Moreover, their dynamical behavior becomes increasingly unclear and complicated as training progresses. Fortunately, existing studies have shown that activation functions are essential to how Neural ODEs govern their intrinsic dynamics. Motivated by a family of weight functions used to enhance the stability of dynamical systems, we introduce a new activation function, named half-Swish, tailored to Neural ODEs. In addition, we explore the effects of evolution time and batch size on Neural ODEs. Experiments show that our model consistently outperforms Neural ODEs with standard activation functions in robustness against both stochastic noise and adversarial examples across the Fashion-MNIST, CIFAR-10, and CIFAR-100 datasets, which strongly validates the applicability of half-Swish and suggests that it plays a positive role in regularizing the dynamics to enhance stability. Meanwhile, our work provides a prospective theoretical framework for choosing appropriate activation functions to match neural differential equations.
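The abstract's central object, an ODE block whose learned dynamics are shaped by the choice of activation function, can be sketched minimally as follows. This is an illustrative sketch only: the exact form of the paper's half-Swish is defined in the article itself, so the standard Swish, x·sigmoid(x), is used here as a stand-in, and a simple forward-Euler integrator replaces the adaptive solvers used in practice (e.g. via torchdiffeq). All names and parameters are hypothetical.

```python
import numpy as np

def swish(x):
    # Standard Swish: x * sigmoid(x). The paper's half-Swish modifies this
    # form to regularize the ODE dynamics; its exact definition is given
    # in the article (DOI: 10.1016/j.ins.2023.03.049).
    return x / (1.0 + np.exp(-x))

class ODEBlock:
    """Minimal Neural ODE block: dz/dt = f(z) = act(W z + b),
    integrated with forward Euler over evolution time [0, T]."""

    def __init__(self, dim, T=1.0, steps=10, act=swish, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((dim, dim)) * 0.1  # illustrative weights
        self.b = np.zeros(dim)
        self.T, self.steps, self.act = T, steps, act

    def forward(self, z):
        # Longer T (the "evolution time" studied in the paper) means the
        # state flows further along the learned vector field.
        dt = self.T / self.steps
        for _ in range(self.steps):
            z = z + dt * self.act(z @ self.W.T + self.b)
        return z

block = ODEBlock(dim=4)
out = block.forward(np.ones(4))
```

Swapping `act` here is the knob the paper turns: the activation determines the vector field's smoothness and growth, which in turn governs how perturbations of the input propagate through the flow, i.e. robustness.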
Pages: 576-593
Number of pages: 18
    PROCEEDINGS OF THE 32ND ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS, ISSTA 2023, 2023, : 1527 - 1531