Enhance the Hidden Structure of Deep Neural Networks by Double Laplacian Regularization

Citations: 0
Authors:
Fan, Yetian [1 ]
Yang, Wenyu [2 ]
Song, Bo [3 ]
Yan, Peilei [4 ]
Kang, Xiaoning [5 ,6 ]
Affiliations:
[1] Liaoning Univ, Sch Math & Stat, Shenyang 110036, Peoples R China
[2] Huazhong Agr Univ, Coll Sci, Wuhan 430070, Peoples R China
[3] Drexel Univ, Coll Comp & Informat, Philadelphia, PA 19104 USA
[4] Dalian Univ Technol, Fac Elect Informat & Elect Engn, Dalian 116024, Peoples R China
[5] Dongbei Univ Finance & Econ, Inst Supply Chain Analyt, Dalian 116025, Peoples R China
[6] Dongbei Univ Finance & Econ, Int Business Coll, Dalian 116025, Peoples R China
Keywords:
Graph regularization; deep neural networks; double Laplacian regularization; hidden structure; extreme learning machine
DOI:
10.1109/TCSII.2023.3260248
CLC Classification:
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Codes:
0808; 0809
Abstract:
Laplacian regularization has been widely used in neural networks for its ability to improve generalization performance: it encourages adjacent samples with the same label to share similar features. However, most existing methods consider only the global structure of data with the same labels and neglect samples in boundary areas that carry different labels. To address this limitation and improve performance, this brief proposes a novel regularization method that enhances the hidden structure of deep neural networks. The proposed method imposes a double Laplacian regularization on the objective function and leverages the full data information to capture the hidden structure of the data in the manifold space. The double Laplacian regularization applies both attraction and repulsion effects on the hidden layer: it encourages the hidden features of instances with the same label to be closer, and forces those of different categories to be further apart. Extensive experiments demonstrate that the proposed method leads to significant accuracy improvements on different types of deep neural networks.
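The attraction and repulsion effects described in the abstract can be sketched concretely. The snippet below is a minimal illustration, assuming PyTorch; the function name, the uniform pair weights built from batch-level label equality, and the repulsion coefficient `beta` are assumptions made for this example, not the paper's exact construction. It only conveys the general idea of penalizing same-label distances while rewarding different-label distances in the hidden space.

```python
# A minimal sketch of a "double Laplacian" style penalty, assuming PyTorch.
# Illustrative only: the label-equality adjacency, the uniform pair weights,
# and `beta` are assumptions for this example, not the paper's formulation.
import torch

def double_laplacian_penalty(h: torch.Tensor, y: torch.Tensor,
                             beta: float = 0.1) -> torch.Tensor:
    """h: (n, d) hidden-layer features; y: (n,) integer class labels."""
    n = h.shape[0]
    # Pairwise squared Euclidean distances between hidden features.
    d2 = torch.cdist(h, h).pow(2)                       # (n, n)
    # Masks: 1 where a pair shares a label / has different labels.
    same = (y.unsqueeze(0) == y.unsqueeze(1)).float()   # (n, n)
    diff = 1.0 - same
    # Attraction: pull same-label features together (standard Laplacian term).
    attract = (same * d2).sum() / (n * n)
    # Repulsion: push different-label features apart; a small `beta` keeps
    # this (unbounded) term from dominating the task loss.
    repel = (diff * d2).sum() / (n * n)
    return attract - beta * repel
```

In training, such a penalty would simply be added to the task loss, e.g. `loss = criterion(logits, y) + lam * double_laplacian_penalty(h, y)`, where `h` is the output of a chosen hidden layer and `lam` is a tuning hyperparameter.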
Pages: 3114-3118 (5 pages)