Enhance the Hidden Structure of Deep Neural Networks by Double Laplacian Regularization

Cited by: 0
Authors
Fan, Yetian [1 ]
Yang, Wenyu [2 ]
Song, Bo [3 ]
Yan, Peilei [4 ]
Kang, Xiaoning [5 ,6 ]
Affiliations
[1] Liaoning Univ, Sch Math & Stat, Shenyang 110036, Peoples R China
[2] Huazhong Agr Univ, Coll Sci, Wuhan 430070, Peoples R China
[3] Drexel Univ, Coll Comp & Informat, Philadelphia, PA 19104 USA
[4] Dalian Univ Technol, Fac Elect Informat & Elect Engn, Dalian 116024, Peoples R China
[5] Dongbei Univ Finance & Econ, Inst Supply Chain Analyt, Dalian 116025, Peoples R China
[6] Dongbei Univ Finance & Econ, Int Business Coll, Dalian 116025, Peoples R China
Keywords
Graph regularization; deep neural networks; double Laplacian regularization; hidden structure; extreme learning machine
DOI
10.1109/TCSII.2023.3260248
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronics & Communications Technology];
Discipline Codes
0808; 0809;
Abstract
Laplacian regularization has been widely used in neural networks for its ability to improve generalization performance; it encourages adjacent samples with the same label to share similar features. However, most existing methods consider only the global structure of data with the same labels and neglect samples in boundary areas with different labels. To address this limitation, this brief proposes a novel regularization method that enhances the hidden structure of deep neural networks. The proposed method imposes a double Laplacian regularization on the objective function and leverages the full data information to capture its hidden structure in the manifold space. The double Laplacian regularization applies both attraction and repulsion effects to the hidden layer: it encourages the hidden features of instances with the same label to be closer together and forces those of different categories to be farther apart. Extensive experiments demonstrate that the proposed method yields significant accuracy improvements on different types of deep neural networks.
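To make the attraction/repulsion idea concrete, below is a minimal PyTorch sketch of a double-Laplacian-style penalty on a batch of hidden features. It assumes the penalty takes the common trace form tr(H^T L_w H) - tr(H^T L_b H), with L_w built from same-label pairs and L_b from different-label pairs; the name double_laplacian_reg and the weights lam_attract/lam_repel are illustrative, not taken from the paper.

    import torch

    def laplacian(W: torch.Tensor) -> torch.Tensor:
        # Unnormalized graph Laplacian: L = D - W, with D the degree matrix.
        return torch.diag(W.sum(dim=1)) - W

    def double_laplacian_reg(H: torch.Tensor, y: torch.Tensor,
                             lam_attract: float = 1e-3,
                             lam_repel: float = 1e-3) -> torch.Tensor:
        # H: (n, d) hidden features for a batch; y: (n,) integer labels.
        # Attraction: tr(H^T L_w H) = 1/2 * sum_ij W_w[i,j] * ||h_i - h_j||^2
        # shrinks distances between same-label pairs; the subtracted
        # repulsion term rewards larger distances for different-label pairs.
        same = (y.unsqueeze(0) == y.unsqueeze(1)).float()
        W_w = same - torch.eye(len(y), device=y.device)  # same-label graph, no self-loops
        W_b = 1.0 - same                                 # different-label graph
        attract = torch.trace(H.T @ laplacian(W_w) @ H)
        repel = torch.trace(H.T @ laplacian(W_b) @ H)
        n = H.shape[0]
        return (lam_attract * attract - lam_repel * repel) / (n * n)

In training, such a penalty would simply be added to the task loss, e.g. loss = F.cross_entropy(logits, y) + double_laplacian_reg(hidden, y). The published method's exact graph construction (for instance, any nearest-neighbor restriction or edge weighting) may differ from this fully connected label-based sketch.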
Pages: 3114-3118
Number of pages: 5