Local to Global Learning: Gradually Adding Classes for Training Deep Neural Networks

Cited by: 7
Authors
Cheng, Hao [1 ]
Lian, Dongze [1 ]
Deng, Bowen [1 ]
Gao, Shenghua [1 ]
Tan, Tao [2 ]
Geng, Yanlin [3 ]
Affiliations
[1] ShanghaiTech Univ, Sch Informat Sci & Technol, Shanghai 201210, Peoples R China
[2] Eindhoven Univ Technol, Ctr Anal, Dept Math & Comp Sci, Eindhoven, Netherlands
[3] Xidian Univ, State Key Lab ISN, Xian 710071, Shaanxi, Peoples R China
Source
2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019
DOI
10.1109/CVPR.2019.00488
CLC Number
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
We propose a new learning paradigm, Local to Global Learning (LGL), for Deep Neural Networks (DNNs) to improve performance on classification problems. The core of LGL is to learn a DNN model gradually, from fewer categories (local) to more categories (global), within the entire training set. LGL is most closely related to the Self-Paced Learning (SPL) algorithm, but its formulation differs from SPL's: SPL orders training data from simple to complex, while LGL proceeds from local to global. In this paper, we incorporate the idea of LGL into the learning objective of DNNs and explain from an information-theoretic perspective why LGL works better. Experiments on toy data, CIFAR-10, CIFAR-100, and ImageNet show that LGL outperforms the baseline and SPL-based algorithms.
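The abstract describes the LGL schedule only at a high level; the sketch below illustrates the core idea of gradually growing the set of training classes. It assumes a PyTorch-style loop on synthetic data; the two-classes-per-stage schedule, model, and hyperparameters are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of Local to Global Learning (LGL): train on a growing
# subset of classes. Model, schedule, and data are illustrative
# assumptions, not the authors' exact CVPR 2019 setup.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

NUM_CLASSES = 10
X = torch.randn(2000, 32)                   # stand-in features
y = torch.randint(0, NUM_CLASSES, (2000,))  # stand-in labels

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, NUM_CLASSES))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Local-to-global schedule: start with 2 classes, add 2 more per stage,
# continuing training from the previous stage's weights.
for num_active in range(2, NUM_CLASSES + 1, 2):
    mask = y < num_active  # keep only samples from the "local" class subset
    loader = DataLoader(TensorDataset(X[mask], y[mask]),
                        batch_size=64, shuffle=True)
    for epoch in range(5):  # a few epochs per stage
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
    print(f"finished stage with {num_active} active classes")
```

A fixed class ordering is used here for brevity; any ordering (or a data-driven one) yields the same local-to-global structure, with each stage warm-starting the next.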
Pages: 4743 - 4751
Number of pages: 9
Related Papers
50 records in total
  • [1] Local Critic Training of Deep Neural Networks
    Lee, Hojung
    Lee, Jong-Seok
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [2] Local Critic Training for Model-Parallel Learning of Deep Neural Networks
    Lee, Hojung
    Hsieh, Cho-Jui
    Lee, Jong-Seok
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (09) : 4424 - 4436
  • [3] Local and Global Sparsity for Deep Learning Networks
    Zhang, Long
    Zhao, Jieyu
    Shi, Xiangfu
    Ye, Xulun
    IMAGE AND GRAPHICS (ICIG 2017), PT II, 2017, 10667 : 74 - 85
  • [4] Adding learning to cellular genetic algorithms for training recurrent neural networks
    Ku, KWC
    Mak, MW
    Siu, WC
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 1999, 10 (02) : 239 - 252
  • [5] Training Spiking Neural Networks with Local Tandem Learning
    Yang, Qu
    Wu, Jibin
    Zhang, Malu
    Chua, Yansong
    Wang, Xinchao
    Li, Haizhou
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [6] Training Deep Neural Networks with Constrained Learning Parameters
    Date, Prasanna
    Carothers, Christopher D.
    Mitchell, John E.
    Hendler, James A.
    Magdon-Ismail, Malik
    2020 INTERNATIONAL CONFERENCE ON REBOOTING COMPUTING (ICRC 2020), 2020, : 107 - 115
  • [7] Supervised Local Training With Backward Links for Deep Neural Networks
    Guo, W.
    Fouda, M. E.
    Eltawil, A. M.
    Salama, K. N.
    IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE, 2024, 5 (02) : 854 - 867
  • [8] Semantics for Global and Local Interpretation of Deep Convolutional Neural Networks
    Gu, Jindong
    Zhao, Rui
    Tresp, Volker
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [9] Learning to Optimize with Unsupervised Learning: Training Deep Neural Networks for URLLC
    Sun, Chengjian
    Yang, Chenyang
    2019 IEEE 30TH ANNUAL INTERNATIONAL SYMPOSIUM ON PERSONAL, INDOOR AND MOBILE RADIO COMMUNICATIONS (PIMRC), 2019, : 451 - 457
  • [10] An active learning framework for adversarial training of deep neural networks
    Ghosh, Susmita
    Chatterjee, Abhiroop
    Fiondella, Lance
    NEURAL COMPUTING AND APPLICATIONS, 2025, 37 (9) : 6849 - 6876