A Category-Aware Curriculum Learning for Data-Free Knowledge Distillation

Citations: 0
Authors
Li, Xiufang [1 ]
Jiao, Licheng [1 ]
Sun, Qigong [2 ,3 ]
Liu, Fang [1 ]
Liu, Xu [1 ]
Li, Lingling [1 ]
Chen, Puhua [1 ]
Yang, Shuyuan [1 ]
Affiliations
[1] Xidian Univ, Int Res Ctr Intelligent Percept & Computat, Key Lab Intelligent Percept & Image Understanding, Minist Educ, Xian 710071, Peoples R China
[2] SenseTime Res, Shanghai 200032, Peoples R China
[3] Shanghai AI Lab, Shanghai 200032, Peoples R China
Keywords
Generators; Training; Knowledge engineering; Data models; Training data; Task analysis; Monitoring; Data generation; knowledge distillation; category-aware; curriculum learning; image classification
DOI
10.1109/TMM.2024.3395844
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Constructing effective proxy data is one of the core challenges in data-free knowledge distillation. Existing models ignore the influence that category entanglement in the generated data has on distillation. To alleviate this issue, and imitating the human learning process, this paper proposes a new category-aware curriculum learning mechanism for data-free knowledge distillation, called CCL-D. The main idea of this mechanism is to provide a new learning mode for data generation and network training, enabling the model to carry out knowledge distillation from easy to difficult through automated curriculum learning. Within this mechanism, a category-aware monitoring module is proposed to constrain the category attributes of the generated data. Based on this monitoring module, a curriculum learning process for data generation and network training is designed and applied. Initially, the generator is guided to produce data with clear category features. Such data are easy for the student network to learn from, allowing it to acquire clear and significant category features at the early training stage. Subsequently, the generator is guided to generate data with category entanglement. Training on these entangled data improves the student network's ability to recognize inter-class interference and enhances its robustness. The effectiveness of CCL-D is verified on six benchmark datasets (MNIST, CIFAR-10, CIFAR-100, SVHN, Caltech-101, Tiny-ImageNet).
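The mechanism described above lends itself to a compact sketch. Below is a minimal, illustrative PyTorch rendering of an easy-to-hard, category-aware curriculum for data-free distillation, assuming a frozen pretrained teacher, a noise-conditioned generator, and a student network; the entropy-based "clarity" term, the linear annealing schedule, and all function names (category_clarity_loss, curriculum_weight, train_step) are assumptions made for illustration, not the authors' CCL-D implementation.

```python
# Illustrative sketch only: an easy-to-hard, category-aware curriculum for
# data-free distillation. The loss names, the linear weight schedule, and the
# entropy-based "category clarity" term are assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def category_clarity_loss(teacher_logits):
    # Low entropy of the teacher's prediction = a clear, disentangled category.
    probs = F.softmax(teacher_logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    return entropy.mean()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # Standard temperature-scaled KL distillation on the generated batch.
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * T * T

def curriculum_weight(step, total_steps):
    # Linearly anneal the clarity constraint: strong early (easy samples with
    # clear categories), weak late (entangled, harder samples).
    return max(0.0, 1.0 - step / total_steps)

def train_step(generator, teacher, student, g_opt, s_opt,
               step, total_steps, batch=64, z_dim=100, device="cpu"):
    # --- Generator update: adversarial term plus curriculum constraint ---
    z = torch.randn(batch, z_dim, device=device)
    fake = generator(z)
    t_logits = teacher(fake)   # teacher is assumed frozen (eval mode)
    s_logits = student(fake)
    w = curriculum_weight(step, total_steps)
    # Maximize teacher-student disagreement; the clarity term (weighted by w)
    # keeps generated categories disentangled early in training.
    g_loss = -distillation_loss(s_logits, t_logits) \
             + w * category_clarity_loss(t_logits)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    # --- Student update: distill from the teacher on fresh samples ---
    with torch.no_grad():
        fake = generator(torch.randn(batch, z_dim, device=device))
        t_logits = teacher(fake)
    s_loss = distillation_loss(student(fake), t_logits)
    s_opt.zero_grad(); s_loss.backward(); s_opt.step()
```

Annealing the clarity weight from 1 to 0 mirrors the progression the abstract describes: early batches are pushed toward confidently single-class samples that are easy for the student, while later batches may become category-entangled, exposing the student to inter-class interference.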
Pages: 9603-9618
Page count: 16
Related Papers
(50 records in total)
  • [1] FedTAD: Topology-aware Data-free Knowledge Distillation for Subgraph Federated Learning
    Zhu, Yinlin
Li, Xunkai
    Wu, Zhengyu
    Wu, Di
    Hu, Miao
    Li, Rong-Hua
    PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024, : 5716 - 5724
  • [2] Variational Data-Free Knowledge Distillation for Continual Learning
    Li, Xiaorong
    Wang, Shipeng
    Sun, Jian
    Xu, Zongben
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (10) : 12618 - 12634
  • [3] Knowledge Distillation with Category-Aware Attention and Discriminant Logit Losses
    Jiang, Lei
    Zhou, Wengang
    Li, Houqiang
    2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2019, : 1792 - 1797
  • [4] Data-Free Knowledge Distillation for Heterogeneous Federated Learning
    Zhu, Zhuangdi
    Hong, Junyuan
    Zhou, Jiayu
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [5] Conditional generative data-free knowledge distillation
    Yu, Xinyi
    Yan, Ling
    Yang, Yang
    Zhou, Libo
    Ou, Linlin
    IMAGE AND VISION COMPUTING, 2023, 131
  • [6] Data-free Knowledge Distillation for Object Detection
    Chawla, Akshay
    Yin, Hongxu
    Molchanov, Pavlo
    Alvarez, Jose
    2021 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WACV 2021, 2021, : 3288 - 3297
  • [7] Dynamic data-free knowledge distillation by easy-to-hard learning strategy
    Li, Jingru
    Zhou, Sheng
    Li, Liangcheng
    Wang, Haishuai
    Bu, Jiajun
    Yu, Zhi
    INFORMATION SCIENCES, 2023, 642
  • [8] Data-Free Network Quantization With Adversarial Knowledge Distillation
    Choi, Yoojin
    Choi, Jihwan
    El-Khamy, Mostafa
    Lee, Jungwon
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2020), 2020, : 3047 - 3057
  • [9] Robustness and Diversity Seeking Data-Free Knowledge Distillation
    Han, Pengchao
    Park, Jihong
    Wang, Shiqiang
    Liu, Yejun
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 2740 - 2744
  • [10] Data-free knowledge distillation in neural networks for regression
    Kang, Myeonginn
    Kang, Seokho
    EXPERT SYSTEMS WITH APPLICATIONS, 2021, 175