Learning to Generate Diverse Data From a Temporal Perspective for Data-Free Quantization

Times Cited: 0
Authors
Luo, Hui [1,2,3]
Zhang, Shuhai [4,5]
Zhuang, Zhuangwei [4,5]
Mai, Jiajie [4]
Tan, Mingkui [4,5]
Zhang, Jianlin [1,2,3]
Affiliations
[1] Chinese Acad Sci, Natl Key Lab Opt Field Manipulat Sci & Technol, Chengdu 610209, Peoples R China
[2] Chinese Acad Sci, Inst Opt & Elect, Key Lab Opt Engn, Chengdu 610209, Peoples R China
[3] Univ Chinese Acad Sci, Sch Elect Elect & Commun Engn, Beijing 100049, Peoples R China
[4] South China Univ Technol, Sch Software Engn, Guangzhou 510641, Peoples R China
[5] South China Univ Technol, Minist Educ, Key Lab Big Data & Intelligent Robot, Guangzhou 510641, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Quantization (signal); Data models; Synthetic data; Generators; Computational modeling; Training; Analytical models; Model quantization; data-free quantization; generation process; synthetic data; linear interpolation; BINARY NEURAL-NETWORKS; ACCURATE;
DOI
10.1109/TCSVT.2024.3399311
CLC Classification Numbers
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
Model quantization is a prevalent method for compressing and accelerating neural networks. Most existing quantization methods require access to real data to preserve the performance of quantized models, which is often infeasible in scenarios with privacy or security concerns. Data-free quantization addresses the lack of real data by generating synthetic data instead, and generator-based data-free quantization is an important branch of this line of work. Previous generator-based methods improve quantized models by optimizing the spatial distribution of synthetic data, while ignoring how the synthetic data change over time during generation. In this work, we reveal that generator-based data-free quantization methods typically produce homogeneous synthetic data in the mid-to-late stages of the generation process because the generator updates stagnate, which hinders further improvement of the quantized model. To solve this issue, we propose introducing the discrepancy between the full-precision and quantized models as new supervision for updating the generator. Specifically, we propose a simple yet effective adversarial Gaussian-margin loss that keeps the generator updating by supplying additional supervision precisely when the discrepancy between the full-precision and quantized models is small, thereby producing heterogeneous synthetic data. Moreover, to further mitigate the homogeneity of the synthetic data, we augment them with linear interpolation. Our method can also boost the performance of other generator-based data-free quantization methods. Extensive experiments show that our method achieves superior performance across various data-free quantization settings, especially ultra-low-bit settings such as 3-bit.
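The abstract names two mechanisms without giving their formulas. Below is a minimal PyTorch sketch of how they could look; the KL-based discrepancy measure, the Gaussian shape and its sigma, and the Beta-sampled mixing coefficient are illustrative assumptions, not the authors' published formulation.

    # Hedged sketch only: the abstract does not give exact formulas, so the
    # forms below (KL discrepancy, Gaussian margin, mixup-style interpolation)
    # are assumptions chosen for illustration.
    import torch
    import torch.nn.functional as F

    def model_discrepancy(fp_logits, q_logits):
        # Discrepancy between full-precision and quantized predictions,
        # measured here as KL divergence (an assumed choice).
        return F.kl_div(F.log_softmax(q_logits, dim=1),
                        F.softmax(fp_logits, dim=1),
                        reduction="batchmean")

    def adversarial_gaussian_margin_loss(fp_logits, q_logits, sigma=0.5):
        # The generator is trained to maximize the discrepancy d; the
        # Gaussian term is largest when d is small, so extra supervision
        # reaches the generator exactly when the two models start to agree.
        d = model_discrepancy(fp_logits, q_logits)
        gaussian_margin = torch.exp(-d.pow(2) / (2.0 * sigma ** 2))
        return -d + gaussian_margin  # minimized when updating the generator

    def linear_interpolation_augment(x, alpha=1.0):
        # Mixup-style linear interpolation between random pairs of synthetic
        # samples, diversifying a homogeneous batch.
        lam = torch.distributions.Beta(alpha, alpha).sample().to(x.device)
        perm = torch.randperm(x.size(0), device=x.device)
        return lam * x + (1.0 - lam) * x[perm]

In a typical loop under these assumptions, synthetic batches from the generator would be passed through both models, the generator updated against adversarial_gaussian_margin_loss, and the interpolated batches used to fine-tune the quantized model.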
Pages: 9484-9498
Number of Pages: 15