Learning to Generate Diverse Data From a Temporal Perspective for Data-Free Quantization

Cited by: 0
Authors
Luo, Hui [1 ,2 ,3 ]
Zhang, Shuhai [4 ,5 ]
Zhuang, Zhuangwei [4 ,5 ]
Mai, Jiajie [4 ]
Tan, Mingkui [4 ,5 ]
Zhang, Jianlin [1 ,2 ,3 ]
Affiliations
[1] Chinese Acad Sci, Natl Key Lab Opt Field Manipulat Sci & Technol, Chengdu 610209, Peoples R China
[2] Chinese Acad Sci, Inst Opt & Elect, Key Lab Opt Engn, Chengdu 610209, Peoples R China
[3] Univ Chinese Acad Sci, Sch Elect Elect & Commun Engn, Beijing 100049, Peoples R China
[4] South China Univ Technol, Sch Software Engn, Guangzhou 510641, Peoples R China
[5] South China Univ Technol, Minist Educ, Key Lab Big Data & Intelligent Robot, Guangzhou 510641, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Quantization (signal); Data models; Synthetic data; Generators; Computational modeling; Training; Analytical models; Model quantization; data-free quantization; generation process; synthetic data; linear interpolation; BINARY NEURAL-NETWORKS; ACCURATE;
DOI
10.1109/TCSVT.2024.3399311
CLC Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
Model quantization is a prevalent method to compress and accelerate neural networks. Most existing quantization methods require access to real data to improve the performance of quantized models, which is often infeasible in scenarios with privacy or security concerns. Recently, data-free quantization has been widely studied as a way to overcome the lack of real data by generating synthetic data, and generator-based data-free quantization is an important branch of this line of work. Previous generator-based methods focus on improving the performance of quantized models by optimizing the spatial distribution of synthetic data, while ignoring how synthetic data change over the course of generation, i.e., the temporal perspective. In this work, we reveal that generator-based data-free quantization methods usually suffer from homogeneous synthetic data in the mid-to-late stages of the generation process because the generator updates stagnate, which hinders further improvement of the quantized models. To address this issue, we propose introducing the discrepancy between the full-precision and quantized models as new supervision information for updating the generator. Specifically, we propose a simple yet effective adversarial Gaussian-margin loss, which promotes continuous updating of the generator by adding more supervision when the discrepancy between the full-precision and quantized models is small, thereby generating heterogeneous synthetic data. Moreover, to further mitigate the homogeneity of the synthetic data, we augment them with linear interpolation. Our method can also boost the performance of other generator-based data-free quantization methods. Extensive experimental results show that our method achieves superior performance across various data-free quantization settings, especially in ultra-low-bit settings such as 3-bit.
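The abstract sketches two mechanisms: an adversarial Gaussian-margin loss that keeps the generator updating when the full-precision/quantized discrepancy is small, and linear-interpolation augmentation of the synthetic data. The paper's exact formulas are not given here, so the PyTorch snippet below is only a minimal illustrative sketch under stated assumptions: the discrepancy is measured with a symmetric KL divergence, the margin term is a Gaussian of that discrepancy, and the interpolation follows a mixup-style Beta(alpha, alpha) mixing. All function names (gaussian_margin_weight, generator_loss, interpolate_batch) are hypothetical, not the authors' API.

```python
import torch
import torch.nn.functional as F


def gaussian_margin_weight(disc, sigma=1.0):
    """Hypothetical Gaussian-shaped weight: close to 1 when the
    full-precision/quantized discrepancy is small, decaying toward 0
    as it grows, so extra supervision targets low-discrepancy samples."""
    return torch.exp(-disc.pow(2) / (2.0 * sigma ** 2))


def generator_loss(fp_logits, q_logits, sigma=1.0):
    """Adversarial generator objective (illustrative only).

    The generator maximizes the per-sample discrepancy between the
    full-precision and quantized models; the Gaussian-margin term adds
    gradient signal precisely where that discrepancy is small, so
    generator updates do not stagnate in the mid-to-late stages.
    """
    log_p = F.log_softmax(fp_logits, dim=1)
    log_q = F.log_softmax(q_logits, dim=1)
    # Symmetric KL divergence per sample (an assumption; the paper's
    # actual discrepancy measure may differ).
    disc = 0.5 * (
        F.kl_div(log_q, log_p, log_target=True, reduction="none").sum(dim=1)
        + F.kl_div(log_p, log_q, log_target=True, reduction="none").sum(dim=1)
    )
    # Detach the weight so it acts as a pure per-sample scaling factor.
    weight = 1.0 + gaussian_margin_weight(disc.detach(), sigma)
    return -(weight * disc).mean()  # negated: optimizers minimize


def interpolate_batch(x, alpha=0.5):
    """Mixup-style linear interpolation between random pairs of
    synthetic images (NCHW) to further reduce batch homogeneity."""
    lam = torch.distributions.Beta(alpha, alpha).sample((x.size(0),))
    lam = lam.to(x.device).view(-1, 1, 1, 1)
    perm = torch.randperm(x.size(0), device=x.device)
    return lam * x + (1.0 - lam) * x[perm]
```

In a training loop, generator_loss would drive the generator step while interpolate_batch diversifies the synthetic batches fed to the quantized model; detaching the Gaussian weight keeps the gradient direction consistently pushing the discrepancy upward.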
Pages: 9484-9498
Number of pages: 15