Learning to Generate Diverse Data From a Temporal Perspective for Data-Free Quantization

Times Cited: 0
Authors
Luo, Hui [1 ,2 ,3 ]
Zhang, Shuhai [4 ,5 ]
Zhuang, Zhuangwei [4 ,5 ]
Mai, Jiajie [4 ]
Tan, Mingkui [4 ,5 ]
Zhang, Jianlin [1 ,2 ,3 ]
Affiliations
[1] Chinese Acad Sci, Natl Key Lab Opt Field Manipulat Sci & Technol, Chengdu 610209, Peoples R China
[2] Chinese Acad Sci, Inst Opt & Elect, Key Lab Opt Engn, Chengdu 610209, Peoples R China
[3] Univ Chinese Acad Sci, Sch Elect Elect & Commun Engn, Beijing 100049, Peoples R China
[4] South China Univ Technol, Sch Software Engn, Guangzhou 510641, Peoples R China
[5] South China Univ Technol, Minist Educ, Key Lab Big Data & Intelligent Robot, Guangzhou 510641, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Quantization (signal); Data models; Synthetic data; Generators; Computational modeling; Training; Analytical models; Model quantization; data-free quantization; generation process; synthetic data; linear interpolation; BINARY NEURAL-NETWORKS; ACCURATE;
DOI
10.1109/TCSVT.2024.3399311
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Code
0808 ; 0809 ;
Abstract
Model quantization is a prevalent method for compressing and accelerating neural networks. Most existing quantization methods require access to real data to improve the performance of quantized models, which is often infeasible in scenarios with privacy or security concerns. Recently, data-free quantization has been widely studied as a way to sidestep the need for real data by generating synthetic data, and generator-based data-free quantization is an important branch of this line of work. Previous generator-based methods focus on improving the performance of quantized models by optimizing the spatial distribution of synthetic data, while ignoring how the synthetic data change over the course of generation, i.e., the temporal perspective. In this work, we reveal that generator-based data-free quantization methods usually suffer from homogeneous synthetic data in the mid-to-late stages of the generation process because generator updates stagnate, which hinders further improvement of the quantized models. To solve this issue, we propose introducing the discrepancy between the full-precision and quantized models as new supervision information for updating the generator. Specifically, we propose a simple yet effective adversarial Gaussian-margin loss, which keeps the generator updating by supplying additional supervision when the discrepancy between the full-precision and quantized models is small, thereby generating heterogeneous synthetic data. Moreover, to further mitigate the homogeneity of the synthetic data, we augment them with linear interpolation. Our method can also boost the performance of other generator-based data-free quantization methods. Extensive experimental results show that our method achieves superior performance in various data-free quantization settings, especially ultra-low-bit settings such as 3-bit.
Pages: 9484-9498
Number of Pages: 15
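
The abstract sketches two concrete mechanisms: an adversarial Gaussian-margin loss that keeps supervising the generator when the full-precision/quantized discrepancy shrinks, and linear-interpolation augmentation of the synthetic data. Below is a minimal PyTorch sketch of how such components might be realized; the function names, the KL-based discrepancy measure, the exact Gaussian form, and all hyper-parameters are illustrative assumptions inferred from the abstract, not the paper's actual formulation.

# Illustrative sketch only; the loss form and names are assumptions,
# not the paper's implementation.
import torch
import torch.nn.functional as F


def adversarial_gaussian_margin_loss(fp_logits, q_logits, sigma=1.0):
    """Generator loss that stays informative when the full-precision (FP)
    and quantized models already agree on the synthetic batch.

    A plain adversarial objective maximizes the FP/quantized discrepancy
    and loses signal once the two models agree; a Gaussian-shaped term
    that is largest when the discrepancy is small gives the generator
    something to minimize precisely in that regime.
    """
    # Discrepancy between the two models' predictions (assumed KL here).
    disagreement = F.kl_div(
        F.log_softmax(q_logits, dim=1),
        F.softmax(fp_logits, dim=1),
        reduction="batchmean",
    )
    # exp(-d^2 / (2*sigma^2)) is ~1 when the disagreement d is ~0 and
    # decays as d grows, so minimizing it drives the generator toward
    # samples on which the two models still disagree, even late in
    # training.
    return torch.exp(-(disagreement ** 2) / (2.0 * sigma ** 2))


def interpolate_synthetic(x, alpha=0.75):
    """Mixup-style linear interpolation between synthetic samples in a
    batch; one plausible reading of the abstract's augmentation step."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0), device=x.device)
    return lam * x + (1.0 - lam) * x[perm]


# Hypothetical usage inside one generator update step:
#   x = generator(torch.randn(batch_size, latent_dim))
#   x = interpolate_synthetic(x)
#   loss_g = adversarial_gaussian_margin_loss(fp_model(x), q_model(x))
#   loss_g.backward()

The Beta-sampled mixing coefficient follows common mixup practice; the paper may use a different interpolation scheme.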