Mixed-Precision Neural Network Quantization via Learned Layer-Wise Importance

Cited by: 29
Authors
Tang, Chen [1]
Ouyang, Kai [1]
Wang, Zhi [1,4]
Zhu, Yifei [2]
Ji, Wen [3,4]
Wang, Yaowei [4]
Zhu, Wenwu [1]
Affiliations
[1] Tsinghua Univ, Beijing, Peoples R China
[2] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[3] Chinese Acad Sci, Inst Comp Technol, Beijing, Peoples R China
[4] Peng Cheng Lab, Shenzhen, Peoples R China
Source
Funding
Beijing Natural Science Foundation;
Keywords
Mixed-precision quantization; Model compression;
DOI
10.1007/978-3-031-20083-0_16
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The exponentially large discrete search space in mixed-precision quantization (MPQ) makes it hard to determine the optimal bit-width for each layer. Previous works usually resort to iterative search methods on the training set, which consume hundreds or even thousands of GPU-hours. In this study, we reveal that some unique learnable parameters in quantization, namely the scale factors in the quantizer, can serve as importance indicators of a layer, reflecting the contribution of that layer to the final accuracy at certain bit-widths. These importance indicators naturally perceive the numerical transformation during quantization-aware training, and can therefore provide precise quantization sensitivity metrics for the layers. However, a deep network always contains hundreds of such indicators, and training them one by one would lead to an excessive time cost. To overcome this issue, we propose a joint training scheme that obtains all indicators at once, considerably speeding up indicator training by parallelizing the originally sequential training processes. With these learned importance indicators, we formulate the MPQ search problem as a one-time integer linear programming (ILP) problem. This avoids iterative search and significantly reduces search time without limiting the bit-width search space. For example, MPQ search on ResNet18 with our indicators takes only 0.06 s, which improves time efficiency by orders of magnitude compared to iterative search methods. Extensive experiments also show that our approach achieves SOTA accuracy on ImageNet for a wide range of models under various constraints (e.g., BitOps, compression rate).
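The one-shot ILP step described in the abstract can be illustrated with a small, self-contained sketch. This is not the authors' released implementation: the per-layer importance scores, BitOps costs, budget values, and the PuLP-based formulation below are hypothetical placeholders that only show the general shape of such a search, where each layer is assigned exactly one bit-width and the learned importance values act as objective coefficients under a BitOps budget.

```python
# Minimal illustrative sketch (not the authors' code): one-shot bit-width assignment
# as an integer linear program. importance[l][b] stands in for a learned importance
# score of layer l at bit-width b; bitops[l][b] and budget are toy cost numbers.
import pulp

bit_widths = [2, 4, 8]
importance = {0: {2: 0.10, 4: 0.55, 8: 0.60},
              1: {2: 0.30, 4: 0.40, 8: 0.42},
              2: {2: 0.05, 4: 0.50, 8: 0.90}}
bitops = {0: {2: 1.0, 4: 4.0, 8: 16.0},
          1: {2: 0.5, 4: 2.0, 8: 8.0},
          2: {2: 2.0, 4: 8.0, 8: 32.0}}
budget = 16.0  # total BitOps budget

layers = list(importance)
prob = pulp.LpProblem("mpq_bit_allocation", pulp.LpMaximize)

# x[l, b] = 1 iff layer l is assigned bit-width b
x = {(l, b): pulp.LpVariable(f"x_{l}_{b}", cat="Binary")
     for l in layers for b in bit_widths}

# Objective: maximize total learned importance of the chosen configuration
prob += pulp.lpSum(importance[l][b] * x[l, b] for l in layers for b in bit_widths)

# Each layer gets exactly one bit-width
for l in layers:
    prob += pulp.lpSum(x[l, b] for b in bit_widths) == 1

# Stay within the BitOps budget
prob += pulp.lpSum(bitops[l][b] * x[l, b] for l in layers for b in bit_widths) <= budget

prob.solve(pulp.PULP_CBC_CMD(msg=False))
assignment = {l: b for (l, b) in x if x[l, b].value() > 0.5}
print(assignment)  # {0: 4, 1: 4, 2: 4} for these toy numbers
```

Because the importance scores and cost terms are precomputed, the solver runs once over a handful of binary variables per layer, which is why a single ILP call can replace an iterative, accuracy-evaluating search loop.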
Pages: 259-275
Number of pages: 17
Related papers
50 records in total
  • [41] One-Shot Model for Mixed-Precision Quantization
    Koryakovskiy, Ivan
    Yakovleva, Alexandra
    Buchnev, Valentin
    Isaev, Temur
    Odinokikh, Gleb
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 7939 - 7949
  • [42] Joint Optimization of Dimension Reduction and Mixed-Precision Quantization for Activation Compression of Neural Networks
    Tai, Yu-Shan
    Chang, Cheng-Yang
    Teng, Chieh-Fang
    Chen, Yi-Ta
    Wu, An-Yeu
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2023, 42 (11) : 4025 - 4037
  • [43] CSMPQ: Class Separability Based Mixed-Precision Quantization
    Wang, Mingkai
    Jin, Taisong
    Zhang, Miaohui
    Yu, Zhengtao
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, ICIC 2023, PT I, 2023, 14086 : 544 - 555
  • [44] Hardware-Centric AutoML for Mixed-Precision Quantization
    Wang, Kuan
    Liu, Zhijian
    Lin, Yujun
    Lin, Ji
    Han, Song
    International Journal of Computer Vision, 2020, 128 : 2035 - 2048
  • [45] Hardware-Aware DNN Compression via Diverse Pruning and Mixed-Precision Quantization
    Balaskas, Konstantinos
    Karatzas, Andreas
    Sad, Christos
    Siozios, Kostas
    Anagnostopoulos, Iraklis
    Zervakis, Georgios
    Henkel, Jorg
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTING, 2024, 12 (04) : 1079 - 1092
  • [46] AMED: Automatic Mixed-Precision Quantization for Edge Devices
    Kimhi, Moshe
    Rozen, Tal
    Mendelson, Avi
    Baskin, Chaim
    MATHEMATICS, 2024, 12 (12)
  • [47] Hessian-based mixed-precision quantization with transition aware training for neural networks
    Huang, Zhiyong
    Han, Xiao
    Yu, Zhi
    Zhao, Yunlan
    Hou, Mingyang
    Hu, Shengdong
    NEURAL NETWORKS, 2025, 182
  • [48] Explicit Model Size Control and Relaxation via Smooth Regularization for Mixed-Precision Quantization
    Chikin, Vladimir
    Solodskikh, Kirill
    Zhelavskaya, Irina
    COMPUTER VISION, ECCV 2022, PT XII, 2022, 13672 : 1 - 16
  • [49] Data Quality-Aware Mixed-Precision Quantization via Hybrid Reinforcement Learning
    Wang, Yingchun
    Guo, Song
    Guo, Jingcai
    Zhang, Yuanhong
    Zhang, Weizhan
    Zheng, Qinghua
    Zhang, Jie
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, : 1 - 14
  • [50] Sample-Wise Dynamic Precision Quantization for Neural Network Acceleration
    Li, Bowen
    Xiong, Dongliang
    Huang, Kai
    Jiang, Xiaowen
    Yao, Hao
    Chen, Junjian
    Claesen, Luc
    IEICE ELECTRONICS EXPRESS, 2022, 19 (16)