Mixed-Precision Neural Network Quantization via Learned Layer-Wise Importance

Cited by: 29
Authors
Tang, Chen [1 ]
Ouyang, Kai [1 ]
Wang, Zhi [1 ,4 ]
Zhu, Yifei [2 ]
Ji, Wen [3 ,4 ]
Wang, Yaowei [4 ]
Zhu, Wenwu [1 ]
Affiliations
[1] Tsinghua Univ, Beijing, Peoples R China
[2] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[3] Chinese Acad Sci, Inst Comp Technol, Beijing, Peoples R China
[4] Peng Cheng Lab, Shenzhen, Peoples R China
Source
Funding
Beijing Natural Science Foundation;
Keywords
Mixed-precision quantization; Model compression;
DOI
10.1007/978-3-031-20083-0_16
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
The exponentially large discrete search space in mixed-precision quantization (MPQ) makes it hard to determine the optimal bit-width for each layer. Previous works usually resort to iterative search methods on the training set, which consume hundreds or even thousands of GPU-hours. In this study, we reveal that some unique learnable parameters in quantization, namely the scale factors in the quantizer, can serve as importance indicators of a layer, reflecting the contribution of that layer to the final accuracy at certain bit-widths. Because these indicators naturally perceive the numerical transformation during quantization-aware training, they provide precise per-layer quantization sensitivity metrics. However, a deep network always contains hundreds of such indicators, and training them one by one would incur excessive time cost. To overcome this issue, we propose a joint training scheme that obtains all indicators at once, considerably speeding up indicator training by parallelizing the otherwise sequential training runs. With these learned importance indicators, we formulate the MPQ search problem as a one-time integer linear programming (ILP) problem. This avoids iterative search and significantly reduces search time without restricting the bit-width search space. For example, MPQ search on ResNet18 with our indicators takes only 0.06 s, improving time efficiency by orders of magnitude compared with iterative search methods. Extensive experiments also show that our approach achieves SOTA accuracy on ImageNet across a wide range of models under various constraints (e.g., BitOps, compression rate).
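To make the one-time ILP formulation concrete, the following is a minimal sketch in Python using the off-the-shelf PuLP solver. It is not the authors' implementation: the layer names, the importance scores imp, the BitOps costs cost, and the budget value are illustrative placeholders standing in for the learned scale-factor indicators and measured layer costs.

# Minimal sketch (not the authors' code): bit-width assignment as a one-shot ILP.
# Assumes learned importance scores imp[l][b] (contribution of layer l at bit-width b)
# and BitOps costs cost[l][b]; all numbers below are illustrative placeholders.
import pulp

layers = ["conv1", "conv2", "conv3"]          # placeholder layer names
bit_choices = [2, 4, 8]                       # candidate bit-widths

# Dummy importance: deeper layers matter more, higher bit-widths contribute more.
imp = {l: {b: 0.1 * (i + 1) * b for b in bit_choices} for i, l in enumerate(layers)}
# Dummy BitOps cost: proportional to bit-width, scaled per layer.
cost = {l: {b: 1.0e6 * (i + 1) * b for b in bit_choices} for i, l in enumerate(layers)}
budget = 16.0e6                               # example BitOps constraint

prob = pulp.LpProblem("mpq_bit_assignment", pulp.LpMaximize)

# Binary variable x[l][b] = 1 iff layer l is assigned bit-width b.
x = {l: {b: pulp.LpVariable(f"x_{l}_{b}", cat="Binary") for b in bit_choices} for l in layers}

# Objective: maximize total learned importance of the chosen configuration.
prob += pulp.lpSum(imp[l][b] * x[l][b] for l in layers for b in bit_choices)

# Each layer receives exactly one bit-width.
for l in layers:
    prob += pulp.lpSum(x[l][b] for b in bit_choices) == 1

# The whole configuration must respect the BitOps budget.
prob += pulp.lpSum(cost[l][b] * x[l][b] for l in layers for b in bit_choices) <= budget

prob.solve(pulp.PULP_CBC_CMD(msg=False))
assignment = {l: next(b for b in bit_choices if x[l][b].value() == 1) for l in layers}
print(assignment)

Running this prints one bit-width per layer from a single solver call, mirroring how a one-shot ILP solve can replace an iterative search over the exponential configuration space once per-layer importance and cost are known.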
Pages: 259-275
Number of pages: 17
Related Papers
50 records in total
  • [31] Activation Distribution-based Layer-wise Quantization for Convolutional Neural Networks
    Ki, Subin
    Kim, Hyun
    2022 INTERNATIONAL CONFERENCE ON ELECTRONICS, INFORMATION, AND COMMUNICATION (ICEIC), 2022,
  • [32] Automatic Mixed-Precision Quantization Search of BERT
    Zhao, Changsheng
    Hua, Ting
    Shen, Yilin
    Lou, Qian
    Jin, Hongxia
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 3427 - 3433
  • [33] A Novel Mixed-Precision Quantization Approach for CNNs
    Wu, Dan
    Wang, Yanzhi
    Fei, Yuqi
    Gao, Guowang
    IEEE ACCESS, 2025, 13 : 49309 - 49319
  • [34] Activation Density based Mixed-Precision Quantization for Energy Efficient Neural Networks
    Vasquez, Karina
    Venkatesha, Yeshwanth
    Bhattacharjee, Abhiroop
    Moitra, Abhishek
    Panda, Priyadarshini
    PROCEEDINGS OF THE 2021 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2021), 2021, : 1360 - 1365
  • [35] Towards layer-wise quantization for heterogeneous federated clients
    Xu, Yang
    Cheng, Junhao
    Xu, Hongli
    Guo, Changyu
    Liao, Yunming
    Yao, Zhiwei
    COMPUTER NETWORKS, 2025, 264
  • [36] Hardware-Centric AutoML for Mixed-Precision Quantization
    Wang, Kuan
    Liu, Zhijian
    Lin, Yujun
    Lin, Ji
    Han, Song
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2020, 128 (8-9) : 2035 - 2048
  • [37] REINFORCEMENT LEARNING-BASED LAYER-WISE QUANTIZATION FOR LIGHTWEIGHT DEEP NEURAL NETWORKS
    Jung, Juri
    Kim, Jonghee
    Kim, Youngeun
    Kim, Changick
    2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020, : 3070 - 3074
  • [38] Interpreting Convolutional Neural Networks via Layer-Wise Relevance Propagation
    Jia, Wohuan
    Zhang, Shaoshuai
    Jiang, Yue
    Xu, Li
    ARTIFICIAL INTELLIGENCE AND SECURITY, ICAIS 2022, PT I, 2022, 13338 : 457 - 467
  • [39] Temperature Balancing, Layer-wise Weight Analysis, and Neural Network Training
    Zhou, Yefan
    Pang, Tianyu
    Liu, Keqin
    Martin, Charles H.
    Mahoney, Michael W.
    Yang, Yaoqing
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [40] Mixed-Precision Collaborative Quantization for Fast Object Tracking
    Xie, Yefan
    Guo, Yanwei
    Hou, Xuan
    Zheng, Jiangbin
    ADVANCES IN BRAIN INSPIRED COGNITIVE SYSTEMS, BICS 2023, 2024, 14374 : 229 - 238