Learned Image Compression with Fixed-point Arithmetic

Cited by: 6
Authors
Sun, Heming [1 ,2 ,3 ]
Yu, Lu [2 ]
Katto, Jiro [1 ]
Affiliations
[1] Waseda Univ, Shinjuku City, Japan
[2] Zhejiang Univ, Hangzhou, Peoples R China
[3] JST PRESTO, Saitama, Japan
Keywords
Image compression; neural networks; quantization; fixed-point; fine-tuning;
DOI
10.1109/PCS50896.2021.9477496
CLC classification
TM (Electrical engineering); TN (Electronics and communication technology);
Subject classification codes
0808; 0809;
Abstract
Learned image compression (LIC) has achieved superior coding performance to traditional image compression standards such as HEVC intra in terms of both PSNR and MS-SSIM. However, most LIC frameworks are based on floating-point arithmetic, which has two potential problems. First, using traditional 32-bit floating-point incurs a large memory and computational cost. Second, decoding might fail because of floating-point errors arising between different encoding/decoding platforms. To solve these two problems: 1) We linearly quantize the weights in the main path to 8-bit fixed-point arithmetic, and propose a fine-tuning scheme to reduce the coding loss caused by the quantization; the analysis transform and the synthesis transform are fine-tuned layer by layer. 2) We use a look-up table (LUT) for the cumulative distribution function (CDF) to avoid floating-point errors. When a latent follows a Gaussian distribution with non-zero mean, we restrict the latent to a fixed range around its mean so that the CDF LUT can be shared across different mean values. As a result, 8-bit weight quantization achieves negligible coding loss compared with the 32-bit floating-point anchor. In addition, the proposed CDF LUT ensures correct coding on various CPU and GPU hardware platforms.
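The weight quantization in step 1) can be sketched as symmetric per-tensor linear quantization. This is a minimal NumPy illustration, not the paper's exact scheme: the function names and the max-magnitude-to-127 mapping are assumptions, and the paper's layer-by-layer fine-tuning is not shown.

```python
import numpy as np

def quantize_weights_8bit(w):
    """Linearly quantize a float32 weight tensor to signed 8-bit fixed point.

    Symmetric per-tensor quantization: the largest magnitude maps to 127.
    Returns the int8 weights plus the scale needed to dequantize them.
    """
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximation of the original float32 weights.
    return q.astype(np.float32) * scale

# Quantize a small random weight matrix and bound the round-trip error:
# rounding to the nearest step can be off by at most half a step (scale / 2).
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(4, 4)).astype(np.float32)
q, scale = quantize_weights_8bit(w)
max_err = float(np.max(np.abs(dequantize(q, scale) - w)))
```

In the paper's setting, each convolution layer would be quantized this way and then fine-tuned to recover the small coding loss introduced by rounding.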
Pages: 106-110 (5 pages)
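The CDF LUT in step 2) can be sketched as an integer table for a zero-mean Gaussian: because each latent is clipped to a fixed window around its own mean, the same table serves every mean value, and integer table entries make encoder and decoder bit-exact across platforms. The function name, the 16-bit precision, and the renormalization details below are illustrative assumptions, not the paper's exact construction.

```python
import math

def build_cdf_lut(sigma, half_range, precision=16):
    """Integer CDF table for a zero-mean Gaussian with std sigma, covering
    integer symbols in [-half_range, half_range].

    Returns half_range * 2 + 2 cumulative counts in [0, 2**precision];
    symbol k occupies the interval [lut[i], lut[i + 1]).
    """
    total = 1 << precision

    def cdf(x):
        # Continuous Gaussian CDF via the error function.
        return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

    # CDF evaluated at the symbol boundaries k - 0.5.
    bounds = [cdf(k - 0.5) for k in range(-half_range, half_range + 2)]
    # Renormalize the clipped interval to [0, total] and round to integers.
    lo, hi = bounds[0], bounds[-1]
    lut = [round((b - lo) / (hi - lo) * total) for b in bounds]
    # Ensure every symbol keeps a nonzero probability slot.
    for i in range(1, len(lut)):
        if lut[i] <= lut[i - 1]:
            lut[i] = lut[i - 1] + 1
    return lut

lut = build_cdf_lut(sigma=1.0, half_range=4)
```

A range coder then uses `lut[i]` and `lut[i + 1]` as the integer frequency interval of the i-th symbol; to encode a latent `y` with mean `mu`, one would look up the clipped offset `round(y) - round(mu)`, so no floating-point CDF evaluation happens at decode time.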