Learned Image Compression with Fixed-point Arithmetic

Cited by: 6
Authors
Sun, Heming [1 ,2 ,3 ]
Yu, Lu [2 ]
Katto, Jiro [1 ]
Affiliations
[1] Waseda Univ, Shinjuku City, Japan
[2] Zhejiang Univ, Hangzhou, Peoples R China
[3] JST PRESTO, Saitama, Japan
Keywords
Image compression; neural networks; quantization; fixed-point; fine-tuning;
DOI
10.1109/PCS50896.2021.9477496
CLC classification
TM [Electrical Engineering]; TN [Electronic and Communication Technology];
Discipline codes
0808; 0809;
Abstract
Learned image compression (LIC) has achieved coding performance superior to traditional image compression standards such as HEVC intra, in terms of both PSNR and MS-SSIM. However, most LIC frameworks are based on floating-point arithmetic, which has two potential problems. The first is that using traditional 32-bit floating-point consumes large amounts of memory and computation. The second is that decoding might fail because of floating-point errors arising from different encoding/decoding platforms. To solve these two problems: 1) We linearly quantize the weights in the main path to 8-bit fixed-point arithmetic, and propose a fine-tuning scheme to reduce the coding loss caused by the quantization. The analysis transform and synthesis transform are fine-tuned layer by layer. 2) We exploit a look-up table (LUT) for the cumulative distribution function (CDF) to avoid floating-point errors. When a latent node follows a non-zero-mean Gaussian distribution, to share the CDF LUT across different mean values, we restrict the latent node to a certain range around the mean. As a result, 8-bit weight quantization achieves negligible coding loss compared with the 32-bit floating-point anchor. In addition, the proposed CDF LUT ensures correct coding on various CPU and GPU hardware platforms.
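The 8-bit weight quantization described in the abstract can be sketched as follows. This is a minimal illustration of per-layer symmetric linear quantization, not the paper's exact scheme (the scale selection and the layer-by-layer fine-tuning loop are not reproduced here); `quantize_weights_int8` and `dequantize` are hypothetical helper names.

```python
import numpy as np

def quantize_weights_int8(w):
    """Linearly quantize a weight tensor to signed 8-bit fixed point.

    A minimal sketch of per-layer symmetric quantization; the paper's
    exact scheme may differ. Returns the int8 tensor and the float
    scale needed to dequantize.
    """
    # Symmetric range: map max |w| to the int8 limit 127.
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Reconstruct an approximate float tensor, e.g. for a fine-tuning pass.
    return q.astype(np.float32) * scale

# Rounding bounds the per-weight error by half a quantization step.
w = np.random.randn(3, 3).astype(np.float32)
q, s = quantize_weights_int8(w)
assert np.max(np.abs(dequantize(q, s) - w)) <= s / 2 + 1e-6
```

Fine-tuning would then proceed one layer at a time: freeze the quantized layers processed so far, and retrain the remaining float layers to compensate for the accumulated quantization error.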
Pages: 106-110 (5 pages)
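The shared CDF LUT described in the abstract can be illustrated with the following sketch. It assumes an integer-precision table built for a zero-mean Gaussian over a symmetric window, which every mean value reuses by clipping latents to a range around the (rounded) mean; `build_cdf_lut`, `symbol_interval`, `radius`, and `precision_bits` are illustrative names and parameters, not the paper's actual implementation.

```python
import math

def build_cdf_lut(sigma, radius, precision_bits=16):
    """Integer CDF LUT for a zero-mean Gaussian over [-radius, radius].

    Entry k holds the scaled CDF at bin edge k - 0.5, so consecutive
    entries bound the probability mass of one integer symbol. Using
    integers avoids platform-dependent floating-point CDF evaluation.
    """
    scale = 1 << precision_bits
    def cdf(x):
        return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))
    edges = [cdf(k - 0.5) for k in range(-radius, radius + 2)]
    # Renormalize so the table spans exactly [0, scale] in integers.
    lo, hi = edges[0], edges[-1]
    return [round((e - lo) / (hi - lo) * scale) for e in edges]

def symbol_interval(y, mu, lut, radius):
    """Map a latent y with mean mu to a shared-LUT probability interval.

    Clipping round(y) - round(mu) into [-radius, radius] lets every
    mean value reuse the same zero-mean table.
    """
    r = min(max(round(y) - round(mu), -radius), radius)
    idx = r + radius
    return lut[idx], lut[idx + 1]  # cumulative bounds for the range coder
```

Because both encoder and decoder index the same integer table, the arithmetic/range coder sees identical probability intervals on any CPU or GPU, which is the cross-platform determinism the CDF LUT is meant to guarantee.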