LAE-Net: A locally-adaptive embedding network for low-light image enhancement

Cited: 42
Authors
Liu, Xiaokai [1 ]
Ma, Weihao [1 ]
Ma, Xiaorui [2 ]
Wang, Jie [1 ]
Affiliations
[1] Dalian Maritime Univ, 1 Linghai Rd, Dalian 116026, Liaoning, Peoples R China
[2] Dalian Univ Technol, 2 Linggong Rd, Dalian 116024, Liaoning, Peoples R China
Keywords
Locally-adaptive; Image enhancement; Multi-distribution; Image entropy; Kernel selection; Contrast enhancement; Quality assessment; Histogram
DOI
10.1016/j.patcog.2022.109039
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In the low-light enhancement task, one of the major challenges lies in how to balance the image enhancement properties of light intensity, detail presentation and color fidelity. In natural scenes, the multi-distribution of frequency and illumination characteristics in the spatial domain makes this balance more difficult. To solve this problem, we propose a Locally-Adaptive Embedding Network, namely LAE-Net, to realize high-quality low-light image enhancement with locally-adaptive kernel selection and feature adaptation for multi-distribution issues. Specifically, for the frequency multi-distribution, we rethink the spatial-frequency characteristic of human eyes, experimentally explore the relationship among the receptive field size, the image spatial frequency and the light enhancement properties, and propose an Entropy-Inspired Kernel-Selection Convolution, where each neuron can adaptively adjust its receptive field size according to its spatial frequency as characterized by information entropy. For the illumination multi-distribution, we propose an Illumination Attentive Transfer subnet, where the neurons can simultaneously sense global consistency and local details, hint where to focus the enhancement effort, and adjust the refined features accordingly. Extensive experiments with ablation analysis demonstrate the effectiveness of our method, which outperforms many related state-of-the-art techniques on four benchmark datasets: MEF, LIME, NPE and DICM. © 2022 Elsevier Ltd. All rights reserved.
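The abstract gives no implementation details, but the entropy-guided kernel selection it describes can be sketched roughly as follows. The PyTorch module below is an illustrative assumption, not the authors' released code: the bin count, window size, kernel sizes and all names (local_entropy, EntropyKernelSelectConv) are hypothetical. It computes a local information-entropy map from the input and uses it to blend, per pixel, parallel convolution branches with different receptive-field sizes.

```python
# Minimal sketch of an entropy-guided kernel-selection convolution.
# Assumptions: 16 intensity bins, a 9x9 entropy window, 3/5/7 kernel branches.
import torch
import torch.nn as nn
import torch.nn.functional as F


def local_entropy(gray, bins=16, window=9):
    """Shannon entropy of intensity histograms in a sliding window.

    gray: (N, 1, H, W) tensor with values in [0, 1].
    Returns an (N, 1, H, W) entropy map.
    """
    # Hard-assign each pixel to an intensity bin (one-hot over the bin axis).
    idx = (gray * (bins - 1)).round().long().clamp(0, bins - 1)           # (N,1,H,W)
    onehot = F.one_hot(idx.squeeze(1), bins).permute(0, 3, 1, 2).float()  # (N,B,H,W)
    # Average pooling over the window turns one-hot maps into local histograms.
    p = F.avg_pool2d(onehot, window, stride=1, padding=window // 2)
    return -(p * (p + 1e-8).log()).sum(dim=1, keepdim=True)               # (N,1,H,W)


class EntropyKernelSelectConv(nn.Module):
    """Blend small/large-kernel branches per pixel, guided by local entropy."""

    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes
        )
        # Map the scalar entropy map to one selection logit per branch.
        self.gate = nn.Conv2d(1, len(kernel_sizes), kernel_size=1)

    def forward(self, x):
        gray = x.mean(dim=1, keepdim=True)                  # crude luminance proxy
        gray = (gray - gray.amin()) / (gray.amax() - gray.amin() + 1e-8)
        weights = F.softmax(self.gate(local_entropy(gray)), dim=1)  # (N,K,H,W)
        feats = torch.stack([b(x) for b in self.branches], dim=1)   # (N,K,C,H,W)
        return (weights.unsqueeze(2) * feats).sum(dim=1)            # (N,C,H,W)


if __name__ == "__main__":
    y = EntropyKernelSelectConv(3, 16)(torch.rand(1, 3, 64, 64))
    print(y.shape)  # torch.Size([1, 16, 64, 64])
```

Under these assumptions, texture-rich (high-entropy) and flat (low-entropy) regions weight the branches differently, giving each spatial location its own effective receptive field, which is the kind of locally adaptive kernel selection the abstract describes.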
Pages: 11
Related papers (50 in total)
  • [21] Generative adversarial network for low-light image enhancement
    Li, Fei
    Zheng, Jiangbin
    Zhang, Yuan-fang
    IET IMAGE PROCESSING, 2021, 15 (07) : 1542 - 1552
  • [22] A Pipeline Neural Network for Low-Light Image Enhancement
    Guo, Yanhui
    Ke, Xue
    Ma, Jie
    Zhang, Jun
    IEEE ACCESS, 2019, 7 : 13737 - 13744
  • [23] Weight Uncertainty Network for Low-Light Image Enhancement
    Jin, Yutao
    Sun, Yue
    Chen, Xiaoyan
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT VIII, ICIC 2024, 2024, 14869 : 106 - 117
  • [24] Exposure difference network for low-light image enhancement
    Jiang, Shengqin
    Mei, Yongyue
    Wang, Peng
    Liu, Qingshan
    PATTERN RECOGNITION, 2024, 156
  • [25] Hierarchical guided network for low-light image enhancement
    Feng, Xiaomei
    Li, Jinjiang
    Fan, Hui
    IET IMAGE PROCESSING, 2021, 15 (13) : 3254 - 3266
  • [26] sRrsR-Net: A New Low-Light Image Enhancement Network via Raw Image Reconstruction
    Hong, Zhiyong
    Zhen, Dexin
    Xiong, Liping
    Li, Xuechen
    Lin, Yuhan
    APPLIED SCIENCES-BASEL, 2025, 15 (01)
  • [27] Deep Lightening Network for Low-light Image Enhancement
    Wang, Li-Wen
    Liu, Zhi-Song
    Siu, Wan-Chi
    Lun, Daniel Pak-Kong
    2020 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2020
  • [28] Invertible network for unpaired low-light image enhancement
    Zhang, Jize
    Wang, Haolin
    Wu, Xiaohe
    Zuo, Wangmeng
    VISUAL COMPUTER, 2024, 40 (01): 109 - 120
  • [29] Cross-level feature adaptive fusion network for low-light image enhancement
    Liang, Liming
    Zhu, Chenkun
    Yang, Yuan
    Li, Renjie
    CHINESE JOURNAL OF LIQUID CRYSTALS AND DISPLAYS, 2024, 39 (06) : 856 - 866
  • [30] MPC-Net: Multi-Prior Collaborative Network for Low-Light Image Enhancement
    She, Chunyan
    Han, Fujun
    Wang, Lidan
    Duan, Shukai
    Huang, Tingwen
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (10) : 10385 - 10398