Low-light image dehazing network with aggregated context-aware attention

Cited by: 0
Authors
Wang K. [1 ]
Cheng J. [1 ]
Huang S. [1 ]
Cai K. [1 ]
Wang W. [1 ]
Li Y. [1 ]
Affiliations
[1] State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an
Keywords
attention mechanism; color shift loss; deep learning; feature fusion; low-light image dehazing;
DOI
10.19665/j.issn1001-2400.2023.02.003
Abstract
Existing low-light dehazing algorithms are affected by the low and uneven illumination of hazy images, so their dehazed outputs often suffer from loss of detail and color distortion. To address these problems, a low-light image dehazing network with aggregated context-aware attention (ACANet) is proposed. First, an intra-layer context-aware attention module is introduced to identify and highlight significant features at the same scale along the channel and spatial dimensions, respectively, so that the network can break through the constraints of the local field of view and extract image texture information more efficiently. Second, an inter-layer context-aware attention module is introduced to efficiently fuse multi-scale features; high-level features are mapped to the signal subspace through projection operations to further enhance the reconstruction of image details. Finally, a CIEDE2000 color-shift loss function is adopted to constrain the image hue in the CIELAB color space and is jointly optimized with the L2 loss, enabling the network to learn image colors accurately and alleviate the severe color-shift problem. Both quantitative and qualitative experimental results on several datasets demonstrate that the proposed ACANet outperforms existing dehazing methods. Specifically, ACANet improves the PSNR of dehazed images by 8.8% compared with the baseline network, and enhances image visibility with richer details and more natural color. © 2023 Science Press. All rights reserved.
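To illustrate the joint color-shift loss described in the abstract, here is a minimal NumPy sketch that converts sRGB images to CIELAB and combines an L2 term with a CIELAB color-difference term. Note the hedges: the paper uses the full CIEDE2000 formula, whereas for brevity this sketch substitutes the simpler CIE76 ΔE (Euclidean distance in Lab), and the weighting factor `lam` is an assumed hyperparameter, not a value from the paper.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert sRGB values in [0, 1], shape (..., 3), to CIELAB (D65 white)."""
    rgb = np.asarray(rgb, dtype=np.float64)
    # Linearize: inverse sRGB gamma
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ using the standard sRGB/D65 matrix
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ M.T
    # Normalize by the D65 reference white
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    # XYZ -> Lab nonlinearity
    eps, kappa = 216 / 24389, 24389 / 27
    f = np.where(xyz > eps, np.cbrt(xyz), (kappa * xyz + 16) / 116)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def joint_loss(pred, target, lam=0.1):
    """L2 loss plus a CIELAB color-difference term (CIE76 Delta-E stand-in
    for CIEDE2000; `lam` is an assumed balancing weight)."""
    l2 = np.mean((pred - target) ** 2)
    delta_e = np.mean(np.linalg.norm(
        srgb_to_lab(pred) - srgb_to_lab(target), axis=-1))
    return l2 + lam * delta_e
```

Because the Lab space separates lightness (L) from chroma (a, b), penalizing ΔE pushes the network toward perceptually correct hues rather than merely low pixel-wise error, which is the motivation the abstract gives for combining the color-shift term with the L2 loss.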
Pages: 23-32
Page count: 9
References (20 entries)
  • [1] SUN Jingrong, XIE Linchang, DU Mengxin, et al., A Nonlinear Transform Adaptive Transmittance Dehazing Algorithm, Journal of Xidian University, 49, 1, pp. 208-215, (2022)
  • [2] LIU Y, WANG A, ZHOU H, JIA P., Single Nighttime Image Dehazing Based on Image Decomposition, Signal Processing, 183, 5, (2021)
  • [3] JIANG B, MENG H, MA X, et al., Nighttime Image Dehazing with Modified Models of Color Transfer and Guided Image Filter, Multimedia Tools and Applications, 77, 3, pp. 3125-3141, (2018)
  • [4] ZHANG J, CAO Y, WANG Z., Nighttime Haze Removal Based on a New Imaging Model, 2014 IEEE International Conference on Image Processing (ICIP), pp. 4557-4561, (2014)
  • [5] ZHANG J, CAO Y, FANG S, et al., Fast Haze Removal for Nighttime Image Using Maximum Reflectance Prior, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7418-7426, (2017)
  • [6] LIU Y, YAN Z, WU A, et al., Nighttime Image Dehazing Based on Variational Decomposition Model, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 640-649, (2022)
  • [7] LIAO Y, SU Z, LIANG X, et al., HDP-Net: Haze Density Prediction Network for Nighttime Dehazing, Pacific Rim Conference on Multimedia, pp. 469-480, (2018)
  • [8] YAN W, TAN R T, DAI D., Nighttime Defogging Using High-Low Frequency Decomposition and Grayscale-Color Networks, European Conference on Computer Vision, pp. 473-488, (2020)
  • [9] WANG B, HU L, WEI B, et al., Nighttime Image Dehazing Using Color Cast Removal and Dual Path Multi-Scale Fusion Strategy, Frontiers of Computer Science, 4, pp. 147-159, (2022)
  • [10] DONG H, PAN J, XIANG L, et al., Multi-Scale Boosted Dehazing Network with Dense Feature Fusion, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2157-2167, (2020)