IAIFNet: An Illumination-Aware Infrared and Visible Image Fusion Network

Cited by: 6
Authors
Yang, Qiao [1 ]
Zhang, Yu [2 ]
Zhao, Zijing [1 ]
Zhang, Jian [1 ]
Zhang, Shunli [1 ]
Affiliations
[1] Beijing Jiaotong Univ, Sch Software Engn, Beijing 100044, Peoples R China
[2] Beihang Univ, Sch Astronaut, Beijing 100083, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Adaptive differential fusion; image fusion; illumination enhancement; GENERATIVE ADVERSARIAL NETWORK; QUALITY ASSESSMENT; DEEP FRAMEWORK; ARCHITECTURE; NEST;
DOI
10.1109/LSP.2024.3399119
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Infrared and visible image fusion (IVIF) aims to create fused images that encompass the comprehensive features of both input images, thereby facilitating downstream vision tasks. However, existing methods often overlook the illumination conditions of low-light environments, producing fused images in which targets lack prominence. To address these shortcomings, we introduce the Illumination-Aware Infrared and Visible Image Fusion Network, abbreviated as IAIFNet. Within our framework, an illumination enhancement network first estimates the incident illumination maps of the input images, based on which the textural details of inputs captured under low-light conditions are selectively enhanced. Subsequently, an image fusion network adeptly merges the salient features of the illumination-enhanced infrared and visible images to produce a fused image of superior visual quality. Our network incorporates a Salient Target Aware Module (STAM) and an Adaptive Differential Fusion Module (ADFM), which enhance gradient and contrast, respectively, with sensitivity to brightness. Extensive experiments validate the superiority of our method over seven state-of-the-art approaches on the public LLVIP dataset. Additionally, the lightweight design of our framework enables highly efficient fusion of infrared and visible images. Finally, results on a downstream multi-object detection task demonstrate the significant performance boost our method provides for detecting objects in low-light environments.
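The abstract describes a two-stage pipeline: an illumination enhancement network estimates incident illumination maps and brightens the low-light inputs, and a fusion network then merges the enhanced images. The PyTorch sketch below illustrates that overall structure only; every module name, layer size, and the Retinex-style division are our own assumptions, and STAM/ADFM are stubbed out as a plain convolutional encoder rather than reproducing the authors' design.

```python
# Minimal sketch of the two-stage pipeline described in the abstract.
# All names, layer sizes, and the Retinex-style enhancement are
# illustrative assumptions, NOT the authors' implementation; STAM and
# ADFM are replaced here by a plain convolutional encoder.
import torch
import torch.nn as nn


class IlluminationEnhancer(nn.Module):
    """Estimates an incident illumination map and brightens the input
    (assumed Retinex-style: image = reflectance * illumination)."""

    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, 3, padding=1), nn.Sigmoid(),  # map in (0, 1)
        )

    def forward(self, x):
        illum = self.net(x).clamp(min=1e-3)     # avoid division by zero
        enhanced = (x / illum).clamp(0.0, 1.0)  # reflectance-like brightened image
        return enhanced, illum


class FusionNet(nn.Module):
    """Fuses enhanced infrared and visible images (placeholder for STAM/ADFM)."""

    def __init__(self, channels: int = 1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2 * channels, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Conv2d(32, channels, 3, padding=1)

    def forward(self, ir, vis):
        feats = self.encoder(torch.cat([ir, vis], dim=1))
        return torch.sigmoid(self.decoder(feats))


# Usage: single-channel IR and visible images with values in [0, 1].
enhancer, fusion = IlluminationEnhancer(), FusionNet()
ir, vis = torch.rand(1, 1, 256, 256), torch.rand(1, 1, 256, 256)
ir_e, _ = enhancer(ir)
vis_e, _ = enhancer(vis)
fused = fusion(ir_e, vis_e)
print(fused.shape)  # torch.Size([1, 1, 256, 256])
```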
Pages: 1374 - 1378
Page count: 5
Related Papers
50 records in total
  • [41] Illumination-aware Digital Image Compositing for Full-length Human Figures
    Ohkawara, Masaru
    Fujishiro, Issei
    2021 INTERNATIONAL CONFERENCE ON CYBERWORLDS (CW 2021), 2021, : 17 - 24
  • [42] Infrared and Visible Image Fusion with a Generative Adversarial Network and a Residual Network
    Xu, Dongdong
    Wang, Yongcheng
    Xu, Shuyan
    Zhu, Kaiguang
    Zhang, Ning
    Zhang, Xin
    APPLIED SCIENCES-BASEL, 2020, 10 (02)
  • [43] Illumination-Aware Image Segmentation for Real-Time Moving Cast Shadow Suppression
    Ghahremannezhad, Hadi
    Shi, Hang
    Liu, Chengjun
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGING SYSTEMS AND TECHNIQUES (IST 2022), 2022
  • [44] Makeup Extraction of 3D Representation via Illumination-Aware Image Decomposition
    Yang, Xingchao
    Taketomi, Takafumi
    Kanamori, Yoshihiro
    COMPUTER GRAPHICS FORUM, 2023, 42 (02) : 293 - 307
  • [45] Infrared and Visible Image Fusion Under Different Illumination Conditions Based on Illumination Effective Region Map
    Tong, Ying
    Chen, Jin
    IEEE ACCESS, 2019, 7: 151661 - 151668
  • [46] Probabilistic illumination-aware filtering for Monte Carlo rendering
    Doidge, Ian C.
    Jones, Mark W.
    VISUAL COMPUTER, 2013, 29 (6-8): 707 - 716
  • [47] Infrared and visible image fusion based on global context network
    Li, Yonghong
    Shi, Yu
    Pu, Xingcheng
    Zhang, Suqiang
    JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (05)
  • [48] Infrared and visible image fusion with supervised convolutional neural network
    An, Wen-Bo
    Wang, Hong-Mei
    OPTIK, 2020, 219
  • [49] A Dual-branch Network for Infrared and Visible Image Fusion
    Fu, Yu
    Wu, Xiao-Jun
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 10675 - 10680
  • [50] Infrared and Visible Image Fusion with Convolutional Neural Network and Transformer
    Yang, Yang
    Ren, Zhennan
    Li, Beichen
    LASER & OPTOELECTRONICS PROGRESS, 2023, 60 (16)