CLF-Net: Contrastive Learning for Infrared and Visible Image Fusion Network

Cited by: 25
Authors
Zhu, Zhengjie [1 ]
Yang, Xiaogang [1 ]
Lu, Ruitao [1 ]
Shen, Tong [1 ]
Xie, Xueli [1 ]
Zhang, Tao [1 ]
Affiliations
[1] Rocket Force Univ Engn, Coll Missile Engn, Xian 710038, Peoples R China
Keywords
Feature extraction; Task analysis; Image fusion; Image reconstruction; Computer vision; Estimation; Visualization; Contrastive learning; image fusion; infrared image; noise contrastive estimation (NCE); unsupervised learning; PERFORMANCE; FRAMEWORK; COLOR; VIDEO; NEST
DOI
10.1109/TIM.2022.3203000
CLC Number
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Code
0808; 0809
Abstract
In this article, we propose an effective infrared and visible image fusion network based on contrastive learning, called CLF-Net. A novel noise contrastive estimation framework is introduced into image fusion to maximize the mutual information between the fused image and the source images. First, an unsupervised contrastive learning framework is constructed to encourage the fused image to selectively retain the most similar features in local areas of the different source images. Second, we design a robust contrastive loss based on the deep representations of the images, combined with a structural similarity loss, to effectively guide the network in extracting and reconstructing features. Specifically, based on the deep-representation similarities and structural similarities between the fused image and the source images, the loss functions guide the feature extraction network to adaptively capture the salient targets of infrared images and the background textures of visible images; the features are then reconstructed in the most appropriate manner. In addition, our method is an unsupervised end-to-end model. The method is evaluated on public datasets, and extensive qualitative and quantitative analyses demonstrate that it outperforms existing state-of-the-art fusion methods. Our code is publicly available at https://github.com/zzj-dyj/CLF-Net
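The core technical idea in the abstract is a patch-wise noise contrastive estimation (NCE) objective that ties each local region of the fused image to the corresponding region of the source images, combined with a structural similarity term. The following is a minimal PyTorch sketch of such a loss under those assumptions; the function names, temperature, and weighting are illustrative, not CLF-Net's actual implementation (the linked repository holds the authors' code).

    # Minimal sketch (PyTorch assumed). Patch-wise InfoNCE in the spirit of
    # noise contrastive estimation: each fused-feature patch is pulled toward
    # the source-feature patch at the same location and pushed away from
    # patches at other locations. Names and constants are illustrative.
    import torch
    import torch.nn.functional as F

    def patch_nce_loss(fused_feats, source_feats, temperature=0.07):
        # fused_feats, source_feats: (B, C, H, W) deep representations of the
        # fused image and one source image from a shared feature extractor.
        b, c, h, w = fused_feats.shape
        # Flatten the spatial grid into N = H*W patch vectors and L2-normalize.
        f = F.normalize(fused_feats.flatten(2).transpose(1, 2), dim=-1)   # (B, N, C)
        s = F.normalize(source_feats.flatten(2).transpose(1, 2), dim=-1)  # (B, N, C)
        # Cosine similarity of every fused patch to every source patch.
        logits = torch.bmm(f, s.transpose(1, 2)) / temperature            # (B, N, N)
        # Positives sit on the diagonal: fused patch i matches source patch i.
        labels = torch.arange(h * w, device=logits.device).expand(b, -1)  # (B, N)
        return F.cross_entropy(logits.flatten(0, 1), labels.flatten())

    def fusion_loss(fused_feats, ir_feats, vis_feats, ssim_term, alpha=1.0):
        # Contrast the fused image against both sources and add a structural
        # similarity term (ssim_term = 1 - SSIM from any standard SSIM
        # implementation), as the abstract describes; alpha is a hypothetical
        # balancing weight, not a value from the paper.
        nce = patch_nce_loss(fused_feats, ir_feats) + patch_nce_loss(fused_feats, vis_feats)
        return nce + alpha * ssim_term

In this sketch the negatives come from other spatial locations of the same source image, which is what makes the contrast act on local areas rather than on whole images.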
Pages: 15
Related Papers
50 records in total
  • [1] A Contrastive Learning Approach for Infrared-Visible Image Fusion
    Gupta, Ashish Kumar
    Barnwal, Meghna
    Mishra, Deepak
    PATTERN RECOGNITION AND MACHINE INTELLIGENCE, PREMI 2023, 2023, 14301: 199-208
  • [2] DIVIDUAL: A Disentangled Visible And Infrared Image Fusion Contrastive Learning Method
    Yang, Shaoqi
    He, Dan
    JOURNAL OF APPLIED SCIENCE AND ENGINEERING, 2025, 28 (05): 955-968
  • [3] IPLF: A Novel Image Pair Learning Fusion Network for Infrared and Visible Image
    Zhu, Depeng
    Zhan, Weida
    Jiang, Yichun
    Xu, Xiaoyu
    Guo, Renzhong
    IEEE SENSORS JOURNAL, 2022, 22 (09): 8808-8817
  • [4] Denoiser Learning for Infrared and Visible Image Fusion
    Liu, Jinyang
    Li, Shutao
    Tan, Lishan
    Dian, Renwei
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024
  • [5] Interactive residual coordinate attention and contrastive learning for infrared and visible image fusion in triple frequency bands
    Xie, Zhihua
    Zong, Sha
    Li, Qiang
    Cai, Peiqi
    Zhan, Yaxiong
    Liu, Guodong
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [6] MSFNet: MultiStage Fusion Network for infrared and visible image fusion
    Wang, Chenwu
    Wu, Junsheng
    Zhu, Zhixiang
    Chen, Hao
    NEUROCOMPUTING, 2022, 507: 26-39
  • [7] MCnet: Multiscale visible image and infrared image fusion network
    Sun, Le
    Li, Yuhang
    Zheng, Min
    Zhong, Zhaoyi
    Zhang, Yanchun
    SIGNAL PROCESSING, 2023, 208
  • [8] An Infrared and Visible Image Fusion Network Based on Res2Net and Multiscale Transformer
    Tan, Binxi
    Yang, Bin
    SENSORS, 2025, 25 (03)
  • [9] S2F-Net: Shared-Specific Fusion Network for Infrared and Visible Image Fusion
    Zhao, Yijing
    Xia, Yuchao
    Ding, Yi
    Liu, Yumeng
    Liu, Shuai
    Wang, Hongan
    PROCEEDINGS OF THE 4TH ANNUAL ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2024, 2024: 497-505