Interactive Feature Embedding for Infrared and Visible Image Fusion

Cited by: 8
Authors
Zhao, Fan [1 ]
Zhao, Wenda [2 ,3 ]
Lu, Huchuan [2 ,3 ]
Affiliations
[1] Liaoning Normal Univ, Sch Phys & Elect Technol, Dalian 116029, Peoples R China
[2] Dalian Univ Technol, Key Lab Intelligent Control & Optimizat Ind Equipm, Minist Educ, Dalian 116024, Peoples R China
[3] Dalian Univ Technol, Sch Informat & Commun Engn, Dalian 116024, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Image fusion; Task analysis; Image reconstruction; Fuses; Self-supervised learning; Data mining; Hierarchical representations; infrared and visible image fusion; interactive feature embedding; self-supervised learning; MULTI-FOCUS; SPARSE REPRESENTATION; SHEARLET TRANSFORM; DECOMPOSITION; ENHANCEMENT; INFORMATION; FRAMEWORK;
DOI
10.1109/TNNLS.2023.3264911
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
General deep learning-based methods for infrared and visible image fusion rely on an unsupervised mechanism, retaining vital information through elaborately designed loss functions. However, such a mechanism hinges on the loss design and cannot guarantee that all vital information in the source images is sufficiently extracted. In this work, we propose a novel interactive feature embedding in a self-supervised learning framework for infrared and visible image fusion, attempting to overcome the issue of vital information degradation. With the help of the self-supervised learning framework, hierarchical representations of the source images can be extracted efficiently. In particular, interactive feature embedding models are carefully designed to build a bridge between self-supervised learning and infrared and visible image fusion learning, achieving vital information retention. Qualitative and quantitative evaluations show that the proposed method performs favorably against state-of-the-art methods.
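The abstract describes the method only at a high level; the authors' interactive feature embedding relies on trained self-supervised encoders and cannot be reproduced from this record. As a hedged, hand-crafted illustration of the general idea of weighting each source by its local activity so that salient content from either modality is retained, here is a minimal NumPy sketch (the function names and the gradient-magnitude saliency proxy are assumptions for illustration, not the authors' model):

```python
import numpy as np

def activity(x):
    # Gradient-magnitude map as a crude proxy for local saliency
    # (stands in for the learned hierarchical features of the paper).
    gx = np.abs(np.diff(x, axis=1, prepend=x[:, :1]))
    gy = np.abs(np.diff(x, axis=0, prepend=x[:1, :]))
    return gx + gy

def fuse(ir, vis):
    # Per-pixel softmax over the two activity maps yields a convex
    # combination: wherever one source is locally more salient, its
    # intensity dominates the fused output.
    a_ir, a_vis = activity(ir), activity(vis)
    w = np.exp(a_ir) / (np.exp(a_ir) + np.exp(a_vis))
    return w * ir + (1.0 - w) * vis
```

Because the weights form a convex combination at every pixel, the fused image always stays within the per-pixel range of the two sources, which is one simple way to avoid the "vital information degradation" the abstract targets.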
Pages: 12810 - 12822
Page count: 13
Related Papers
50 records in total
  • [1] An Infrared and Visible Image Fusion Approach of Self-calibrated Residual Networks and Feature Embedding
    Dai J.
    Luo Z.
    Li C.
    Recent Advances in Computer Science and Communications, 2023, 16 (02) : 2 - 13
  • [2] ITFuse: An interactive transformer for infrared and visible image fusion
    Tang, Wei
    He, Fazhi
    Liu, Yu
    PATTERN RECOGNITION, 2024, 156
  • [3] Infrared and Visible Image Fusion Based on Sparse Feature
    Ding Wen-shan
    Bi Du-yan
    He Lin-yuan
    Fan Zun-lin
    Wu Dong-peng
    ACTA PHOTONICA SINICA, 2018, 47 (09)
  • [4] MetaFusion: Infrared and Visible Image Fusion via Meta-Feature Embedding from Object Detection
    Zhao, Wenda
    Xie, Shigeng
    Zhao, Fan
    He, You
    Lu, Huchuan
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 13955 - 13965
  • [5] Infrared and Visible Image Fusion via General Feature Embedding From CLIP and DINOv2
    Luo, Yichuang
    Wang, Fang
    Liu, Xiaohu
    IEEE ACCESS, 2024, 12 : 99362 - 99371
  • [6] Infrared and visible image fusion and detection based on interactive training strategy and feature filter extraction module
    Chen, Bingxin
    Luo, Shaojuan
    Wu, Heng
    Chen, Meiyun
    He, Chunhua
    OPTICS AND LASER TECHNOLOGY, 2024, 179
  • [7] SFINet: A semantic feature interactive learning network for full-time infrared and visible image fusion
    Song, Wenhao
    Li, Qilei
    Gao, Mingliang
    Chehri, Abdellah
    Jeon, Gwanggil
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 261
  • [8] Review of Feature-Level Infrared and Visible Image Fusion
    Zhang, Honggang
    Yang, Haitao
    Zheng, Fengjie
    Wang, Jinyu
    Zhou, Xixuan
    Wang, Haoyu
    Xu, Yifan
    Computer Engineering and Applications, 2024, 60 (18) : 17 - 31
  • [9] FDFuse: Infrared and Visible Image Fusion Based on Feature Decomposition
    Cheng, Muhang
    Huang, Haiyan
    Liu, Xiangyu
    Mo, Hongwei
    Wu, Songling
    Zhao, Xiongbo
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2025, 74
  • [10] SIE: infrared and visible image fusion based on scene information embedding
    Geng Y.
    Diao W.
    Zhao Y.
    Multimedia Tools and Applications, 2025, 84 (3) : 1463 - 1488