Infrared and Visible Image Fusion via Attention-Based Adaptive Feature Fusion

Cited by: 5
Authors
Wang, Lei [1 ]
Hu, Ziming [1 ]
Kong, Quan [2 ]
Qi, Qian [1 ]
Liao, Qing [1 ]
Affiliations
[1] Wuhan Inst Technol, Hubei Key Lab Opt Informat & Pattern Recognit, Wuhan 430205, Peoples R China
[2] Wuhan Inst Technol, Sch Art & Design, Wuhan 430205, Peoples R China
Keywords
image fusion; adaptive fusion strategy; attention mechanism; network
DOI
10.3390/e25030407
CLC Number
O4 [Physics]
Discipline Code
0702
Abstract
Infrared and visible image fusion methods based on feature decomposition are able to generate good fused images. However, most of them employ manually designed, simple feature fusion strategies in the reconstruction stage, such as addition or concatenation. These strategies ignore the relative importance of different features and may therefore suffer from low contrast, blurred results, or information loss. To address this problem, we designed an adaptive fusion network that synthesizes decoupled common structural features and distinct modal features under an attention-based adaptive fusion (AAF) strategy. The AAF module adaptively computes the weights assigned to different features according to their relative importance. Moreover, the structural features from different sources are also synthesized under the AAF strategy before reconstruction, providing more complete structural information. More important features thus automatically receive more attention, and the advantageous information they contain is represented more faithfully in the final fused images. Experiments on several datasets demonstrate a clear improvement in image fusion quality with our method.
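The abstract describes the AAF strategy only at a high level. The following is a minimal, hypothetical PyTorch sketch of how an attention-based adaptive weighting of two feature streams could be realized; the class name AAFusion, the pooling-plus-MLP scoring, and the softmax normalization are illustrative assumptions, not the authors' actual implementation.

    # Hypothetical sketch of an attention-based adaptive fusion (AAF) block:
    # per-source importance scores are derived from global pooling and
    # normalized with a softmax, then used to weight and sum the features.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AAFusion(nn.Module):
        """Fuse two feature maps with adaptively learned weights (illustrative only)."""

        def __init__(self, channels: int, reduction: int = 4):
            super().__init__()
            # A small bottleneck MLP maps pooled statistics to a scalar importance score.
            self.score = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, 1),
            )

        def forward(self, feat_ir: torch.Tensor, feat_vis: torch.Tensor) -> torch.Tensor:
            # Global average pooling summarizes each map: (B, C, H, W) -> (B, C)
            s_ir = self.score(feat_ir.mean(dim=(2, 3)))    # (B, 1)
            s_vis = self.score(feat_vis.mean(dim=(2, 3)))  # (B, 1)
            # Softmax over the two sources yields relative importance weights.
            w = F.softmax(torch.cat([s_ir, s_vis], dim=1), dim=1)  # (B, 2)
            w_ir = w[:, 0].view(-1, 1, 1, 1)
            w_vis = w[:, 1].view(-1, 1, 1, 1)
            # Weighted sum replaces a fixed addition/concatenation fusion rule.
            return w_ir * feat_ir + w_vis * feat_vis

    # Example usage with random feature maps of matching shape.
    if __name__ == "__main__":
        fuse = AAFusion(channels=64)
        ir = torch.randn(2, 64, 32, 32)
        vis = torch.randn(2, 64, 32, 32)
        print(fuse(ir, vis).shape)  # torch.Size([2, 64, 32, 32])

The key design point the abstract emphasizes is that the fusion weights are computed from the features themselves rather than fixed in advance, so the relative contribution of each source can vary per input.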
Pages: 21
Related Papers
50 records in total
  • [31] Infrared and Visible Image Fusion Method via Interactive Self-attention
    Yang Fan
    Wang Zhishe
    Sun Jing
    Yu Zhaofa
    ACTA PHOTONICA SINICA, 2024, 53 (06)
  • [32] Region parallel fusion algorithm based on infrared and visible image feature
    Tong Wu-qin
    Yang Hua
    Huang Chao-chao
    Jin Wei
    Yang Li
    INTERNATIONAL SYMPOSIUM ON PHOTOELECTRONIC DETECTION AND IMAGING 2007: IMAGE PROCESSING, 2008, 6623
  • [33] Fusion of infrared and visible images based on image enhancement and feature extraction
    Luo, Jinzhe
    Rong, Chuanzhen
    Jia, Yongxing
    Yang, Yu
    Zhu, Ying
    2019 11TH INTERNATIONAL CONFERENCE ON INTELLIGENT HUMAN-MACHINE SYSTEMS AND CYBERNETICS (IHMSC 2019), VOL 1, 2019, : 212 - 216
  • [34] Infrared and Visible Image Fusion Algorithm Based on Feature Optimization and GAN
    Hao Shuai
    Li Jiahao
    Ma Xu
    He Tian
    Sun Siyan
    Li Tong
    ACTA PHOTONICA SINICA, 2023, 52 (12)
  • [35] Multigrained Attention Network for Infrared and Visible Image Fusion
    Li, Jing
    Huo, Hongtao
    Li, Chang
    Wang, Renhua
    Sui, Chenhong
    Liu, Zhao
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2021, 70
  • [36] HATF: Multi-Modal Feature Learning for Infrared and Visible Image Fusion via Hybrid Attention Transformer
    Liu, Xiangzeng
    Wang, Ziyao
    Gao, Haojie
    Li, Xiang
    Wang, Lei
    Miao, Qiguang
    REMOTE SENSING, 2024, 16 (05)
  • [37] Infrared and Visible Image Fusion Based on Innovation Feature Simultaneous Decomposition
    He, Guiqing
    Dong, Dandan
    Xing, Siyuan
    Zhao, Ximei
    2017 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC 2017), 2017, : 1174 - 1177
  • [38] A three-dimensional feature-based fusion strategy for infrared and visible image fusion
    Liu, Xiaowen
    Huo, Hongtao
    Yang, Xin
    Li, Jing
    PATTERN RECOGNITION, 2025, 157
  • [39] Interactive Feature Embedding for Infrared and Visible Image Fusion
    Zhao, Fan
    Zhao, Wenda
    Lu, Huchuan
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (09) : 12810 - 12822
  • [40] Infrared and Visible Image Fusion Based on Saliency Adaptive Weight Map
    Ding Haiyang
    Dong Mingli
    Liu Chenhua
    Lu Xitian
    Guo Chentong
    LASER & OPTOELECTRONICS PROGRESS, 2024, 61 (10)