PFCFuse: A Poolformer and CNN Fusion Network for Infrared-Visible Image Fusion

Cited: 0
Authors
Hu, Xinyu [1 ]
Liu, Yang [2 ]
Yang, Feng [3 ]
Affiliations
[1] Guangxi Univ, Sch Comp Elect & Informat, Nanning 530004, Peoples R China
[2] Univ Oulu, Ctr Machine Vis & Signal Anal, Oulu 90014, Finland
[3] Guangxi Univ, Guangxi Key Lab Multimedia Commun Network Technol, Nanning 530004, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Adaptation models; Visualization; Statistical analysis; Predictive models; Feature extraction; Transformers; Data models; Dual-branch feature extraction; infrared image; multimodal image fusion; poolformer; visible image;
DOI
10.1109/TIM.2024.3450061
Chinese Library Classification
TM [Electrical Technology]; TN [Electronic Technology and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Infrared-visible image fusion plays a central role in multimodal image fusion. By integrating feature information, we obtain more comprehensive and richer visual data to enhance image quality. However, current image fusion methods often rely on intricate networks to extract parameters from multimodal source images, making it challenging to fully leverage the valuable information needed for high-quality fusion results. In this research, we propose a Poolformer-convolutional neural network (CNN) dual-branch feature extraction fusion network for infrared and visible images, termed PFCFuse. The network fully exploits the key features in the images and adaptively preserves the critical ones. First, we provide a dual-branch Poolformer-CNN feature extractor, in which Poolformer blocks extract low-frequency global information and basic spatial pooling operations substitute for the attention module of the transformer. Second, the model is designed with an adaptively adjusted α-Huber loss, which stably adjusts model parameters and reduces the influence of outliers on model predictions, enhancing the model's robustness while maintaining precision. Compared with state-of-the-art fusion models such as U2Fusion, RFNet, TarDAL, and CDDFuse, we obtain excellent results in both qualitative and quantitative experiments. Compared with CDDFuse, the latest dual-branch feature extraction model, our model halves the parameter count. The code is available at https://github.com/HXY13/PFCFuse-Image-Fusion.
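The abstract names two building blocks without spelling them out: a Poolformer token mixer, which replaces the transformer's self-attention with plain spatial pooling, and a Huber-style loss that damps outliers. The sketch below is a minimal, dependency-free illustration of both ideas, assuming the standard Poolformer mixer (local average pooling minus the identity) and the classical Huber form with a hand-set `delta`; the paper's adaptive α variant and the actual PFCFuse layers are not reproduced here.

```python
def pool_token_mixer(x, pool=3):
    # Poolformer-style token mixer: local average pooling minus the input,
    # used in place of self-attention. x is a 2-D feature map (list of lists).
    h, w = len(x), len(x[0])
    r = pool // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            window = [x[ii][jj]
                      for ii in range(max(0, i - r), min(h, i + r + 1))
                      for jj in range(max(0, j - r), min(w, j + r + 1))]
            out[i][j] = sum(window) / len(window) - x[i][j]
    return out


def huber_loss(pred, target, delta=1.0):
    # Classical Huber loss: quadratic near zero, linear for large residuals,
    # which limits the pull of outliers on the gradient.
    e = pred - target
    a = abs(e)
    if a <= delta:
        return 0.5 * e * e
    return delta * (a - 0.5 * delta)
```

On a constant feature map the mixer returns zeros (pooling equals the identity there), which is why the residual connection around it carries the signal; the Huber loss matches the squared error inside `delta` and grows only linearly outside it.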
Pages: 14