PFCFuse: A Poolformer and CNN Fusion Network for Infrared-Visible Image Fusion

Cited: 0
Authors
Hu, Xinyu [1 ]
Liu, Yang [2 ]
Yang, Feng [3 ]
Affiliations
[1] Guangxi Univ, Sch Comp Elect & Informat, Nanning 530004, Peoples R China
[2] Univ Oulu, Ctr Machine Vis & Signal Anal, Oulu 90014, Finland
[3] Guangxi Univ, Guangxi Key Lab Multimedia Commun Network Technol, Nanning 530004, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Adaptation models; Visualization; Statistical analysis; Predictive models; Feature extraction; Transformers; Data models; Dual-branch feature extraction; infrared image; multimodal image fusion; poolformer; visible image;
DOI
10.1109/TIM.2024.3450061
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808 ; 0809 ;
Abstract
Infrared-visible image fusion plays a central role in multimodal image fusion. By integrating feature information from both modalities, we obtain more comprehensive and richer visual data that enhances image quality. However, current image fusion methods often rely on intricate networks to extract parameters from multimodal source images, making it challenging to fully leverage valuable information for high-quality fusion results. In this research, we propose a poolformer-convolutional neural network (CNN) dual-branch feature extraction fusion network for the fusion of infrared and visible images, termed PFCFuse. The network fully exploits key features in the images and adaptively preserves the critical ones. First, we design a dual-branch poolformer-CNN feature extractor that uses poolformer blocks to extract low-frequency global information, where basic spatial pooling operations substitute for the transformer's attention module. Second, the model employs an adaptively adjusted α-Huber loss, which stably adjusts model parameters and reduces the influence of outliers on model predictions, thereby enhancing the model's robustness while maintaining precision. Compared with state-of-the-art fusion models such as U2Fusion, RFNet, TarDAL, and CDDFuse, we obtain excellent results in both qualitative and quantitative experiments. Compared to CDDFuse, the latest dual-branch feature extraction model, our parameter count is reduced by half. The code is available at https://github.com/HXY13/PFCFuse-Image-Fusion.
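The two ideas the abstract highlights — spatial pooling as a drop-in replacement for transformer attention, and an outlier-robust Huber-style loss — can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation: the function names, the 3×3 window, and the fixed δ are assumptions, and the paper's adaptive adjustment of the α-Huber loss is omitted.

```python
import numpy as np

def huber_loss(error, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails, so large
    outlier errors contribute linearly rather than quadratically."""
    abs_e = np.abs(error)
    quadratic = 0.5 * abs_e ** 2
    linear = delta * (abs_e - 0.5 * delta)
    return np.where(abs_e <= delta, quadratic, linear)

def pool_token_mixer(x, k=3):
    """PoolFormer-style token mixer on a 2-D feature map: a simple k x k
    average pooling (stride 1, edge padding) stands in for self-attention.
    Returning AvgPool(x) - x lets the surrounding residual connection see
    only the pooled context, as in the original PoolFormer formulation."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.empty_like(x, dtype=float)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = xp[i:i + k, j:j + k].mean()
    return out - x
```

Note that the token mixer has no learnable parameters at all, which is why swapping it in for attention cuts the parameter count so sharply.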
Pages: 14
Related Papers
50 records total
  • [41] FusionGAN: A generative adversarial network for infrared and visible image fusion
    Ma, Jiayi
    Yu, Wei
    Liang, Pengwei
    Li, Chang
    Jiang, Junjun
    INFORMATION FUSION, 2019, 48 : 11 - 26
  • [42] IDFusion: An Infrared and Visible Image Fusion Network for Illuminating Darkness
    Lv, Guohua
    Wang, Xiyan
    Wei, Zhonghe
    Cheng, Jinyong
    Ma, Guangxiao
    Bao, Hanju
    PROCEEDINGS OF THE 2024 27TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, CSCWD 2024, 2024, : 3140 - 3145
  • [43] Unsupervised densely attention network for infrared and visible image fusion
    Li, Yang
    Wang, Jixiao
    Miao, Zhuang
    Wang, Jiabao
    MULTIMEDIA TOOLS AND APPLICATIONS, 2020, 79 (45-46) : 34685 - 34696
  • [44] A Multilevel Hybrid Transmission Network for Infrared and Visible Image Fusion
    Li, Qingqing
    Han, Guangliang
    Liu, Peixun
    Yang, Hang
    Chen, Dianbing
    Sun, Xinglong
    Wu, Jiajia
    Liu, Dongxu
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2022, 71
  • [46] Infrared and visible image fusion of convolutional neural network and NSST
    Huan K.
    Li X.
    Cao Y.
    Chen X.
    Hongwai yu Jiguang Gongcheng/Infrared and Laser Engineering, 2022, 51 (03):
  • [47] Multiscale channel attention network for infrared and visible image fusion
    Zhu, Jiahui
    Dou, Qingyu
    Jian, Lihua
    Liu, Kai
    Hussain, Farhan
    Yang, Xiaomin
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2021, 33 (22):
  • [48] HDCCT: Hybrid Densely Connected CNN and Transformer for Infrared and Visible Image Fusion
    Li, Xue
    He, Hui
    Shi, Jin
    ELECTRONICS, 2024, 13 (17)
  • [49] Infrared and visible image fusion based on fast alternating guided filtering and CNN
    Yang Y.
    Li Y.
    Dang J.
    Wang Y.
    Guangxue Jingmi Gongcheng/Optics and Precision Engineering, 2023, 31 (10): : 1548 - 1562
  • [50] Infrared and Visible Image Fusion Based on Autoencoder Composed of CNN-Transformer
    Wang, Hongmei
    Li, Lin
    Li, Chenkai
    Lu, Xuanyu
    IEEE ACCESS, 2023, 11 : 78956 - 78969