Infrared and Visible Image Fusion Method Using Salience Detection and Convolutional Neural Network

Cited: 5
Authors
Wang, Zetian [1 ]
Wang, Fei [1 ,2 ]
Wu, Dan [1 ]
Gao, Guowang [1 ]
Affiliations
[1] Xian Shiyou Univ, Sch Elect Engn, Xian 710065, Peoples R China
[2] Hunan Univ, State Key Lab Adv Design & Mfg Vehicle Body, Changsha 410082, Hunan, Peoples R China
Keywords
image fusion; salience detection; convolutional neural network; QUALITY ASSESSMENT
DOI
10.3390/s22145430
CLC number
O65 [Analytical Chemistry]
Subject classification codes
070302; 081704
Abstract
This paper presents an algorithm for infrared and visible image fusion that uses salience detection and a convolutional neural network to integrate discriminative features and improve the overall quality of visual perception. First, a global contrast-based salience detection algorithm is applied to the infrared image to extract salient features, highlighting high-brightness regions while suppressing low-brightness regions and image noise. Second, a dedicated loss function, built on the salience detection principle, is designed for infrared images to guide feature extraction and reconstruction in the network, while the more common gradient loss serves as the loss function for visible images. A modified residual network then performs feature extraction and image reconstruction. Extensive qualitative and quantitative experiments show that the fused images are sharper, contain more scene information, and look more like high-quality visible images. Generalization experiments further demonstrate that the proposed model generalizes well regardless of sensor limitations. Overall, the proposed algorithm outperforms other state-of-the-art methods.
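To make the loss design described in the abstract concrete, the following is a minimal PyTorch sketch under stated assumptions, not the authors' exact formulation: tensors are single-channel images in [0, 1] of shape (B, 1, H, W); a histogram-based (HC-style) global-contrast salience map stands in for the paper's salience detector; and the names global_contrast_salience, fusion_loss, alpha, and beta are hypothetical. It illustrates the pairing of a salience-weighted intensity loss for the infrared branch with a gradient loss for the visible branch.

    # Hypothetical sketch of a salience-weighted fusion loss; assumptions noted above.
    import torch
    import torch.nn.functional as F

    def global_contrast_salience(ir: torch.Tensor) -> torch.Tensor:
        """HC-style global-contrast salience for a batch of single-channel
        infrared images in [0, 1]; input and output shape (B, 1, H, W)."""
        b, _, h, w = ir.shape
        levels = torch.linspace(0.0, 1.0, 256, device=ir.device)
        sal = torch.zeros_like(ir)
        for i in range(b):
            # Normalised intensity histogram of this image.
            hist = torch.histc(ir[i], bins=256, min=0.0, max=1.0) / (h * w)
            # Salience of level l: histogram-weighted distance to all levels.
            contrast = (hist[None, :] * (levels[:, None] - levels[None, :]).abs()).sum(1)
            idx = (ir[i] * 255).long().clamp(0, 255)
            sal[i] = contrast[idx]
        # Rescale per image to [0, 1] so the map can act as a pixel weight.
        lo = sal.amin(dim=(2, 3), keepdim=True)
        hi = sal.amax(dim=(2, 3), keepdim=True)
        return (sal - lo) / (hi - lo + 1e-8)

    def fusion_loss(fused, ir, vis, alpha=1.0, beta=1.0):
        """Salience-weighted intensity loss toward the infrared image plus a
        gradient loss tying the fused image's edges to the visible image."""
        w = global_contrast_salience(ir)
        loss_ir = (w * (fused - ir).abs()).mean()

        # Fixed Sobel kernels for horizontal/vertical gradients.
        kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                          device=fused.device).view(1, 1, 3, 3)
        ky = kx.transpose(2, 3)

        def grad(img):
            return torch.cat([F.conv2d(img, kx, padding=1),
                              F.conv2d(img, ky, padding=1)], dim=1)

        loss_grad = (grad(fused) - grad(vis)).abs().mean()
        return alpha * loss_ir + beta * loss_grad

A hypothetical training step would compute fused = net(torch.cat([ir, vis], dim=1)) for some fusion network net and back-propagate fusion_loss(fused, ir, vis); the weights alpha and beta set the trade-off between preserving salient thermal targets and preserving visible texture.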
Pages: 17
Related papers
50 items in total
  • [31] MACCNet: Multiscale Attention and Cross-Convolutional Network for Infrared and Visible Image Fusion
    Yang, Yong
    Zhou, Na
    Wan, Weiguo
    Huang, Shuying
    IEEE SENSORS JOURNAL, 2024, 24 (10): 16587-16600
  • [32] Infrared and visible image fusion using guided filter and convolutional sparse representation
    Liu X.-H.
    Chen Z.-B.
    Qin M.-Z.
    2018, Chinese Academy of Sciences, 26: 1242-1253
  • [33] FERFusion: A Fast and Efficient Recursive Neural Network for Infrared and Visible Image Fusion
    Yang, Kaixuan
    Xiang, Wei
    Chen, Zhenshuai
    Liu, Yunpeng
    SENSORS, 2024, 24 (08)
  • [34] A Novel Infrared and Visible Image Fusion Approach Based on Adversarial Neural Network
    Chen, Xianglong
    Wang, Haipeng
    Liang, Yaohui
    Meng, Ying
    Wang, Shifeng
    SENSORS, 2022, 22 (01)
  • [35] Fusion of Infrared and Visible Images Using Fuzzy Based Siamese Convolutional Network
    Bhalla, Kanika
    Koundal, Deepika
    Bhatia, Surbhi
    Rahmani, Mohammad Khalid Imam
    Tahir, Muhammad
    CMC-COMPUTERS MATERIALS & CONTINUA, 2022, 70 (03): 5503-5518
  • [36] Thermal Defect Detection for Substation Equipment Based on Infrared Image Using Convolutional Neural Network
    Wang, Kaixuan
    Zhang, Jiaqiao
    Ni, Hongjun
    Ren, Fuji
    ELECTRONICS, 2021, 10 (16)
  • [37] Infrared and visible image fusion using structure-transferring fusion method
    Kong, Xiangyu
    Liu, Lei
    Qian, Yunsheng
    Wang, Yan
    INFRARED PHYSICS & TECHNOLOGY, 2019, 98: 161-173
  • [38] Point clouds-image fusion by convolutional method for data integrity using neural network
    Zhang, Yan
    Bao, Hong
    2019 15TH INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND SECURITY (CIS 2019), 2019: 345-348
  • [39] Multi-focus image fusion method using energy of Laplacian and convolutional neural network
    Zhai H.
    Zhuang Y.
    Harbin Gongye Daxue Xuebao/Journal of Harbin Institute of Technology, 2020, 52 (05): 137-147
  • [40] Adaptive infrared and visible image fusion method by using rolling guidance filter and saliency detection
    Lin, Yingcheng
    Cao, Dingxin
    Zhou, Xichuan
    OPTIK, 2022, 262