Multi-Scenario Aware Infrared and Visible Image Fusion Framework Based on Visual Multi-Pathway Mechanism

Times Cited: 0
Authors
Gao, Shaobing [1 ]
Zhan, Zongyi [1 ]
Kuang, Mei [1 ]
Affiliations
[1] Sichuan Univ, Coll Comp Sci, Chengdu 610065, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Infrared and visible image fusion; Brain-inspired computing; Multi-scenario aware framework; NETWORK;
DOI
10.11999/JEIT221361
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808 ; 0809 ;
Abstract
Most existing infrared and visible image fusion methods neglect the disparities between daytime and nighttime scenarios, treating them as similar, which degrades fusion accuracy. In contrast, the adaptive properties of the biological visual system allow it to capture useful information from source images and to process visual information adaptively. This concept offers a new direction for improving the accuracy of deep-learning-based infrared and visible image fusion methods. Inspired by the visual multi-pathway mechanism, this study proposes a multi-scenario aware infrared and visible image fusion framework that incorporates two distinct visual pathways capable of perceiving daytime and nighttime scenarios. Specifically, daytime- and nighttime-scenario-aware fusion networks process the source images to generate two intermediate fusion results, and a learnable weighting network then combines them into the final result. Additionally, the proposed framework employs a novel center-surround convolution module that simulates the center-surround receptive fields widely distributed in biological vision. Qualitative and quantitative experiments demonstrate that the proposed framework significantly improves the quality of the fused images and outperforms existing methods on objective evaluation metrics.
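The abstract describes two computational ingredients: a center-surround operator modeled on biological receptive fields, and a pixel-wise weighted combination of two intermediate fusion results. The sketch below illustrates both ideas in plain NumPy under stated assumptions: the center-surround kernel is approximated as a difference of Gaussians (the paper's actual module parameters are not given here, so `sigma_c` and `sigma_s` are illustrative), and the learnable weighting network is replaced by an externally supplied weight map. This is a minimal illustration, not the authors' implementation.

```python
import numpy as np

def center_surround_kernel(size=5, sigma_c=1.0, sigma_s=2.0):
    """Difference-of-Gaussians kernel approximating a center-surround
    receptive field. sigma_c/sigma_s are illustrative assumptions, not
    values from the paper. The kernel sums to ~0, so it responds to
    local contrast rather than uniform intensity."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    gauss = lambda s: np.exp(-(xx**2 + yy**2) / (2.0 * s**2))
    center, surround = gauss(sigma_c), gauss(sigma_s)
    return center / center.sum() - surround / surround.sum()

def conv2d(img, kernel):
    """Naive 'same' 2-D convolution with zero padding (clarity over speed)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def fuse(day_result, night_result, weight):
    """Pixel-wise weighted fusion of the two scenario-aware intermediate
    results. In the paper the weight map comes from a learnable network;
    here it is simply passed in (scalar or array in [0, 1])."""
    return weight * day_result + (1.0 - weight) * night_result
```

A usage pattern consistent with the abstract would be to filter each source image with `center_surround_kernel()` inside the two scenario-aware networks, then blend their outputs with `fuse` using a weight map predicted per pixel.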
Pages: 2749-2758
Page count: 10