Infrared and visible image fusion based on fast alternating guided filtering and CNN

Cited by: 1
Authors
Yang Y. [1 ]
Li Y. [1 ]
Dang J. [1 ]
Wang Y. [1 ]
Affiliations
[1] School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou
Keywords
convolutional neural network; fast alternating guided filtering; infrared; infrared feature extraction; visible image fusion
DOI: 10.37188/OPE.20233110.1548
Abstract
To address the loss of detail information, blurred edges, and artifacts in infrared and visible image fusion, this paper proposes a fast alternating guided filter that significantly improves computational efficiency while preserving the quality of the fused image. The proposed method combines this filter with a convolutional neural network (CNN) and infrared feature extraction to achieve effective fusion. First, quadtree decomposition and Bessel interpolation are used to extract the infrared brightness features of the source images, and an initial fusion image is obtained by combining them with the visible image. Second, base-layer and detail-layer information of the source images is obtained through fast alternating guided filtering; the base layers are fused into a base image using the CNN and the Laplace transform, and the detail layers are fused into a detail image using a saliency measurement method. Finally, the initial fusion map, base fusion map, and detail fusion map are summed to obtain the final fusion result. Owing to the fast alternating guided filtering and the feature extraction performance of the algorithm, the final result contains rich texture detail and clear edges. Experimental results indicate that the fused images have good visual fidelity; compared with other methods, the objective evaluation indicators of information entropy, standard deviation, spatial frequency, wavelet feature mutual information, visual fidelity, and average gradient improve by 9.9%, 6.8%, 43.6%, 11.3%, 32.3%, and 47.1% on average, respectively. © 2023 Chinese Academy of Sciences. All rights reserved.
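The overall structure described in the abstract (base/detail decomposition by guided filtering, separate fusion of the two layers, and summation of the fused maps) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses a plain single-pass guided filter instead of the proposed fast alternating variant, an average in place of the CNN and Laplace-transform base-layer fusion, a simple absolute-Laplacian saliency weight for the detail layer, and it omits the quadtree/Bessel initial fusion map.

import numpy as np
from scipy.ndimage import uniform_filter, laplace

def guided_filter(guide, src, radius=8, eps=1e-3):
    # Classic guided filter (He et al.): smooths src while following the edges of guide.
    size = 2 * radius + 1
    mean_i = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    var_i = uniform_filter(guide * guide, size) - mean_i * mean_i
    cov_ip = uniform_filter(guide * src, size) - mean_i * mean_p
    a = cov_ip / (var_i + eps)
    b = mean_p - a * mean_i
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def fuse(ir, vis):
    # ir, vis: grayscale float arrays in [0, 1] with the same shape.
    # Base/detail split: the base layer is a self-guided smoothing of each source image.
    base_ir, base_vis = guided_filter(ir, ir), guided_filter(vis, vis)
    detail_ir, detail_vis = ir - base_ir, vis - base_vis
    # Base-layer fusion: simple average (stand-in for the CNN + Laplace transform step).
    fused_base = 0.5 * (base_ir + base_vis)
    # Detail-layer fusion: keep the detail with the larger local saliency (|Laplacian| energy).
    sal_ir = uniform_filter(np.abs(laplace(ir)), 11)
    sal_vis = uniform_filter(np.abs(laplace(vis)), 11)
    fused_detail = np.where(sal_ir >= sal_vis, detail_ir, detail_vis)
    # Final result: sum of the fused layers, clipped back to the valid range.
    return np.clip(fused_base + fused_detail, 0.0, 1.0)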
Pages: 1548-1562
Number of pages: 14