An autoencoder deep residual network model for multi focus image fusion

Cited: 0
Authors
Shihabudeen H
Rajeesh J
Affiliations
[1] APJ Abdul Kalam Technological University,College of Engineering Thalassery
[2] College of Engineering Kidangoor,Department of Electronics
Keywords
Deep Learning; Deep CNN; Image fusion; Decoder; Multifocus; Depth of field
DOI: not available
Abstract
Image fusion technology consolidates data from multiple source images of the same target and performs highly effective information complementation; it is widely used in transportation, medicine, and surveillance. Because of the imaging instrument's depth-of-field limitations, it is difficult to capture every detail of a scene, and important features can be missed. To solve this problem, this study presents an effective multi-focus image fusion technique based on deep learning. The algorithm collects features from the source inputs and feeds these feature vectors into a convolutional neural network (CNN) to create feature maps; the resulting focus map gathers the critical data for image fusion. Focus maps produced by the encoder are combined using L2-norm and nuclear-norm methods. The combined focus maps are then passed to a deep CNN that transforms the source images into the fused all-in-focus image. The proposed nuclear-norm-based fusion model achieves good evaluation metrics for Entropy, Mutual Information, normalized MI, Qabf, and Structural Similarity Index Measure, with values of 7.6855, 8.7312, 1.1168, 0.7579, and 0.8669, respectively. The L2-norm strategy also offers good computational and experimental efficiency compared with other approaches. According to the experimental analysis, the proposed method outperforms many existing systems on a variety of performance parameters.
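The norm-based focus-map combination described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the function names, the 3×3 neighbourhood used for the nuclear norm, and the soft per-pixel weighting rule are all assumptions for the sake of the example.

```python
import numpy as np

def l2_activity(feat):
    # feat: (C, H, W) encoder feature maps; per-pixel L2 norm across channels
    return np.sqrt((feat ** 2).sum(axis=0))

def nuclear_activity(feat, k=3):
    # Per-pixel nuclear norm (sum of singular values) of the k x k
    # neighbourhood, with channels stacked as matrix rows.
    C, H, W = feat.shape
    pad = k // 2
    padded = np.pad(feat, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    act = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            patch = padded[:, i:i + k, j:j + k].reshape(C, -1)
            act[i, j] = np.linalg.norm(patch, "nuc")
    return act

def fuse(feat_a, feat_b, activity=l2_activity, eps=1e-12):
    # Soft per-pixel weights from the activity maps (illustrative rule):
    # pixels with higher activity in one source dominate the fused output.
    a1, a2 = activity(feat_a), activity(feat_b)
    w1 = a1 / (a1 + a2 + eps)
    return w1 * feat_a + (1.0 - w1) * feat_b
```

Swapping `activity=nuclear_activity` into `fuse` switches between the two combination strategies the paper compares; the L2 variant is far cheaper since it avoids a per-pixel SVD.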
Pages: 34773 - 34794 (21 pages)
Related papers (50 total)
  • [21] SESF-Fuse: an unsupervised deep model for multi-focus image fusion
    Ma, Boyuan
    Zhu, Yu
    Yin, Xiang
    Ban, Xiaojuan
    Huang, Haiyou
    Mukeshimana, Michele
    NEURAL COMPUTING & APPLICATIONS, 2021, 33 (11): 5793 - 5804
  • [22] Multi-focus image fusion method using energy of Laplacian and a deep neural network
    Zhai, Hao
    Zhuang, Yi
    APPLIED OPTICS, 2020, 59 (06): 1684 - 1694
  • [23] Multi-focus image fusion using deep support value convolutional neural network
    Du, ChaoBen
    Gao, SheSheng
    Liu, Ying
    Gao, BingBing
    OPTIK, 2019, 176 : 567 - 578
  • [25] Multi-Focus Image Fusion Based on Residual Network in Non-Subsampled Shearlet Domain
    Liu, Shuaiqi
    Wang, Jie
    Lu, Yucong
    Hu, Shaohai
    Ma, Xiaole
    Wu, Yifei
    IEEE ACCESS, 2019, 7 : 152043 - 152063
  • [26] SINGLE SENSOR IMAGE FUSION USING A DEEP RESIDUAL NETWORK
    Palsson, Frosti
    Sveinsson, Johannes R.
    Ulfarsson, Magnus O.
    2018 9TH WORKSHOP ON HYPERSPECTRAL IMAGE AND SIGNAL PROCESSING: EVOLUTION IN REMOTE SENSING (WHISPERS), 2018,
  • [27] Multi-Scale Visual Attention Deep Convolutional Neural Network for Multi-Focus Image Fusion
    Lai, Rui
    Li, Yongxue
    Guan, Juntao
    Xiong, Ai
    IEEE ACCESS, 2019, 7 : 114385 - 114399
  • [28] Multi-focus Image Fusion: Neural Network Approach
    Deshmukh, Vaidehi
    Chandsare, Aditi
    Gotmare, Vaishnavi
    Patil, Atul
    2017 INTERNATIONAL CONFERENCE ON COMPUTING, COMMUNICATION, CONTROL AND AUTOMATION (ICCUBEA), 2017,
  • [29] LNMF: lightweight network for multi-focus image fusion
    Zhou, Yang
    Liu, Kai
    Dou, Qingyu
    Liu, Zitao
    Jeon, Gwanggil
    Yang, Xiaomin
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (16) : 22335 - 22353