Image fusion technology consolidates data from multiple source images of the same scene and performs highly effective information complementation; it is widely used in the transportation, medical, and surveillance fields. Because of the limited depth of field of imaging instruments, a single image rarely captures every detail of a scene, and important features may be missed. To address this problem, this study proposes an efficient multi-focus image fusion technique based on deep learning. The algorithm extracts features from the source images and feeds these feature vectors into a convolutional neural network (CNN) to create feature maps, from which the focus map collects the critical information needed for fusion. Focus maps produced by the encoder are combined using L2 norm and nuclear norm methods. The combined focus maps are then passed to a deep CNN, which transforms the source images into the fused, all-in-focus image. The proposed nuclear-norm-based fusion model achieves good evaluation metrics for Entropy, Mutual Information, normalized MI, $Q_{abf}$, and the Structural Similarity Index Measure, with values of 7.6855, 8.7312, 1.1168, 0.7579, and 0.8669, respectively. The L2 norm strategy also offers better computational and experimental efficiency than the other approaches considered. According to the experimental analysis, the proposed method outperforms many existing systems on a variety of performance metrics.
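The L2 norm and nuclear norm combination steps described above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the function names, the non-overlapping patch size for the nuclear norm, and the normalized-weight fusion rule are all assumptions made here for clarity. It computes a per-pixel activity map from encoder feature maps, either as the channel-wise L2 norm or as the nuclear norm (sum of singular values) of local patches, and then fuses two source images with weights derived from those activity maps.

```python
import numpy as np

def l2_activity(features):
    """L2 norm across the channel axis gives a per-pixel activity map."""
    return np.linalg.norm(features, axis=-1)

def nuclear_activity(features, patch=8):
    """Nuclear norm (sum of singular values) of non-overlapping patches.

    Patch size is an illustrative choice, not taken from the paper.
    """
    h, w, c = features.shape
    act = np.zeros((h, w))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            block = features[i:i + patch, j:j + patch].reshape(-1, c)
            s = np.linalg.svd(block, compute_uv=False)
            act[i:i + patch, j:j + patch] = s.sum()
    return act

def fuse(img_a, img_b, act_a, act_b, eps=1e-12):
    """Weighted fusion: each pixel is a convex combination of the sources,
    weighted by the relative activity of each source at that pixel."""
    w_a = act_a / (act_a + act_b + eps)
    return w_a[..., None] * img_a + (1.0 - w_a[..., None]) * img_b
```

In the full pipeline, the fused focus map would be refined by the deep CNN decoder rather than applied directly; this sketch only shows how the two norm-based activity measures yield fusion weights.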