MSDCNet: Multi-stage and deep residual complementary multi-focus image fusion network based on multi-scale feature learning

Cited: 0
Authors
Hu, Gang [1 ,2 ]
Jiang, Jinlin [1 ]
Sheng, Guanglei [1 ]
Wei, Guo [3 ]
Affiliations
[1] Xian Univ Technol, Sch Comp Sci & Engn, Xian 710048, Shaanxi, Peoples R China
[2] Xian Univ Technol, Dept Appl Math, Xian 710054, Shaanxi, Peoples R China
[3] Univ North Carolina Pembroke, Pembroke, NC 28372 USA
Funding
National Natural Science Foundation of China;
Keywords
Multi-focus; Image fusion; Deep learning; Multi-scale feature extraction; Residual complementary;
DOI
10.1007/s10489-024-05983-0
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Addressing boundary blurring between focused and defocused regions is a key research problem in multi-focus image fusion, and effective use of multi-scale modules is essential for improving performance. This paper therefore proposes a multi-stage feature extraction and deep residual complementary multi-focus image fusion network. In the feature extraction stage, a V-shaped connection module captures the main objects and contours of the image, while a feature thinning extraction module uses dilated convolution to learn image details and refine textures at multiple scales. An advanced feature texture enhancement module targets boundary-blurred regions, enhancing texture details and improving fusion quality, and asymmetric convolution reduces the network's computational burden and improves feature learning efficiency. The fusion strategy uses a compound loss function to preserve image quality and prevent color distortion. The image reconstruction module uses residual connections with convolution kernels of different sizes to maintain feature consistency and improve image quality. The network adopts a dual-path Pseudo-Siamese structure that handles focused and defocused regions separately. Experimental results demonstrate the algorithm's effectiveness: on the Lytro dataset it achieves AG and EI values of 6.9 and 72.5, respectively, outperforming other methods, and the fusion metrics SD = 61.80, SF = 19.63, and VIF = 0.94 surpass existing algorithms, effectively resolving the boundary blurring problem while providing better visual perception and broader applicability.
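The abstract's multi-scale "feature thinning" relies on dilated convolution, which enlarges the receptive field without adding weights: with dilation d, a kernel of length k covers d*(k-1)+1 samples. A minimal 1-D pure-Python sketch of that mechanism (illustrative only; the function name and valid-mode boundary handling are assumptions, not the authors' implementation):

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """Valid-mode 1-D dilated convolution (cross-correlation form).

    With dilation d, a length-k kernel spans d*(k-1)+1 input samples,
    so stacking several dilation rates over the same weights captures
    features at multiple scales -- the idea behind multi-scale
    feature extraction modules like the one described above.
    """
    k = len(kernel)
    span = dilation * (k - 1) + 1  # receptive field of one output sample
    out = []
    for i in range(len(signal) - span + 1):
        out.append(sum(kernel[j] * signal[i + j * dilation] for j in range(k)))
    return out


# Same 3-tap difference kernel at two scales:
signal = [1, 2, 3, 4, 5, 6]
fine = dilated_conv1d(signal, [1, 0, -1], dilation=1)   # spans 3 samples
coarse = dilated_conv1d(signal, [1, 0, -1], dilation=2)  # spans 5 samples
```

Running both rates over one signal with shared weights is the key economy: the coarse branch sees a wider context at zero extra parameter cost.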
Pages: 22