Multi-focus;
Image fusion;
Deep learning;
Multi-scale feature extraction;
Residual complementary;
DOI:
10.1007/s10489-024-05983-0
CLC Classification Number:
TP18 [Artificial Intelligence Theory];
Discipline Classification Number:
081104 ;
0812 ;
0835 ;
1405 ;
Abstract:
Resolving boundary blur between focused and defocused regions is a key problem in multi-focus image fusion, and effective use of multi-scale modules is essential for strong performance. This paper therefore proposes a multi-stage feature extraction and deep residual complementary multi-focus image fusion network. In the feature extraction stage, a V-shaped connection module captures the main objects and contours of the image, while a feature thinning extraction module uses dilated convolution to learn image details and refine textures at multiple scales. An advanced feature texture enhancement module targets boundary-blur regions, enhancing texture detail and improving fusion quality. Asymmetric convolution reduces the network's computational burden and improves feature-learning efficiency. The fusion strategy employs a compound loss function to preserve image quality and prevent color distortion, and the image reconstruction module uses residual connections with convolution kernels of different sizes to maintain feature consistency and further improve image quality. The network adopts a dual-path pseudo-Siamese structure that handles focused and defocused regions separately. Experimental results demonstrate the algorithm's effectiveness: on the Lytro dataset it achieves AG and EI values of 6.9 and 72.5, respectively, outperforming other methods, and its fusion metrics SD = 61.80, SF = 19.63, and VIF = 0.94 surpass existing algorithms, effectively resolving the boundary-blur problem and delivering better visual perception and broader applicability.
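The abstract does not include code, but the computational saving from asymmetric convolution can be illustrated with a minimal NumPy sketch (my own example, not the paper's implementation): a 3×3 convolution whose kernel is separable can be replaced exactly by a 3×1 convolution followed by a 1×3 convolution, cutting the per-output multiply count from 9 to 6.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D cross-correlation (no padding, stride 1)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))

v = rng.standard_normal((3, 1))  # 3x1 vertical kernel
h = rng.standard_normal((1, 3))  # 1x3 horizontal kernel

# A separable 3x3 kernel is the outer product of the two 1-D kernels.
full = v @ h  # shape (3, 3), 9 weights

out_full = conv2d_valid(img, full)                 # one 3x3 pass
out_sep = conv2d_valid(conv2d_valid(img, v), h)    # 3x1 then 1x3

# The two paths are numerically identical, with fewer weights (6 vs 9).
print(np.allclose(out_full, out_sep))
```

In a real network the asymmetric pair is trained directly rather than factored from an existing kernel, so it is a constrained (rank-1) approximation of a full 3×3 layer; the sketch only shows why the decomposition is cheaper while remaining expressive.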