MACFNet: multi-attention complementary fusion network for image denoising

Cited by: 2
Authors
Yu, Jiaolong [1 ]
Zhang, Juan [1 ]
Gao, Yongbin [1 ]
Affiliations
[1] Shanghai Univ Engn Sci, Sch Elect & Elect Engn, 333 Longteng Rd, Shanghai 201620, Peoples R China
Keywords
Image denoising; Convolutional neural network; Multi-attention mechanism; Complementary fusion; TRANSFORM; SPARSE; CNN;
DOI
10.1007/s10489-022-04313-6
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, thanks to the rapid development of deep convolutional neural networks, image denoising has made unprecedented progress. However, previous studies struggle to balance noise removal against the preservation of texture details, and can even introduce negative effects such as local blurring. To overcome these weaknesses, in this paper we propose an innovative multi-attention complementary fusion network (MACFNet) that restores delicate texture details while eliminating noise to the greatest extent. Specifically, the proposed MACFNet is mainly composed of several multi-attention complementary fusion modules (MACFMs). First, a feature extraction block (FEB) extracts basic features. Then, three different attention mechanisms, namely spatial attention (SA), channel attention (CA), and patch attention (PA), extract spatial-dimension, channel-dimension, and patch-dimension attention-aware features, respectively. In addition, we integrate the three attention mechanisms in an effective way: instead of direct concatenation, we design a subtle complementary fusion block (CFB) that adaptively incorporates the characteristics of the three sub-branches. Extensive experiments are carried out on gray-scale image denoising, color image denoising, and real noisy image denoising. Both the quantitative results (PSNR) and the visual effects show that the proposed network outperforms several state-of-the-art methods.
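The abstract outlines the module structure (an FEB feeding parallel SA, CA, and PA branches whose outputs are fused by a CFB) without giving low-level detail. The following minimal PyTorch sketch shows one plausible reading of a single MACFM; the layer sizes, the exact form of each attention branch (especially patch attention), the 1x1-convolution fusion, and the residual connection are all assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gating over channels (assumed form of CA)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))


class SpatialAttention(nn.Module):
    """Per-pixel gating from pooled channel statistics (assumed form of SA)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)        # channel-wise average map
        mx = x.max(dim=1, keepdim=True).values   # channel-wise max map
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class PatchAttention(nn.Module):
    """Coarse patch-level gating via pooling (hypothetical stand-in for PA)."""
    def __init__(self, channels: int, patch: int = 4):
        super().__init__()
        self.pool = nn.AvgPool2d(patch)          # one statistic per patch
        self.conv = nn.Conv2d(channels, channels, 1)
        self.up = nn.Upsample(scale_factor=patch, mode="nearest")

    def forward(self, x):
        # Input H and W are assumed divisible by the patch size.
        return x * torch.sigmoid(self.up(self.conv(self.pool(x))))


class MACFM(nn.Module):
    """One module: FEB -> three parallel attention branches -> fusion (CFB)."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.feb = nn.Sequential(                # feature extraction block
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.sa = SpatialAttention()
        self.ca = ChannelAttention(channels)
        self.pa = PatchAttention(channels)
        # A 1x1 conv over the concatenated branches stands in for the paper's
        # adaptive complementary fusion, which the abstract does not specify.
        self.cfb = nn.Conv2d(3 * channels, channels, 1)

    def forward(self, x):
        f = self.feb(x)
        fused = self.cfb(torch.cat([self.sa(f), self.ca(f), self.pa(f)], dim=1))
        return x + fused                         # residual connection (assumed)


if __name__ == "__main__":
    block = MACFM(channels=64)
    features = torch.randn(1, 64, 64, 64)       # B x C x H x W feature map
    print(block(features).shape)                 # torch.Size([1, 64, 64, 64])
```

Stacking several such modules, with the three branches reading the same FEB output in parallel rather than in sequence, matches the abstract's description of complementary rather than competing attention.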
Pages: 16747-16761
Page count: 15