Image Restoration via Deep Memory-Based Latent Attention Network

Cited: 6
Authors
Zhang, Xinyan [1 ,2 ]
Gao, Peng [3 ]
Zhao, Kongya [1 ,2 ]
Liu, Sunxiangyu [1 ,2 ]
Li, Guitao [1 ]
Yin, Liuguo [2 ]
Affiliations
[1] Tsinghua Univ, Sch Aerosp Engn, Beijing 100084, Peoples R China
[2] Beijing Natl Res Ctr Informat Sci & Technol, Beijing 100084, Peoples R China
[3] Peking Univ, Coll Engn, Beijing 100871, Peoples R China
Source
IEEE ACCESS | 2020, Vol. 8
Funding
National Natural Science Foundation of China;
Keywords
Image restoration; deep learning; deep memory-based network; latent attention block; SUPERRESOLUTION; ALGORITHM;
DOI
10.1109/ACCESS.2020.2999965
Chinese Library Classification (CLC)
TP [automation technology; computer technology];
Subject Classification Code
0812;
Abstract
Deep convolutional neural networks (CNNs) have achieved impressive results in image restoration. However, most deep CNN-based models make limited use of hierarchical features and treat those features equally, which restricts restoration performance. To address this issue, the present work proposes a novel memory-based latent attention network (MLANet) that aims to restore a high-quality image from a corresponding low-quality one. The key idea is a memory-based latent attention block (MLAB), which is stacked throughout MLANet and makes better use of global and local features across the network. Specifically, the MLAB contains a main branch and a latent branch: the former extracts local multi-level features, while the latter preserves global information through a latent structure. Furthermore, a multi-kernel attention module is incorporated into the latent branch to adaptively learn more effective features with mixed attention. To validate its effectiveness and generalization ability, MLANet is evaluated on three representative image restoration tasks: image super-resolution, image denoising, and image compression artifact reduction. Experimental results show that MLANet outperforms state-of-the-art methods on all three tasks.
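The abstract describes a two-branch block design but gives no implementation detail. Below is a minimal PyTorch sketch of what such a block could look like: a local main branch, a downsampled latent branch gated by multi-kernel attention, and residual fusion. All names (MultiKernelAttention, MLAB), layer counts, channel widths, and the fusion rule are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a memory-based latent attention block (MLAB),
# following the two-branch design described in the abstract. Layer
# counts, widths, and the fusion rule are assumptions for illustration.
import torch
import torch.nn as nn


class MultiKernelAttention(nn.Module):
    """Channel attention mixing several kernel sizes (assumed design)."""

    def __init__(self, channels: int, kernel_sizes=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, k, padding=k // 2)
             for k in kernel_sizes]
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels // 4, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Mix responses from different receptive fields, then gate channels.
        mixed = sum(branch(x) for branch in self.branches)
        return x * self.gate(self.pool(mixed))


class MLAB(nn.Module):
    """Two-branch block: local main branch plus global latent branch."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Main branch: extracts local multi-level features.
        self.main = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Latent branch: strided path keeping a global summary, refined
        # by multi-kernel attention and upsampled back to input size.
        self.latent = nn.Sequential(
            nn.Conv2d(channels, channels, 3, stride=2, padding=1),
            MultiKernelAttention(channels),
            nn.Upsample(scale_factor=2, mode="bilinear",
                        align_corners=False),
        )

    def forward(self, x):
        # Residual fusion of both branches; assumes an even spatial size
        # so the upsampled latent map matches x.
        return x + self.main(x) + self.latent(x)


if __name__ == "__main__":
    block = MLAB(64)
    y = block(torch.randn(1, 64, 48, 48))
    print(y.shape)  # torch.Size([1, 64, 48, 48])
```

In this sketch the identity term in the residual sum plays the role of the "memory" carried through stacked blocks; the real MLANet may fuse branches differently.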
Pages: 104728 - 104739
Page count: 12
Related Papers
50 records in total
  • [31] Input Voltage Mapping Optimized for Resistive Memory-Based Deep Neural Network Hardware
    Kim, Taesu
    Kim, Hyungjun
    Kim, Jinseok
    Kim, Jae-Joon
    IEEE ELECTRON DEVICE LETTERS, 2017, 38 (09) : 1228 - 1231
  • [32] Perceptual image hash function via associative memory-based self-correcting
    Li, Yuenan
    Wang, Dongdong
    Wang, Jingru
    ELECTRONICS LETTERS, 2018, 54 (04) : 208 - 210
  • [33] Interpretable Deep Attention Prior for Image Restoration and Enhancement
    He, Wei
    Uezato, Tatsumi
    Yokoya, Naoto
    IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING, 2023, 9 : 185 - 196
  • [34] Marlin: A Memory-Based Rack Area Network
    Tu, Cheng-Chun
    Lee, Chao-tang
    Chiueh, Tzi-cker
    TENTH 2014 ACM/IEEE SYMPOSIUM ON ARCHITECTURES FOR NETWORKING AND COMMUNICATIONS SYSTEMS (ANCS'14), 2014, : 125 - 135
  • [35] MemNet: A Persistent Memory Network for Image Restoration
    Tai, Ying
    Yang, Jian
    Liu, Xiaoming
    Xu, Chunyan
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 4549 - 4557
  • [36] Interpersonal memory-based guidance of attention is reduced for ingroup members
    He, Xun
    Lever, Anne G.
    Humphreys, Glyn W.
    EXPERIMENTAL BRAIN RESEARCH, 2011, 211 (3-4) : 429 - 438
  • [37] Hyperspectral image classification via deep network with attention mechanism and multigroup strategy
    Wang, Jun
    Sun, Jinyue
    Zhang, Erlei
    Zhang, Tian
    Yu, Kai
    Peng, Jinye
    EXPERT SYSTEMS WITH APPLICATIONS, 2023, 224
  • [38] Search for multiple targets: Evidence for memory-based control of attention
    Takeda, Yuji
    PSYCHONOMIC BULLETIN & REVIEW, 2004, 11 (01) : 71 - 76
  • [39] Preserved memory-based orienting of attention with impaired explicit memory in healthy ageing
    Salvato, Gerardo
    Patai, Eva Z.
    Nobre, Anna C.
    CORTEX, 2016, 74 : 67 - 78