Image Inpainting Algorithm with Diverse Aggregation of Contextual Information

Cited: 0
Authors
Li H. [1 ]
Chao Y. [1 ]
Yu P. [1 ]
Li H. [1 ]
Zhang Y. [1 ]
Affiliations
[1] School of Information Science and Engineering, Yunnan University, Kunming
[2] Yunnan Communications Investment and Construction Group Company Limited, Kunming
Keywords
diverse aggregation of contextual information; encoding and decoding information fusion; image inpainting; mask matching discriminator;
DOI: 10.13190/j.jbupt.2021-317
Abstract
To address the structural distortion and blurry texture produced by existing algorithms when repairing images with large, irregular semantic missing areas, an image inpainting algorithm with diverse aggregation of contextual information is proposed. First, the encoder extracts information from the damaged image to estimate the missing content. Next, contextual information from receptive fields of different sizes is merged through a multi-information aggregation block to enhance the structure and texture information of the missing area. The decoder then restores the original image features. Finally, a mask matching discriminator performs discrimination training on the generated image, and the model is optimized with a combination of adversarial loss, reconstruction loss, perceptual loss and style loss to drive the generator to synthesize clear textures. The proposed algorithm is trained and tested on public datasets. Experimental results show that, when inpainting random irregular and large missing areas, it obtains clearer and more reasonable structure and texture details than state-of-the-art methods, and its objective indices such as peak signal-to-noise ratio and structural similarity are superior to those of the compared algorithms. © 2023 Beijing University of Posts and Telecommunications. All rights reserved.
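The abstract describes merging contextual information from receptive fields of different sizes. As an illustration only (the paper's actual architecture is not reproduced here), the sketch below implements this idea in NumPy with dilated convolutions: each dilation rate enlarges the receptive field, and the branches are fused by averaging. The dilation rates, the placeholder kernel, and the function names are all assumptions, not the authors' implementation.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation):
    """'Same'-padded dilated convolution of a 2-D array with a small kernel."""
    k = kernel.shape[0]
    pad = dilation * (k // 2)
    xp = np.pad(x, pad, mode="constant")     # zero-pad so output keeps x's shape
    out = np.zeros_like(x, dtype=float)
    for i in range(k):
        for j in range(k):
            # Taps are spaced `dilation` pixels apart, enlarging the receptive field.
            di, dj = i * dilation, j * dilation
            out += kernel[i, j] * xp[di:di + x.shape[0], dj:dj + x.shape[1]]
    return out

def aggregate_context(x, rates=(1, 2, 4, 8)):
    """Fuse features from several receptive-field sizes (hypothetical rates).

    Small rates capture fine texture; large rates capture global structure.
    """
    kernel = np.full((3, 3), 1.0 / 9.0)      # placeholder smoothing kernel
    branches = [dilated_conv2d(x, kernel, r) for r in rates]
    return np.mean(branches, axis=0)         # simple fusion by averaging
```

In the real model the branches would be learned convolutions fused by a trainable layer; averaging fixed kernels is used here only to make the multi-receptive-field aggregation concrete.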
Pages: 19-25
Number of pages: 6