Mural inpainting with generative adversarial networks based on multi-scale feature and attention fusion

Cited by: 0
Authors
Chen Y. [1,2]
Chen J. [1]
Tao M. [1]
Affiliations
[1] School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou
[2] Gansu Provincial Engineering Research Center for Artificial Intelligence and Graphics & Image Processing, Lanzhou
Funding
National Natural Science Foundation of China
Keywords
generative adversarial network; image reconstruction; multi-scale feature fusion; mural inpainting; self-attention mechanism;
DOI
10.13700/j.bh.1001-5965.2021.0242
Abstract
This study proposes a deep learning model for mural inpainting based on generative adversarial networks with multi-scale feature and attention fusion, addressing the insufficient feature extraction and detail loss of existing deep learning image inpainting algorithms during reconstruction. First, a multi-scale feature pyramid network is designed to extract feature information at different scales from mural images, which enhances the relevance of the features. Second, a multi-scale feature generator is constructed using the self-attention mechanism and a feature fusion module to obtain rich context information and improve the restoration ability of the network. Finally, the adversarial loss and the mean square error are introduced to promote the residual feedback of the discriminator, which completes the mural restoration by combining the feature information of different scales. Experimental results on the digital restoration of real Dunhuang murals show that the proposed algorithm effectively preserves important feature information such as edges and textures, and that its subjective visual effects and objective evaluation indicators are superior to those of the comparison algorithms. © 2023 Beijing University of Aeronautics and Astronautics (BUAA). All rights reserved.
Pages: 254-264
Page count: 10
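
The record above gives only a high-level description of the method, with no implementation details. As a rough illustration of the kind of generator components the abstract names (a multi-scale feature pyramid plus self-attention and feature fusion), the following PyTorch-style sketch shows one possible wiring. Module names, channel sizes, the SAGAN-style attention formulation, and the 1x1-convolution fusion are assumptions for illustration, not the authors' code.

```python
# Illustrative sketch only (not the authors' code): a multi-scale feature
# pyramid with self-attention on the coarsest scale and feature fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttention(nn.Module):
    """Self-attention over the spatial positions of a feature map (SAGAN-style)."""

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)       # (b, hw, c/8)
        k = self.key(x).flatten(2)                          # (b, c/8, hw)
        attn = torch.softmax(q @ k, dim=-1)                 # (b, hw, hw)
        v = self.value(x).flatten(2)                        # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)   # attend over positions
        return self.gamma * out + x                         # residual connection


class MultiScalePyramidEncoder(nn.Module):
    """Extracts features at three scales, applies self-attention to the
    coarsest scale, and fuses everything back at the input resolution."""

    def __init__(self, in_ch=3, base=32):
        super().__init__()
        self.s1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, 1, 1), nn.ReLU(inplace=True))
        self.s2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, 2, 1), nn.ReLU(inplace=True))
        self.s3 = nn.Sequential(nn.Conv2d(base * 2, base * 4, 3, 2, 1), nn.ReLU(inplace=True))
        self.attn = SelfAttention(base * 4)
        self.fuse = nn.Conv2d(base + base * 2 + base * 4, base * 4, 1)  # 1x1 fusion

    def forward(self, x):
        f1 = self.s1(x)               # full resolution
        f2 = self.s2(f1)              # 1/2 resolution
        f3 = self.attn(self.s3(f2))   # 1/4 resolution + self-attention
        size = f1.shape[-2:]
        f2_up = F.interpolate(f2, size=size, mode="bilinear", align_corners=False)
        f3_up = F.interpolate(f3, size=size, mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([f1, f2_up, f3_up], dim=1))  # fused multi-scale features


if __name__ == "__main__":
    feats = MultiScalePyramidEncoder()(torch.randn(1, 3, 256, 256))
    print(feats.shape)  # torch.Size([1, 128, 256, 256])
```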
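
The abstract also states that an adversarial loss is combined with a mean square error term. The exact loss variant and weighting are not given in this record, so the snippet below only illustrates the generic GAN-plus-reconstruction pattern; the non-saturating BCE formulation and the weight lambda_rec are assumptions.

```python
# Illustrative sketch only: adversarial loss combined with an MSE
# reconstruction term, as described in general terms in the abstract.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()  # standard (non-saturating) adversarial loss
mse = nn.MSELoss()            # pixel-wise reconstruction term


def generator_loss(disc_fake_logits, restored, ground_truth, lambda_rec=100.0):
    """Adversarial term (fool the discriminator) plus weighted MSE term.
    lambda_rec is an assumed weight, not a value from the paper."""
    adv = bce(disc_fake_logits, torch.ones_like(disc_fake_logits))
    rec = mse(restored, ground_truth)
    return adv + lambda_rec * rec


def discriminator_loss(disc_real_logits, disc_fake_logits):
    """Real samples pushed toward 1, generated samples toward 0."""
    real = bce(disc_real_logits, torch.ones_like(disc_real_logits))
    fake = bce(disc_fake_logits, torch.zeros_like(disc_fake_logits))
    return 0.5 * (real + fake)
```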